Please join the Debezium community on Zulip (https://debezium.zulipchat.com). This room is not used any longer.
@Naros I was able to connect to the database, but now I am facing another problem: the connector is running a 'setSessionToPdb' function and I'm getting the error below:
Caused by: Error: 2248, Position: 18, Sql = alter session set container = DBZUSER, OriginalSql = alter session set container = DBZUSER, Error Msg = ORA-02248: invalid option for ALTER SESSION
Check the database.pdb.name configuration option.
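The ORA-02248 above suggests the container is being set to the connector user (DBZUSER) rather than to a pluggable database. As a minimal sketch only, with placeholder host, credentials, and CDB/PDB names (ORCLCDB/ORCLPDB1 are assumptions, not taken from your setup), the relevant part of the config would look like:
{
  "connector.class": "io.debezium.connector.oracle.OracleConnector",
  "database.hostname": "<oracle_host>",
  "database.port": "1521",
  "database.user": "DBZUSER",
  "database.password": "<password>",
  "database.dbname": "ORCLCDB",
  "database.server.name": "oracle_server",
  "database.pdb.name": "ORCLPDB1"
}
The key point is that database.pdb.name names the PDB the connector should switch to, so it should not be the same value as database.user unless your PDB really is called that.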
Great. Thank you for your input.
In terms of poll.interval.ms, I'm not sure what the expected impact might be. Should I increase it or decrease it, and what for?
Hi there. I've seen the same issue reported several times here, but still no solution.
My Debezium MySQL connector v1.0.0.Final "loses" its binlog position after every restart.
I always receive an exception on each restart:
The connector is trying to read binlog starting at GTIDs 62b708e1-4916-11ea-af5e-42010a4ca04c:5196111034-5227084359 and binlog file 'mysql-bin.088385', pos=46414474, skipping 2 events plus 1 rows, but this is no longer available on the server. Reconfigure the connector to use a snapshot when needed.
... Stack trace here
But we always fix it by editing the offsets topic: we take the last record in that topic and republish it, changing gtids to null. Then it works, with no gaps in the data (we have never seen any).
i.e. record from offsets topic: {"ts_sec":1617071154,"file":"mysql-bin.088421","pos":59304882,"gtids":"62b708e1-4916-11ea-af5e-42010a4ca04c:5228299190-5228458870","row":9,"server_id":953753795,"event":98685}
After edit (restarts fine): {"ts_sec":1617071154,"file":"mysql-bin.088421","pos":59304882,"gtids":null,"row":9,"server_id":953753795,"event":98685}
Sometimes that's not enough, and then we additionally set event to 0.
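For anyone wanting to reproduce the workaround, the record on the offsets topic has roughly the shape below. This is only a sketch: the key format assumes the usual JSON internal converter, the connector name mysql_connector_google_stage is taken from the logs below, and the logical server name is a placeholder; the edited value is republished under the same key.
key:            ["mysql_connector_google_stage",{"server":"<logical_server_name>"}]
value (before): {"ts_sec":1617071154,"file":"mysql-bin.088421","pos":59304882,"gtids":"62b708e1-4916-11ea-af5e-42010a4ca04c:5228299190-5228458870","row":9,"server_id":953753795,"event":98685}
value (after):  {"ts_sec":1617071154,"file":"mysql-bin.088421","pos":59304882,"gtids":null,"row":9,"server_id":953753795,"event":98685}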
[2021-03-29 21:29:08,404] INFO [Consumer clientId=mysql_connector_google_stage-dbhistory, groupId=mysql_connector_google_stage-dbhistory] Member mysql_connector_google_stage-dbhistory-ee0bbb50-fad0-49e0-9552-38924857d23a sending LeaveGroup request to coordinator 116.202.81.235:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:879)
[2021-03-29 21:29:08,624] INFO MySQL current GTID set 3b16c742-1183-11e8-8cb4-3497f65a102f:1-9556115497,443731d5-cf5e-11e7-9479-2c4d54466ca9:1-282778390,51aa0127-3381-11ea-8a7d-e4434b9771b8:1-1520234595,62b708e1-4916-11ea-af5e-42010a4ca04c:1-5227227153,7ffd66b7-3701-11ea-a965-e4434b96a6c8:1-49776,97ab08ad-2487-11ea-971f-42010a9c008f:1-5271:5273-196197,a206b385-291a-11ea-9eb7-42010a9c0060:1-179:181-391420 does contain the GTID set required by the connector 62b708e1-4916-11ea-af5e-42010a4ca04c:5196111034-5227084359 (io.debezium.connector.mysql.MySqlConnectorTask:512)
[2021-03-29 21:29:08,629] INFO GTIDs known by the server but not processed yet 3b16c742-1183-11e8-8cb4-3497f65a102f:1-9556115497,443731d5-cf5e-11e7-9479-2c4d54466ca9:1-282778390,51aa0127-3381-11ea-8a7d-e4434b9771b8:1-1520234595,62b708e1-4916-11ea-af5e-42010a4ca04c:1-5196111033:5227084360-5227227153,7ffd66b7-3701-11ea-a965-e4434b96a6c8:1-49776,97ab08ad-2487-11ea-971f-42010a9c008f:1-5271:5273-196197,a206b385-291a-11ea-9eb7-42010a9c0060:1-179:181-391420, for replication are available only 62b708e1-4916-11ea-af5e-42010a4ca04c:5139615322-5196111033:5227084360-5227227153 (io.debezium.connector.mysql.MySqlConnectorTask:517)
[2021-03-29 21:29:08,630] INFO Some of the GTIDs needed to replicate have been already purged (io.debezium.connector.mysql.MySqlConnectorTask:519)
[2021-03-29 21:29:08,630] INFO Stopping MySQL connector task (io.debezium.connector.mysql.MySqlConnectorTask:446)
[2021-03-29 21:29:08,630] INFO WorkerSourceTask{id=mysql_connector_google_stage-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:398)
[2021-03-29 21:29:08,630] INFO WorkerSourceTask{id=mysql_connector_google_stage-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:415)
[2021-03-29 21:29:08,630] ERROR WorkerSourceTask{id=mysql_connector_google_stage-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: The connector is trying to read binlog starting at GTIDs 62b708e1-4916-11ea-af5e-42010a4ca04c:5196111034-5227084359 and binlog file 'mysql-bin.088385', pos=46414474, skipping 2 events plus 1 rows, but this is no longer available on the server. Reconfigure the connector to use a snapshot when needed.
at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:132)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:49)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:199)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
[2021-03-29 21:29:08,631] ERROR WorkerSourceTask{id=mysql_connector_google_stage-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
[2021-03-29 21:29:08,631] INFO Stopping MySQL connector task (io.debezium.connector.mysql.MySqlConnectorTask:446)
[2021-03-29 21:29:08,631] INFO [Producer clientId=connector-producer-mysql_connector_google_stage-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1153)
So the 62b708e1-4916-11ea-af5e-42010a4ca04c GTIDs have been purged before the connector started? It seems to me there is a gap in the GTID range there.
It says:
MySQL has 1-5227227153 offsets for this GTID.
The last committed offsets range I have for this GTID is 5196111034-5227084359.
And that fits entirely within the available offsets.
So why can it not read the binlog from 5227084359 (the end of the last committed range)?
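Reading the log lines above side by side may help; this is only a restatement of what the connector itself logged, plus one subtraction:
recorded by the connector (already processed):   62b708e1-4916-11ea-af5e-42010a4ca04c:5196111034-5227084359
known by the server, not yet processed:          62b708e1-4916-11ea-af5e-42010a4ca04c:1-5196111033 and 5227084360-5227227153
still available for replication on the server:   62b708e1-4916-11ea-af5e-42010a4ca04c:5139615322-5196111033 and 5227084360-5227227153
So the not-yet-processed range 1-5139615321 is no longer on the server, which appears to be what the "Some of the GTIDs needed to replicate have been already purged" line refers to; the check concerns GTIDs the connector has not processed yet, not the range it has already committed.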
"initial - the connector runs a snapshot only when no offsets have been recorded for the logical server name."
Can you elaborate a bit on "no offsets have been recorded for the logical server name"?
initial is the default mode, where the connector on startup checks whether we've recorded any offsets in Kafka. If no offsets exist, we proceed with Snapshot -> Streaming. If offsets are found, we skip Snapshot and go right into Streaming.
There is a SnapshotMode enum value called initial_only. You could follow that as a guide for what to do for MySQL.
true/false.
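To make the snapshot.mode discussion above concrete, here is a minimal sketch with a placeholder value: offsets are recorded in Kafka per logical server name, so initial only takes a snapshot when nothing has been recorded under that name yet.
snapshot.mode=initial
# offsets are keyed by the logical server name below; if none exist yet, the connector
# snapshots and then streams, otherwise it skips the snapshot and streams immediately
database.server.name=<logical_server_name>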
internal.implementation=legacy is available as a connector option, and it's still a beta feature. We're hoping to have more time to dedicate to better snapshot options across all connectors in a release later this year.
TypeRegistry, particularly if you have lots of custom data types.
Hello,
I set up a MySQL source connector for a couple of our tables, and it seems as though the topics aren't being created. I do see the history topic created, but no table topics.
On checking WorkerSourceTask logging, I got the below message.
WorkerSourceTask{id=<connector_name>-0} flushing 0 outstanding messages for offset commit
Here is my config for this connector.
connector.class=io.debezium.connector.mysql.MySqlConnector
snapshot.locking.mode=minimal
transforms.unwrap.delete.handling.mode=rewrite
transforms.AddPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
tasks.max=1
database.history.kafka.topic=<prod_logical_name>_historic_v51
transforms=unwrap,dropPrefix,AddPrefix
transforms.dropPrefix.regex=prod_logical_name.<db_name>.(.*)
table.whitelist=db_name.table_name_01,db_name.table_name_02
transforms.AddPrefix.replacement=<prod_analytics>_$0
database.jdbc.driver=com.mysql.cj.jdbc.Driver
decimal.handling.mode=double
transforms.AddPrefix.regex=.*
snapshot.new.tables=parallel
offset_flush_timeout_ms=10000
database.history.skip.unparseable.ddl=true
heartbeat.topics.prefix=debezium-heartbeat
transforms.unwrap.type=io.debezium.transforms.UnwrapFromEnvelope
database.whitelist=<db_name>
snapshot.fetch.size=100000
transforms.dropPrefix.replacement=$1
bigint.unsigned.handling.mode=long
database.user=kafka_readonly
database.server.id=5501
database.history.kafka.bootstrap.servers=<host>:9092
time.precision.mode=connect
database.server.name=<prod_logical_name>
errors.retry.delay.max.ms=60000
transforms.dropPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
database.port=3306
inconsistent.schema.handling.mode=warn
offset_flush_interval_ms=60000
database.serverTimezone=UTC
database.hostname=<db_host>
database.password=<db_pwd>
errors.tolerance=all
database.history=io.debezium.relational.history.KafkaDatabaseHistory
snapshot.mode=initial
Unfortunately, this is in production, so this connector follows the others in terms of configuration. But I'll see if I can just use a simpler config for the time being.
I should also mention that we are missing a few tables in other connectors, so it's either the transformations or the older version, as you suggest.
I'll report back on what I find.
{
"name": "shipment-order-connector",
"config": {
"connector.class": "io.debezium.connector.postgresql.PostgresConnector",
"tasks.max": "1",
"database.hostname": "{host}",
"database.port": "5432",
"database.user": "{user}",
"database.password": "{password}",
"database.dbname": "esprinter",
"database.server.name": "shipment_order",
"database.sslmode": "require",
"database.tcpKeepAlive": "true",
"slot.name": "shipment_order",
"plugin.name": "wal2json_streaming",
"snapshot.mode": "never",
"schema.include.list": "{schema}",
"table.include.list": "{tables}",
"transforms": "replaceField,routeRecords,SetSchemaMetadata",
"transforms.replaceField.type": "org.apache.kafka.connect.transforms.ReplaceField$Key",
"transforms.replaceField.blacklist": "id",
"transforms.SetSchemaMetadata.type": "org.apache.kafka.connect.transforms.SetSchemaMetadata$Key",
"transforms.SetSchemaMetadata.schema.name": "shipment_order.esprinter_data.shipment_order.Key",
"transforms.SetSchemaMetadata.schema.version": "1",
"transforms.routeRecords.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.routeRecords.regex": "(.*)",
"transforms.routeRecords.replacement": "debezium.esprinter_data.shipment_order_all"
}
}
Hello Debezium devs,
We were considering building a Quarkus microservice with the outbox pattern and found out that there is a dedicated Quarkus extension to support the outbox pattern.
Unfortunately, the extension documentation states that this extension is in an incubating state. Are there any near-future plans to release this extension?
See whether snapshot.mode has something like always to always generate a snapshot. If it doesn't have such a setting, then you'd need to either change the connector's name or remove the offsets from Kafka so that the snapshot happens again.
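For instance (a sketch only, showing just the relevant lines with placeholder names): the Postgres connector supports snapshot.mode set to always, and for connectors without such a mode, registering the same config under a new connector name means no offsets are found for it, which also forces a fresh snapshot.
{
  "name": "my-connector-v2",
  "config": {
    "snapshot.mode": "always"
  }
}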
@Naros -
Hi Chris, so I created a new connector using the very basic config from the tutorial in the Debezium documentation, and I'm getting the below error from the task.
Caused by: org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data from topic
The schema for this table has been unchanged since inception (it's a lookup table with just 2 columns; I wanted to test with the lookup table first).
Any pointers as to what could cause this?
connector.class=io.debezium.connector.mysql.MySqlConnector
database.user=<db_user>
database.server.id=5510
tasks.max=1
database.history.kafka.bootstrap.servers=<host>:9092
database.history.kafka.topic=historic_topic_v54
database.server.name=<logical_server_name>
database.port=3306
table.whitelist=<db_name>.<table_name>
database.hostname=<db_host>
database.whitelist=<db_name>
I think it's the new history topic; I might have masked it here. I changed server.id and history.topic as well. But let me try a new history topic, just to confirm.
it's ok to update the existing connector's history topic, right?
Lazy internet question: I'm setting up a Dockerfile for Debezium Connect using Oracle (logminer) and Azure EventHubs (just to give the overall picture).
I know I need to put the ojdbc.jar and instantclient in the image, but since I'm using LogMiner, do I need to copy the xstreams.jar over to libs? I expect not but wanted to double check.
Also... looking here:
https://github.com/debezium/debezium-examples/blob/master/tutorial/debezium-with-oracle-jdbc/Dockerfile
Do I need to yum install libaio?
xstream.jar is not required to run the Oracle connector when using the LogMiner adapter.
The plan is for the debezium/connect image to eventually have Oracle baked in like other connectors, with no need to add any jars.
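In the meantime, a rough sketch of such a Dockerfile, loosely modeled on the linked example; the base image tag, local paths, and driver jar name are placeholders, and the libaio step is there because the Oracle Instant Client generally needs it at runtime:
FROM debezium/connect:1.5
USER root
# the Oracle Instant Client shared libraries need libaio
RUN yum -y install libaio && yum clean all
USER kafka
# copy the Instant Client and the JDBC driver into the image;
# xstream.jar is left out because the LogMiner adapter does not need it
COPY oracle/instantclient/ /instant_client/
COPY oracle/ojdbc8.jar /kafka/libs/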
@Naros
Chris, so I ended up creating a brand new connector with a new history topic, server.id, etc., but I'm still receiving the same error as above.
I checked in the schema registry as well (using curl -X GET http://localhost:8081/subjects) and confirmed that the table I'm trying to add has no subject in there.
You can set database.characterEncoding=latin as described in https://debezium.io/documentation/reference/connectors/mysql.html#mysql-pass-through-properties-for-database-drivers
Starting snapshot for jdbc:mysql://my_host:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&