The Debezium MySQL connector is losing the GTID binlog position every time after a restart. Wondering if there is a fix available for this?
"MySQL current GTID set 03e063ff-0fd2-11ec-b44a-42010a65001c:1-172385422,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644 does contain the GTID set required by the connector 03e063ff-0fd2-11ec-b44a-42010a65001c:168185238-171001552
Server has already purged 03e063ff-0fd2-11ec-b44a-42010a65001c:1-163307835,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644 GTIDs
GTIDs known by the server but not processed yet 03e063ff-0fd2-11ec-b44a-42010a65001c:1-168185237:171001553-172385422,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644, for replication are available only 03e063ff-0fd2-11ec-b44a-42010a65001c:163307836-168185237:171001553-172385422
Some of the GTIDs needed to replicate have been already purged
Stopping down connector"
Going through the chat, the same issue is described in the post below, which is archived. I cannot see the replies, so any help would be appreciated.
https://gitter.im/debezium/user/archives/2021/03/30
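(A minimal sketch of the checks that usually narrow this down, assuming a MySQL 8.0 source and shell access with the mysql client; host and user below are placeholders. Compare the server's purged GTIDs with what the connector needs, and check how long binlogs are kept.)
# Sketch only: inspect GTID and binlog retention on the MySQL source.
mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p \
  -e "SELECT @@GLOBAL.gtid_executed AS executed, @@GLOBAL.gtid_purged AS purged;"
mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p \
  -e "SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';"
# If the connector can be down longer than this window, the GTIDs it needs get purged.
# Example: extend retention to 7 days (use expire_logs_days on MySQL 5.7 instead).
mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p \
  -e "SET GLOBAL binlog_expire_logs_seconds = 604800;"
If the required position has already been purged, streaming cannot resume from it; as far as I know the usual way out is a fresh snapshot (for example snapshot.mode=when_needed) or accepting the gap.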
Target: manually sending a 7th offset with the same key/content as the 2nd offset to the offset topic, in order to re-read messages from Postgres from offsets 2 to 6.
Reality: the connector only reads the 6th offset (the last one), then keeps going.
Can anyone tell me what I did wrong here? Here is my connector config:
{
  "name": "rev_msa_mylgdb_local_1",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "localhost",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "msa_mylgdb",
    "database.server.name": "REV_msa_mylgdb",
    "table.include.list": "smartux.pt_ux_pairing",
    "plugin.name": "pgoutput",
    "snapshot.mode": "never",
    "decimal.handling.mode": "double",
    "time.precision.mode": "connect",
    "binary.handling.mode": "hex",
    "datatype.propagate.source.type": ".+\\.BYTEA",
    "slot.name": "rev_msa_mylgdb_1",
    "tombstones.on.delete": "false"
  }
}
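(A hedged sketch of how this rewind is usually done manually, in case the missing piece is the restart: Connect only reads the offset topic when a task starts, the last record per key wins, and the key must match the stored one byte for byte so it lands in the same partition. Broker address, offset topic name, and the offset JSON below are placeholders based on this config; copy the real key and value from the 2nd record in your offset topic.)
# Sketch only: rewinding a source offset by hand.
# 1. Remove the connector first; a running task will not pick up a new offset record.
curl -s -X DELETE http://localhost:8083/connectors/rev_msa_mylgdb_local_1
# 2. Produce the old offset content under the connector's exact offset key.
echo '["rev_msa_mylgdb_local_1",{"server":"REV_msa_mylgdb"}]#{"lsn":23905872,"txId":565,"ts_usec":1650000000000000}' | \
  kafka-console-producer --bootstrap-server localhost:9092 \
    --topic connect-offsets \
    --property parse.key=true --property key.separator='#'
# 3. Re-register the connector so the new task re-reads the offset topic on start.
curl -s -X POST -H "Content-Type: application/json" --data @connector.json http://localhost:8083/connectors
One caveat, as far as I understand it: even with the offset rewound, a pgoutput replication slot will not resend WAL it has already confirmed, so whether events 2 to 6 actually come back depends on the slot's confirmed position, not only on the offset topic.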
Diwakar Mishra 1:17 AM
Hi All
I'm using the Debezium MySQL connector with Amazon MSK Connect. It works fine, but when the connector restarts it starts giving the error "Skipping invalid database history record", ... "This is often not an issue, but if it happens repeatedly please check the 'mysql-database.db_name.table_name' topic. (io.debezium.relational.history.KafkaDatabaseHistory:306)".
It gives that error continuously, and I need to create a new connector every time after a restart.
Could anyone please help me find a solution for this?
Thanks
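(Not an answer from the thread, but one thing this message often points at, hedged: the database history topic being misconfigured, for example more than one partition or retention that deletes old records, or two connectors sharing one history topic. A sketch of the settings Debezium expects, with placeholder names:)
# Sketch only: the database history topic needs exactly one partition and must never
# expire records. Topic name and bootstrap servers are placeholders; match the
# connector's database.history.kafka.topic setting.
kafka-topics --bootstrap-server "$MSK_BOOTSTRAP" --create \
  --topic dbhistory.mysql-database \
  --partitions 1 --replication-factor 3 \
  --config cleanup.policy=delete \
  --config retention.ms=-1 \
  --config retention.bytes=-1
If the existing history topic has already lost records, recreating it and restarting the connector with snapshot.mode=schema_only_recovery is usually a cheaper way out than registering a brand-new connector after every restart.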
Hello,
I am using debezium-connector-postgres with Azure Postgres. However, when I try to perform the PUT to create the connector, I always get the following error:
JdbcConnectionException: ERROR: replication slot "debezium" already exists
The problem is that before performing the POST I ran the following query: select * from pg_replication_slots;
And it was empty before the POST. So I don't understand why it created the replication slot and then complains that it already exists. Maybe it has something to do with the fact that I'm using Azure Postgres. Any help?
Thank you
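(A small sketch of how to check and clean this up by hand, assuming psql access; the slot name comes from the error message and the connection parameters are placeholders. Two things worth checking: the query has to run against the same server and database the connector targets, and "debezium" is the default slot.name, so two connectors without an explicit slot.name will collide.)
# Sketch only: list slots on the database the connector points at, then drop the
# leftover one if nothing is actively streaming from it.
psql "host=$PG_HOST dbname=$PG_DB user=$PG_USER" \
  -c "SELECT slot_name, active, active_pid FROM pg_replication_slots;"
psql "host=$PG_HOST dbname=$PG_DB user=$PG_USER" \
  -c "SELECT pg_drop_replication_slot('debezium');"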
We had a Debezium connector failure related to "ParsingException: DDL statement couldn't be parsed".
Here is the full error message:
{
"name": "prd2_cdc_ticket_v2",
"connector": {
"state": "RUNNING",
"worker_id": "papp-confluent-connect3a.42.wixprod.net:8083"
},
"tasks": [
{
"id": 0,
"state": "FAILED",
"worker_id": "papp-confluent-connect1b.42.wixprod.net:8083",
"trace": "io.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement 'wix_connect
.tickets
to wix_connect
._tickets_del
'\nmismatched input 'wix_connect
' expecting {<EOF>, 'ALTER', 'ANALYZE', 'CALL', 'CHANGE', 'CHECK', 'CREATE', 'DELETE', 'DESC', 'DESCRIBE', 'DROP', 'EXPLAIN', 'GET', 'GRANT', 'INSERT', 'KILL', 'LOAD', 'LOCK', 'OPTIMIZE', 'PURGE', 'RELEASE', 'RENAME', 'REPLACE', 'RESIGNAL', 'REVOKE', 'SELECT', 'SET', 'SHOW', 'SIGNAL', 'UNLOCK', 'UPDATE', 'USE', 'BEGIN', 'BINLOG', 'CACHE', 'CHECKSUM', 'COMMIT', 'DEALLOCATE', 'DO', 'FLUSH', 'HANDLER', 'HELP', 'INSTALL', 'PREPARE', 'REPAIR', 'RESET', 'ROLLBACK', 'SAVEPOINT', 'START', 'STOP', 'TRUNCATE', 'UNINSTALL', 'XA', 'EXECUTE', 'SHUTDOWN', '--', '(', ';'}\n\tat io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:43)\n\tat org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)\n\tat org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.reportInputMismatch(DefaultErrorStrategy.java:327)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:139)\n\tat io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:905)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:72)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:45)\n\tat io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:80)\n\tat io.debezium.relational.history.AbstractDatabaseHistory.lambda$recover$1(AbstractDatabaseHistory.java:134)\n\tat io.debezium.relational.history.KafkaDatabaseHistory.recoverRecords(KafkaDatabaseHistory.java:307)\n\tat io.debezium.relational.history.AbstractDatabaseHistory.recover(AbstractDatabaseHistory.java:101)\n\tat io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:49)\n\tat io.debezium.connector.mysql.MySqlConnectorTask.validateAndLoadDatabaseHistory(MySqlConnectorTask.java:311)\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:96)\n\tat io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: org.antlr.v4.runtime.InputMismatchException\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270)\n\tat io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:880)\n\t... 18 more\n"
}
],
"type": "source"
}
Eventually we had to recreate the connector with a new name and skip the snapshot phase (huge table). We ended up losing about 2h of binlog records.
What options do we have to recover the lost timeframe? The binlog messages are still in the binlog, as binlog retention is 3 months.
However, we only need to load the lost timeframe.
Any advice would be much appreciated.
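(Not from this thread, just a hedged sketch of one way to replay a bounded window while the binlog is still available: seed the offset topic for a connector that does not exist yet with a position just before the gap, then let it stream forward. Connector name, server name, topic, and offset fields below are placeholders; copy the record structure from a real offset in your topic and fill in the binlog file/position or GTID set right before the lost window.)
# Sketch only: write a starting offset for a not-yet-registered connector so that its
# first task start resumes from the seeded binlog position instead of "now".
echo '["prd2_cdc_ticket_v3",{"server":"prd2"}]#{"file":"mysql-bin.001234","pos":4,"gtids":"<gtid set just before the gap>"}' | \
  kafka-console-producer --bootstrap-server "$BOOTSTRAP" \
    --topic connect-offsets \
    --property parse.key=true --property key.separator='#'
# Then register the connector under that name with snapshot.mode=schema_only_recovery
# so it rebuilds the schema history without snapshotting the huge table, and delete it
# once it has streamed past the point where the replacement connector started.
Expect some overlap with what the replacement connector already produced for that period, so downstream consumers would need to tolerate or de-duplicate those events.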