Jiri Pechanec
@jpechane
@zxxz Debezium Community migrated to https://debezium.zulipchat.com/
Phil
@philipsamuel
Hey @jpechane - is there a way to pass -Dlog4j2.formatMsgNoLookups=true to Kafka Connect to mitigate the log4j zero-day vulnerability?
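(If Connect is started via the standard Kafka scripts, extra JVM system properties can usually be passed through the KAFKA_OPTS environment variable; a minimal sketch, noting that the flag only has an effect if log4j 2.x is actually on the classpath:)

export KAFKA_OPTS="-Dlog4j2.formatMsgNoLookups=true"
bin/connect-distributed.sh config/connect-distributed.properties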
Jiri Pechanec
@jpechane
@philipsamuel Debezium Community migrated to https://debezium.zulipchat.com/
Gunnar Morling
@gunnarmorling
Jiri Pechanec
@jpechane
@gunnarmorling Debezium Community migrated to https://debezium.zulipchat.com/
Gunnar Morling
@gunnarmorling
Kafka Connect does not use log4j 2.x
Jiri Pechanec
@jpechane
@gunnarmorling Debezium Community migrated to https://debezium.zulipchat.com/
RadioGuy
@RadioGuy
Is debezium-postgres 1.4.x impacted by CVE-2021 security vulnerability with log4j ?
Also does it use jms appender by any chance ?
Jiri Pechanec
@jpechane
@RadioGuy Debezium Community migrated to https://debezium.zulipchat.com/
hkokay
@h_kokay_twitter
@jpechane I'm trying to execute an ad-hoc snapshot. I inserted a record into the signal table but the snapshot does not get triggered. I'm using Postgres RDS as the source.
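(For reference, in recent Debezium versions the signal-based ad-hoc snapshot also needs the connector to be pointed at the signal table via signal.data.collection; a minimal sketch, with schema and table names here being placeholders:)

"signal.data.collection": "public.debezium_signal"

-- the signal table has columns id, type and data; this insert requests the snapshot
INSERT INTO debezium_signal (id, type, data)
VALUES ('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["public.my_table"]}');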
GitHubZhangH
@GitHubZhangH
image.png
cuongtl1992
@cuongtl1992
Hi Everyone
I'm seeing a problem with Debezium 1.6.1.Final: sometimes the task status is RUNNING but the connector isn't actually working; after I restart Kafka Connect it works again, and when I check the log file I don't see any error log.
I'm using Debezium with outbox event routing. Has anyone run into the same problem?
Vladislav Borisov
@Sherybedrock_twitter
Hello :) how are you dealing with parsing exceptions on DDL?
database.history.skip.unparseable.ddl: true
is a hot fix
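(That option does exist on the history-based connectors and can be set in the connector config, though skipping unparseable DDL can leave the recorded schema out of sync with the database, so it is more of a workaround than a fix:)

"database.history.skip.unparseable.ddl": "true"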
vaibhav pandey
@vaibhavpandey2000
@jpechane could you help me set up the MySQL Debezium connector? I have tried for 2 days but I am unable to get it working.
vaibhav pandey
@vaibhavpandey2000
When I start my connector it takes a snapshot of the table, but after the snapshot it throws an error like "Error during binlog processing. Last offset stored = null, binlog reader near position = mysql-bin-changelog.040423/44985833". How can I resolve it?
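(For the setup question above, a minimal MySQL connector registration along the lines of the Debezium 1.x tutorial may help as a baseline; host names, credentials, server id and topic names here are placeholders:)

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}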
jsraoghub
@jsraoghub

The Debezium MySQL connector is losing the GTID binlog position every time after a restart. Wondering if there is a fix available for this?

"MySQL current GTID set 03e063ff-0fd2-11ec-b44a-42010a65001c:1-172385422,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644 does contain the GTID set required by the connector 03e063ff-0fd2-11ec-b44a-42010a65001c:168185238-171001552
Server has already purged 03e063ff-0fd2-11ec-b44a-42010a65001c:1-163307835,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644 GTIDs
GTIDs known by the server but not processed yet 03e063ff-0fd2-11ec-b44a-42010a65001c:1-168185237:171001553-172385422,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644, for replication are available only 03e063ff-0fd2-11ec-b44a-42010a65001c:163307836-168185237:171001553-172385422
Some of the GTIDs needed to replicate have been already purged
Stopping down connector"

Going through the chat, the same issue is described in the post below, which is archived, so I cannot see the replies. Any help would be appreciated.
https://gitter.im/debezium/user/archives/2021/03/30

WangMinChao
@minchowang
image.png
Phuong Hai Nguyen
@nhp.0712_gitlab

Screen Shot 2022-04-25 at 2.04.46 PM.png
Target: manually send a 7th offset with the same key/content as the 2nd offset to the offset topic, in order to re-read messages from Postgres from offsets 2 to 6.
Reality: the connector only reads the 6th offset (the last one), then keeps going.

Can anyone tell me what I did wrong here? Here is my connector config:
{
  "name": "rev_msa_mylgdb_local_1",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "localhost",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "msa_mylgdb",
    "database.server.name": "REV_msa_mylgdb",
    "table.include.list": "smartux.pt_ux_pairing",
    "plugin.name": "pgoutput",
    "snapshot.mode": "never",
    "decimal.handling.mode": "double",
    "time.precision.mode": "connect",
    "binary.handling.mode": "hex",
    "datatype.propagate.source.type": ".+\\.BYTEA",
    "slot.name": "rev_msa_mylgdb_1",
    "tombstones.on.delete": "false"
  }
}

Diwakar1997
@Diwakar1997

Diwakar Mishra, 1:17 AM
Hi All
I'm using the Debezium MySQL connector with Amazon MSK Connect. It works fine, but when the connector restarts it starts giving the error "Skipping invalid database history record", ... "This is often not an issue, but if it happens repeatedly please check the 'mysql-database.db_name.table_name' topic. (io.debezium.relational.history.KafkaDatabaseHistory:306)".

And it continuously gives that error, so I need to create a new connector every time after a restart.
Could anyone please help me find a solution for this?
Thanks

HugoMRAmaro
@HugoMRAmaro

Hello,
I am using debezium-connector-postgres with Azure Postgres. However, when I try to perform the PUT to create the connector I always get the following error:
JdbcConnectionException: ERROR: replication slot "debezium" already exists
The problem is that before performing the POST I ran the following query: select * from pg_replication_slots;
and it was empty before the POST. So I don't understand why it created the replication slot and then complains that it already exists. Maybe it has something to do with the fact that I'm using Azure Postgres. Any help?

Thank you
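(If the slot really is left over on the server side, it can be inspected and, once it is no longer active, dropped manually, or the connector can be given its own uniquely named slot; a sketch, where the slot name shown is just the Debezium default and the replacement name is made up:)

select slot_name, active from pg_replication_slots;
select pg_drop_replication_slot('debezium');  -- fails if the slot is still active

"slot.name": "rev_debezium_slot_1"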

Jun Wan
@jwan3
Hi, we just upgraded Debezium from 1.4 to 1.8. With Debezium 1.4 the MongoDB snapshot works fine, but with 1.8 we found that it only ingests 1-2 MB of data and then stops ingesting; maybe the snapshot stops? Do you have an idea what is happening? Thank you!
Artsiom Yudovin
@ayudovin
Hi, I have an issue when I try to register a MySQL connector: javax.management.InstanceAlreadyExistsException: debezium
The Kafka Connect version is 6.2.4 and the Debezium version is 1.9.2. Could anyone help find the cause?
vaibhav pandey
@vaibhavpandey2000
How can we generate ts_usec in place of ts_ms in the source object? Anyone have any idea?
Maxim Makarov
@maxpain
Hello. Is it possible to somehow drop the "source" metadata field from messages?
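(One common approach, not specific to Debezium, is Kafka Connect's ReplaceField SMT in the connector config; a sketch, assuming a Connect version where the option is still named blacklist (newer releases call it exclude):)

"transforms": "dropSource",
"transforms.dropSource.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
"transforms.dropSource.blacklist": "source"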
ahvahsky2008
@ahvahsky2008
Hi guys, how do I detect when the schema has changed in a source table?
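(With the MySQL connector, one option is to leave schema change events enabled; Debezium then writes a change event for each DDL statement to a topic named after database.server.name, which can be consumed to detect schema changes. A sketch:)

"include.schema.changes": "true"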
Assaf Avissar Koren
@AssafAvissarKoren
I need to transfer a MySQL binlog -> Kafka via Debezium; any recommendations for a GitHub repository?
Assaf Avissar Koren
@AssafAvissarKoren

We had a Debezium connector failure related to "ParsingException: DDL statement couldn't be parsed".

here is the full error message:
{
"name": "prd2_cdc_ticket_v2",
"connector": {
"state": "RUNNING",
"worker_id": "papp-confluent-connect3a.42.wixprod.net:8083"
},
"tasks": [
{
"id": 0,
"state": "FAILED",
"worker_id": "papp-confluent-connect1b.42.wixprod.net:8083",
"trace": "io.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement 'wix_connect.tickets to wix_connect._tickets_del'\nmismatched input 'wix_connect' expecting {<EOF>, 'ALTER', 'ANALYZE', 'CALL', 'CHANGE', 'CHECK', 'CREATE', 'DELETE', 'DESC', 'DESCRIBE', 'DROP', 'EXPLAIN', 'GET', 'GRANT', 'INSERT', 'KILL', 'LOAD', 'LOCK', 'OPTIMIZE', 'PURGE', 'RELEASE', 'RENAME', 'REPLACE', 'RESIGNAL', 'REVOKE', 'SELECT', 'SET', 'SHOW', 'SIGNAL', 'UNLOCK', 'UPDATE', 'USE', 'BEGIN', 'BINLOG', 'CACHE', 'CHECKSUM', 'COMMIT', 'DEALLOCATE', 'DO', 'FLUSH', 'HANDLER', 'HELP', 'INSTALL', 'PREPARE', 'REPAIR', 'RESET', 'ROLLBACK', 'SAVEPOINT', 'START', 'STOP', 'TRUNCATE', 'UNINSTALL', 'XA', 'EXECUTE', 'SHUTDOWN', '--', '(', ';'}\n\tat io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:43)\n\tat org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)\n\tat org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.reportInputMismatch(DefaultErrorStrategy.java:327)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:139)\n\tat io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:905)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:72)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:45)\n\tat io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:80)\n\tat io.debezium.relational.history.AbstractDatabaseHistory.lambda$recover$1(AbstractDatabaseHistory.java:134)\n\tat io.debezium.relational.history.KafkaDatabaseHistory.recoverRecords(KafkaDatabaseHistory.java:307)\n\tat io.debezium.relational.history.AbstractDatabaseHistory.recover(AbstractDatabaseHistory.java:101)\n\tat io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:49)\n\tat io.debezium.connector.mysql.MySqlConnectorTask.validateAndLoadDatabaseHistory(MySqlConnectorTask.java:311)\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:96)\n\tat io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: org.antlr.v4.runtime.InputMismatchException\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270)\n\tat io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:880)\n\t... 18 more\n"
}
],
"type": "source"
}

Eventually we had to recreate the connector with a new name and skip the snapshot phase (huge table). We ended up losing about 2h of binlog records.

What options do we have to recover the lost timeframe? The binlog messages are still in the binlog, as binlog retention is 3 months.

however, we only need to load the lost timeframe

Any advice would be much appreciated.

n0012
@n0012
Hi all, I'm interested in deploying the following topology:
1) debezium-server to CDC Postgres to Google Pub/Sub
2) debezium-connect to read Google Pub/Sub and write to MySQL
I have #1 up and running but am stuck on #2.
I'm following this topology/example, but it involves running Kafka, which I'd like to avoid.
Please let me know if I can publish the messages through Google Pub/Sub and support this use case - thanks
rahul kumar
@rahulkumardotrasto
Hi All, I will be integrating AWS DocumentDB with Debezium. AWS DocumentDB does not support the oplog. Since Debezium uses the latest oplog document timestamp during the snapshot, has Debezium handled this case, or can I configure Debezium not to use any operation related to the oplog?
Butter Ngo
@butterngo
hi all, i'm a newbie with Debezium and I'm running into a problem: http://localhost:9021/api/schema-registry/8c2f8da2941df04bbb66301a6a9243a4f4768312/subjects/fullfillment.dbo.TestTable-value/versions returns {"error_code":40401,"message":"Subject 'fullfillment.dbo.TestTable-value' not found."} => if anyone has experience with it plz help me :)