Jiri Pechanec
@jpechane
@gunnarmorling Debezium Community migrated to https://debezium.zulipchat.com/
RadioGuy
@RadioGuy
Is debezium-postgres 1.4.x impacted by the CVE-2021-44228 (Log4Shell) vulnerability in log4j?
Also, does it use the JMS appender by any chance?
Jiri Pechanec
@jpechane
@RadioGuy Debezium Community migrated to https://debezium.zulipchat.com/
hkokay
@h_kokay_twitter
@jpechane I'm trying to execute an ad-hoc snapshot. I inserted a record into the signal table but the snapshot does not get triggered. I'm using Postgres RDS as the source.
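For reference, an ad-hoc incremental snapshot is only picked up if the connector is configured to watch the signal table via signal.data.collection (on Postgres the signal table typically also has to be part of the captured tables/publication), and the inserted row has to use the execute-snapshot signal type. A minimal sketch of the data column payload for such a signal row, with public.orders as a placeholder table name:
{
  "data-collections": ["public.orders"],
  "type": "incremental"
}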
GitHubZhangH
@GitHubZhangH
[image attachment: image.png]
cuongtl1992
@cuongtl1992
Hi Everyone
I have a problem when using Debezium 1.6.1.Final. Sometimes the task status is RUNNING but it is not actually working; after I restart Kafka Connect it works again. I checked the log file but did not find any errors.
I am using Debezium with the outbox event routing SMT. Has anyone run into the same problem?
Vladislav Borisov
@Sherybedrock_twitter
Hello :) How are you dealing with parsing exceptions on DDL?
Setting database.history.skip.unparseable.ddl: true is a hot fix.
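For reference, that property (spelled database.history.skip.unparseable.ddl in the MySQL connector docs; there is also a related store.only.captured/monitored-tables option whose name varies by version) goes into the connector configuration. A minimal sketch, keeping in mind that it silently drops the offending statements from the schema history, so it is a workaround rather than a fix:
{
  "database.history.skip.unparseable.ddl": "true"
}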
vaibhav pandey
@vaibhavpandey2000
@jpechane Could you help me set up the MySQL Debezium connector? I have been trying for 2 days but I am unable to get it working.
vaibhav pandey
@vaibhavpandey2000
When I start my connector it takes a snapshot of the table, but after the snapshot it throws an error like "Error during binlog processing. Last offset stored = null, binlog reader near position = mysql-bin-changelog.040423/44985833". How can I resolve it?
jsraoghub
@jsraoghub

The Debezium MySQL connector loses the GTID binlog position every time after a restart. Is there a fix available for this?

"MySQL current GTID set 03e063ff-0fd2-11ec-b44a-42010a65001c:1-172385422,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644 does contain the GTID set required by the connector 03e063ff-0fd2-11ec-b44a-42010a65001c:168185238-171001552
Server has already purged 03e063ff-0fd2-11ec-b44a-42010a65001c:1-163307835,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644 GTIDs
GTIDs known by the server but not processed yet 03e063ff-0fd2-11ec-b44a-42010a65001c:1-168185237:171001553-172385422,a40da53c-bcfb-11ea-8866-42010a650002:1-246188644, for replication are available only 03e063ff-0fd2-11ec-b44a-42010a65001c:163307836-168185237:171001553-172385422
Some of the GTIDs needed to replicate have been already purged
Stopping down connector"

Going through the chat, the same issue is described in the post below, which is archived, so I cannot see the replies. Any help would be appreciated.
https://gitter.im/debezium/user/archives/2021/03/30

WangMinChao
@minchowang
[image attachment: image.png]
Phuong Hai Nguyen
@nhp.0712_gitlab

[screenshot attachment: Screen Shot 2022-04-25 at 2.04.46 PM.png]
Target: manually send a 7th offset record with the same key/content as the 2nd offset to the offsets topic, in order to re-read the Postgres messages from offsets 2 through 6.
Reality: the connector only reads the 6th offset (the last one), then keeps going.

Can anyone tell me what I did wrong here? Here is my connector config:
{
  "name": "rev_msa_mylgdb_local_1",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "localhost",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "msa_mylgdb",
    "database.server.name": "REV_msa_mylgdb",
    "table.include.list": "smartux.pt_ux_pairing",
    "plugin.name": "pgoutput",
    "snapshot.mode": "never",
    "decimal.handling.mode": "double",
    "time.precision.mode": "connect",
    "binary.handling.mode": "hex",
    "datatype.propagate.source.type": ".+\.BYTEA",
    "slot.name": "rev_msa_mylgdb_1",
    "tombstones.on.delete": "false"
  }
}
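For reference, a rough sketch of what a manually produced offset record for this connector would look like: Kafka Connect keys source offsets by connector name plus source partition, and only reads them when a task starts, so the connector has to be restarted after the record is produced, and both key and value are best copied from an existing offset record rather than hand-crafted (the lsn/txId/ts_usec values below are placeholders and the exact fields vary by Debezium version):
key:   ["rev_msa_mylgdb_local_1", {"server": "REV_msa_mylgdb"}]
value: {"lsn": 23456789, "txId": 555, "ts_usec": 1650878400000000}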

Diwakar1997
@Diwakar1997

Diwakar Mishra, 1:17 AM
Hi All
I'm using the Debezium MySQL connector with Amazon MSK Connect. It works fine, but when the connector restarts it starts giving the error "Skipping invalid database history record", ... "This is often not an issue, but if it happens repeatedly please check the 'mysql-database.db_name.table_name' topic. (io.debezium.relational.history.KafkaDatabaseHistory:306)".

It keeps giving that error continuously, and I need to create a new connector every time after a restart.
Could anyone please help me find a solution for this?
Thanks

HugoMRAmaro
@HugoMRAmaro

Hello,
I am using debezium-connector-postgres with Azure Postgres. However, when I try to perform the PUT to create the connector I always get the following error:
JdbcConnectionException: ERROR: replication slot "debezium" already exists
The problem is that before performing the request I ran the following query: select * from pg_replication_slots;
and it returned no rows. So I don't understand why it created the replication slot and then complains that it already exists. Maybe it has something to do with the fact that I'm using Azure Postgres. Any help?

Thank you

Jun Wan
@jwan3
Hi, we just upgraded Debezium from 1.4 to 1.8. With Debezium 1.4 the MongoDB snapshot works fine, but with 1.8 we found that it only ingests 1-2 MB of data and then stops ingesting; maybe the snapshot stops? Do you have any idea what is happening? Thank you!
Artsiom Yudovin
@ayudovin
Hi, I hit this issue when I try to register a MySQL connector: javax.management.InstanceAlreadyExistsException: debezium
The Kafka Connect version is 6.2.4 and the Debezium version is 1.9.2. Could anyone help me find the cause?
vaibhav pandey
@vaibhavpandey2000
How can we generate ts_usec in place of ts_ms in the source object? Does anyone have any idea?
Maxim Makarov
@maxpain
Hello. Is it possible to somehow drop the "source" metadata field from messages?
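For reference, one common way to get rid of the envelope metadata is to flatten the event with the ExtractNewRecordState SMT, which keeps only the row state and drops source along with the rest of the envelope; a minimal sketch of the connector config fragment:
{
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"
}
If the envelope itself needs to stay, a plain Kafka Connect ReplaceField transform can be used to exclude the source field instead.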
ahvahsky2008
@ahvahsky2008
Hi guys, how do I detect when the schema has changed in a source table?
Assaf Avissar Koren
@AssafAvissarKoren
I need to stream a MySQL binlog -> Kafka via Debezium; any recommendations for a GitHub repository?
Assaf Avissar Koren
@AssafAvissarKoren

We had a Debezium connector failure related to "ParsingException: DDL statement couldn't be parsed".

here is the full error message:
{
"name": "prd2_cdc_ticket_v2",
"connector": {
"state": "RUNNING",
"worker_id": "papp-confluent-connect3a.42.wixprod.net:8083"
},
"tasks": [
{
"id": 0,
"state": "FAILED",
"worker_id": "papp-confluent-connect1b.42.wixprod.net:8083",
"trace": "io.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement 'wix_connect.tickets to wix_connect._tickets_del'\nmismatched input 'wix_connect' expecting {<EOF>, 'ALTER', 'ANALYZE', 'CALL', 'CHANGE', 'CHECK', 'CREATE', 'DELETE', 'DESC', 'DESCRIBE', 'DROP', 'EXPLAIN', 'GET', 'GRANT', 'INSERT', 'KILL', 'LOAD', 'LOCK', 'OPTIMIZE', 'PURGE', 'RELEASE', 'RENAME', 'REPLACE', 'RESIGNAL', 'REVOKE', 'SELECT', 'SET', 'SHOW', 'SIGNAL', 'UNLOCK', 'UPDATE', 'USE', 'BEGIN', 'BINLOG', 'CACHE', 'CHECKSUM', 'COMMIT', 'DEALLOCATE', 'DO', 'FLUSH', 'HANDLER', 'HELP', 'INSTALL', 'PREPARE', 'REPAIR', 'RESET', 'ROLLBACK', 'SAVEPOINT', 'START', 'STOP', 'TRUNCATE', 'UNINSTALL', 'XA', 'EXECUTE', 'SHUTDOWN', '--', '(', ';'}\n\tat io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:43)\n\tat org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)\n\tat org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.reportInputMismatch(DefaultErrorStrategy.java:327)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:139)\n\tat io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:905)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:72)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:45)\n\tat io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:80)\n\tat io.debezium.relational.history.AbstractDatabaseHistory.lambda$recover$1(AbstractDatabaseHistory.java:134)\n\tat io.debezium.relational.history.KafkaDatabaseHistory.recoverRecords(KafkaDatabaseHistory.java:307)\n\tat io.debezium.relational.history.AbstractDatabaseHistory.recover(AbstractDatabaseHistory.java:101)\n\tat io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:49)\n\tat io.debezium.connector.mysql.MySqlConnectorTask.validateAndLoadDatabaseHistory(MySqlConnectorTask.java:311)\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:96)\n\tat io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: org.antlr.v4.runtime.InputMismatchException\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:270)\n\tat io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:880)\n\t... 18 more\n"
}
],
"type": "source"
}

Eventually we had to recreate the connector with a new name and skip the snapshot phase (huge table), and we ended up losing about 2 hours of binlog records.

What options do we have to recover the lost timeframe? The binlog messages are still available, as binlog retention is 3 months.

However, we only need to load the lost timeframe.

Any advice would be much appreciated.

n0012
@n0012
Hi all, I'm interested in deploying the following topology:
1) debezium-server to CDC Postgres into Google Pub/Sub
2) debezium-connect to read from Google Pub/Sub and write to MySQL
I have #1 up and running but am stuck on #2.
I'm following this topology/example, but it involves running Kafka, which I'd like to avoid.
Please let me know if I can publish the messages through Google Pub/Sub and support this use case - thanks
rahul kumar
@rahulkumardotrasto
Hi all, I will be integrating AWS DocumentDB with Debezium. AWS DocumentDB does not support the oplog. Since Debezium uses the latest oplog document timestamp during the snapshot, has Debezium handled this case, or can I configure Debezium not to use any oplog-related operation?
Butter Ngo
@butterngo
Hi all, I'm a newbie with Debezium and I'm running into a problem: http://localhost:9021/api/schema-registry/8c2f8da2941df04bbb66301a6a9243a4f4768312/subjects/fullfillment.dbo.TestTable-value/versions returns {"error_code":40401,"message":"Subject 'fullfillment.dbo.TestTable-value' not found."} => if you have experience with it, please help me :)
taquanghung1705199
@taquanghung1705199
Hello everyone. I want to use Avro serialization on MSK Connect (AWS). Do you know how to do it? Please help me!
Seunghyun Lee
@isbee

I've been using SMTs with io.debezium.connector.mysql.MySqlConnector (debezium/debezium-connector-mysql:1.9.3) and I'm facing an issue where tombstones are not generated. If I use just unwrap with io.debezium.transforms.ExtractNewRecordState and "transforms.unwrap.drop.tombstones": "false", then a tombstone is generated on DELETE.

But if I additionally use insertKey and extractKey, the tombstone is not generated.
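For reference, one plausible explanation is that key-rewriting SMTs such as ValueToKey read the record value, which is null for a tombstone, so the tombstone ends up dropped or the transform chain breaks on it. Kafka Connect predicates (available since Kafka 2.6) can exclude tombstones from those transforms; a rough sketch, assuming the key column is called id (placeholder) and insertKey/extractKey map to ValueToKey/ExtractField$Key:
{
  "transforms": "unwrap,insertKey,extractKey",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.drop.tombstones": "false",
  "transforms.insertKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
  "transforms.insertKey.fields": "id",
  "transforms.insertKey.predicate": "isTombstone",
  "transforms.insertKey.negate": "true",
  "transforms.extractKey.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
  "transforms.extractKey.field": "id",
  "transforms.extractKey.predicate": "isTombstone",
  "transforms.extractKey.negate": "true",
  "predicates": "isTombstone",
  "predicates.isTombstone.type": "org.apache.kafka.connect.transforms.predicates.RecordIsTombstone"
}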

Harikumar9412-R
@Harikumar9412-R

I'm using the Debezium 1.9.5 connector for MySQL CDC. While trying to create the topic using the connector, I'm getting the error below.

"
Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:195)
org.apache.kafka.connect.errors.ConnectException: Creation of database history topic failed, please create the topic manually
org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
"
My Kafka cluster is SASL_SSL enabled and the ACLs are also set up.

Please help me resolve this issue.
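For reference, a common gotcha on SASL_SSL clusters is that the database history topic is accessed through the connector's own Kafka clients, which do not inherit the Connect worker's security settings, and those clients also need permission to create the history topic (or the topic has to be created manually up front). A rough sketch of the pass-through settings, with placeholder broker and topic names and with the matching sasl.mechanism / sasl.jaas.config entries added under the same prefixes:
{
  "database.history.kafka.bootstrap.servers": "broker1:9096",
  "database.history.kafka.topic": "schema-changes.inventory",
  "database.history.producer.security.protocol": "SASL_SSL",
  "database.history.consumer.security.protocol": "SASL_SSL"
}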
livolleyball
@livolleyball
2022-07-29 11:13:16.604 [debezium-oracleconnector-oracle_logminer-change-event-source-coordinator] ERROR io.debezium.connector.oracle.logminer.LogMinerHelper LogMinerHelper.java:111 - Mining session stopped due to the {}
java.sql.SQLException: ORA-44609: CONTINOUS_MINE is no longer supported for use with DBMS_LOGMNR.START_LOGMNR.
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1
omar korbi
@gotktpas1:matrix.org
Hello everyone, I would like to synchronize data from Elasticsearch to PostgreSQL, but I can't find any example with Debezium. Any suggestions?
kr1929uti
@kr1929uti
Hi everyone, I have a Kafka setup with Postgres as both my source and sink. I am trying to implement a scenario where any DDL change at the Postgres source (such as column addition, column deletion, or a column type change) should be reflected in my sink Postgres table. I already have auto.evolve=true in my sink connector configuration (using the JDBC sink connector), but it does not fulfill the requirements. Any suggestions on this?
kr1929uti
@kr1929uti
Hi all,
Whenever I encounter schema changes (incoming DDL changes), I want to automate the table backup, table deletion, and then table creation with the new schema. (I am using a Postgres DB and I have a Kafka setup.) Any suggestions on how to go about automating this?
Tri16
@Tri16
Hi all,
Currently, I am using the Confluent Debezium CDC Postgres connector, but I get the error below when I try to create two connectors for two databases on the same Postgres host server.
Error:
"Failed
Only one instance of the PostgreSQL CDC Source connector can be run per database host at a time."
Any suggestions on this?
Tanay Karmarkar
@_codeplumber_twitter
Hello all,
I'm getting really slow performance on the incremental snapshot with Debezium. I am publishing to a topic with 3 partitions, with a chunk size of 10000. The throughput I am getting is close to 85 events per second! I am using Avro serialization and deserialization. Should I try increasing the batch size even further, or increase the partitioning? Every couple of seconds I see 2048 events flushed, but the rest of the time it is mostly flushing 0 outstanding messages.
Sorry, forgot to mention: I am using the Postgres connector.
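For reference, the knobs usually tuned for incremental snapshot throughput are the snapshot chunk size, the connector's internal queue/batch sizes, and the Connect producer batching; a rough sketch with placeholder values (the producer.override.* settings require the worker to allow client config overrides):
{
  "incremental.snapshot.chunk.size": "10240",
  "max.batch.size": "4096",
  "max.queue.size": "16384",
  "producer.override.linger.ms": "50",
  "producer.override.batch.size": "262144"
}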
Gunnar Morling
@gunnarmorling
Hey all, just a reminder that this room is not used any longer. Please join the Debezium community on Zulip (https://debezium.zulipchat.com). If there's any links out there pointing to Gitter rather than Zulip, please let us know (on Zulip ;), so we can try and get those fixed.
wongster80
@wongster80

Hi, does anyone know a fix for this error?

org.apache.kafka.connect.errors.ConnectException: Client requested master to start replication from position > file size; the first event 'mysql-bin.001611' at 747935113, the last event read from './mysql-bin.001611' at 4, the last byte read from './mysql-bin.001611' at 4. Error code: 1236; SQLSTATE: HY000.\n\tat io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)\n\tat io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:196)\n\tat io.debezium.connector.mysql.BinlogReader$ReaderThreadLifecycleListener.onCommunicationFailure(BinlogReader.java:1139)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:958)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:594)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:838)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: com.github.shyiko.mysql.binlog.network.ServerException: Client requested master to start replication from position > file size; the first event 'mysql-bin.001611' at 747935113, the last event read from './mysql-bin.001611' at 4, the last byte read from './mysql-bin.001611' at 4.\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:922)\n\t... 3 more\n

Valentyn Masyakin
@vamas
Hi all, we are getting the error "No implementation of Debezium engine builder was found" when creating the engine in Mule Anypoint Studio with the statement: "DebeziumEngine.create(CloudEvents.class)
.using(config).notifying(this::sendRecord)
.build())". All dependencies are added to pom.xml and the same piece of code works fine from a regular Maven app. Any ideas?
Rithesh
@MechyX
[2022-11-30 16:11:36,602] INFO [Worker clientId=connect-1, groupId=someconnectorname-connect-cluster-staging] Session key updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2124)
[2022-11-30 16:11:40,387] INFO [someconnectorname-connector|task-0]    Exported 49917 records for table 'public.sometableprefix_media' after 00:18:03.74 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:12:32,088] INFO [someconnectorname-connector|task-0]    Exported 51965 records for table 'public.sometableprefix_media' after 00:18:55.441 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:13:24,007] INFO [someconnectorname-connector|task-0]    Exported 54013 records for table 'public.sometableprefix_media' after 00:19:47.36 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:14:15,932] INFO [someconnectorname-connector|task-0]    Exported 56061 records for table 'public.sometableprefix_media' after 00:20:39.285 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:15:08,947] INFO [someconnectorname-connector|task-0]    Exported 58109 records for table 'public.sometableprefix_media' after 00:21:32.3 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:16:02,005] INFO [someconnectorname-connector|task-0]    Exported 60157 records for table 'public.sometableprefix_media' after 00:22:25.358 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:16:55,078] INFO [someconnectorname-connector|task-0]    Exported 62205 records for table 'public.sometableprefix_media' after 00:23:18.431 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:17:47,278] INFO [someconnectorname-connector|task-0]    Exported 64253 records for table 'public.sometableprefix_media' after 00:24:10.631 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:18:40,360] INFO [someconnectorname-connector|task-0]    Exported 66301 records for table 'public.sometableprefix_media' after 00:25:03.712 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:19:31,209] INFO [someconnectorname-connector|task-0]    Exported 68349 records for table 'public.sometableprefix_media' after 00:25:54.562 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:20:23,762] INFO [someconnectorname-connector|task-0]    Exported 70397 records for table 'public.sometableprefix_media' after 00:26:47.115 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:21:16,615] INFO [someconnectorname-connector|task-0]    Exported 72445 records for table 'public.sometableprefix_media' after 00:27:39.968 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:21:31,395] INFO [AdminClient clientId=adminclient-10] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient:937)
[2022-11-30 16:22:09,208] INFO [someconnectorname-connector|task-0]    Exported 74493 records for table 'public.sometableprefix_media' after 00:28:32.56 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:23:02,310] INFO [someconnectorname-connector|task-0]    Exported 76541 records for table 'public.sometableprefix_media' after 00:29:25.663 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:23:54,821] INFO [someconnectorname-connector|task-0]    Exported 78589 records for table 'public.sometableprefix_media' after 00:30:18.174 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:24:47,313] INFO [someconnectorname-connector|task-0]    Exported 80637 records for table 'public.sometableprefix_media' after 00:31:10.666 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
[2022-11-30 16:25:39,899] INFO [someconnectorname-connector|task-0]    Exported 82685 records for table 'public.sometableprefix_media' after 00:32:03.252 (io.debezium.relational.RelationalSnapshotChangeEventSource:398)
Hey, the snapshot process reports "Node 0 disconnected" after exporting rows every time. Is this something to be worried about? Why does this happen? Help is much appreciated! (Postgres)
Rithesh
@MechyX

It outputs

[2022-11-30 16:44:57,890] INFO [somename-connector|task-0] 100352 records sent during previous 00:42:45.98, last recorded offset of {server=somename} partition is {last_snapshot_record=false, lsn=60485533101224, txId=330785841, ts_usec=1669821670920703, snapshot=true} (io.debezium.connector.common.BaseSourceTask:188)

after a while....