    Song Gao
    @sgao103
    I'm having a very strange issue with the mysql connector on MariaDB. I have it set up to track 3 tables. Events for two of them come through fine, but I'm only getting create events for the third; it doesn't seem to be picking up updates. Has anyone come across this?
    1 reply
    lovelyqincai
    @lovelyqincai

    Hello, I have a MySQL table that contains a timestamp column; here is the data:
    2019-10-29 13:35:44
    2019-10-29 13:35:44
    2019-10-29 13:35:45
    2019-11-14 17:03:35
    2019-12-30 16:23:21
    2019-12-30 16:23:22
    2019-12-30 16:23:22
    2020-02-13 20:07:11
    2020-06-03 19:54:59

    and the data Debezium published to Kafka is:
    2019-10-29T18:35:44Z
    2019-10-29T18:35:44Z
    2019-10-29T18:35:45Z
    2019-11-14T23:03:35Z
    2019-12-30T22:23:21Z
    2019-12-30T22:23:22Z
    2019-12-30T22:23:22Z
    2020-02-14T02:07:11Z
    2020-06-04T00:54:59Z

    they are offset by 5, 5, 5, 6, 6, 6, 6, 6, and 5 hours respectively...

    Why did this happen?

    7 replies
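
    A worked check of the offsets above: the rows shifted by 5 hours all fall inside a daylight-saving period and the rows shifted by 6 hours outside it (for example, 2019-11-14 is after the 2019-11-03 DST end in US Central time), so this looks like local TIMESTAMP values being converted to UTC through a DST-observing zone such as America/Chicago. A minimal sketch of pinning the zone explicitly, assuming the database.serverTimezone option documented for the MySQL connector applies to this setup:

    config:
      # Assumption: database.serverTimezone tells the connector which zone
      # the server's TIMESTAMP values are in when it cannot detect it itself.
      database.serverTimezone: "America/Chicago"

    Comparing SELECT @@system_time_zone on the server against the connector host's zone is a quick sanity check that the two actually differ.
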
    Cory Pulm
    @CoryPulm
    Hi all, I am trying to connect a Debezium 1.6 connector to a hosted Kafka cluster on Aiven.io. Connecting to a Google CloudSQL database works just fine, but no matter what, I only ever receive this error. It seems like it never actually grabs the schema initially, or errors before that. I am very new to Debezium and so far have tried setting snapshot.mode to initial, when_needed, schema_only, and schema_only_recovery. The connector immediately dies like this:
    2021-07-14 22:06:46,654 WARN   MySQL|asdf-db|binlog  Encountered change event 'Event{header=EventHeaderV4{timestamp=1626300406000, eventType=TABLE_MAP, serverId=1427829628, headerLength=19, dataLength=150, nextPosition=103725842, flags=0}, data=TableMapEventData{tableId=3358, database='MasterDB', table='Server', columnTypes=3, 15, 15, 15, 3, 3, -2, 18, 18, 3, 3, 3, 15, 1, 18, 15, 1, 3, -4, 3, 15, -4, 3, -2, 15, 1, 3, 3, 3, 1, 1, -2, 1, 3, 18, 18, 18, 1, 1, 3, 18, 3, 18, 1, 1, 1, 3, 18, 1, 1, 1, -2, 18, 17, 18, 15, 3, 15, -2, 3, 1, 1, 3, 1, -2, columnMetadata=0, 765, 765, 765, 0, 0, 63233, 0, 0, 0, 0, 0, 150, 0, 0, 30, 0, 0, 2, 0, 150, 2, 0, 63233, 300, 0, 0, 0, 0, 0, 0, 63233, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 63233, 0, 0, 0, 765, 0, 300, 63233, 0, 0, 0, 0, 0, 63233, columnNullability={1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64}, eventMetadata=null}}' at offset {transaction_id=null, ts_sec=1626299670, file=mysql-bin.562792, pos=103725595, gtids=15699182-9ecc-11ea-892f-42010a80024f:1-5563656820,28600ed6-351d-11eb-a6b9-42010a800363:1-10462706059,2c346a7c-2d2b-11e7-a8ee-42010a80066d:1-2668086076:2668086078-2670504574,3217ab53-1264-11e7-9f89-42010a80006f:1-900215301,724e279c-1769-11e8-a380-42010a80024f:1-10744338841,94999441-d95f-11e9-8275-42010a800171:1-5786793002, server_id=1427829628, event=1} for table MasterDB.Server whose schema isn't known to this connector. One possible cause is an incomplete database history topic. Take a new snapshot in this case.
    The event will be ignored.
    2 replies
    Manjunath Adisesha
    @manjunath.adisesha:matrix.org
    [m]
    I looked at the code for FileOffsetWriter.loadOffset; except that the file is in a locked state, there is nothing unusual about loading the props. Has anyone been able to successfully run the Cassandra connector 1.6 on Windows?
    Jorn Argelo
    @jornargelo
    Hi all, not sure if this came up already, but we are seeing ORA-00310: archived log contains sequence 2070; sequence 2067 required on an online redo log (so not an archived log, as the message suggests), which causes the connector to stop. The DBA team mentioned that the sequence for the online redo logs is reused. Is this expected behavior in this case? It happens once every few days and there doesn't seem to be any logic to it, so I suspect a timing issue.
    1 reply
    rununrath
    @rununrath
    Hi, I'm trying the Debezium source connector for Oracle with Kafka. When I check the data in my topic I see a struct with before and after. I only care about after; can I filter so that only the after state is stored in my Kafka topic?
    2 replies
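
    Debezium ships an SMT for exactly this kind of unwrapping: io.debezium.transforms.ExtractNewRecordState flattens the change event envelope down to the after state. A minimal sketch in the same key/value config style used elsewhere in this log:

    transforms: "unwrap"
    transforms.unwrap.type: "io.debezium.transforms.ExtractNewRecordState"
    # Optionally carry the operation type along as an extra field.
    transforms.unwrap.add.fields: "op"
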
    Manjunath Adisesha
    @manjunath.adisesha:matrix.org
    [m]
    :point_up: Edit: I looked at the code for FileOffsetWriter.loadOffset and I see that the IOException is due to a different process holding a lock: "The process cannot access the file because another process has locked a portion of the file". Has anyone been able to successfully run the Cassandra connector 1.6 on Windows?
    bessmertnyip0ni
    @bessmertnyip0ni
    Hi folks, I have a question: how can I make an HA cluster for Debezium Server? Where can I read about this topic?
    meenkx
    @meenkx
    I don't know what to do about this issue on AIX 7.2 with DB2 11.5.
    1 reply
    Benjamin Kastelic
    @osbeorn

    The Oracle connector stopped working after encountering a DDL statement that couldn't be parsed, and after a restart I get this error:

    2021-07-15 13:46:15,870 ERROR  Oracle|oracle-db|streaming  Mining session stopped due to the {}   [io.debezium.connector.oracle.logminer.LogMinerHelper]
    java.sql.SQLException: ORA-01284: file /Archlog/1_114441_1011059409.dbf cannot be opened
    ORA-00308: cannot open archived log '/Archlog/1_114441_1011059409.dbf'
    ORA-27037: unable to obtain file status
    IBM AIX RISC System/6000 Error: 2: No such file or directory
    Additional information: 3
    ORA-06512: at "SYS.DBMS_LOGMNR", line 68
    ORA-06512: at line 1

    What could this mean?

    The connector was stopped for a couple of days before I noticed ...

    7 replies
    amr!t jang!d
    @amritam_iiita_twitter
    Facing an issue where collection.include.list doesn't seem to work at all; the connector still consumes all the oplogs from MongoDB.
    If collection.include.list = MyDB[.]myCollection, then the Kafka topic, say <mytopic-prefix>.myCollection, will have oplogs from all the collections in MongoDB.
    NOTE: if configured with database.include.list, it works fine.
    Using debezium-mongo connector 1.5 with a sharded MongoDB cluster on version 3.6.
    1 reply
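
    For reference, collection.include.list is documented as a comma-separated list of regular expressions matched against fully qualified databaseName.collectionName identifiers. A sketch using the names from the message above:

    # The dot is escaped so it matches a literal dot rather than any character.
    collection.include.list: "MyDB\\.myCollection"
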
    rununrath
    @rununrath
    Hi, with the Debezium source connector for Oracle in Kafka, can I add message filtering so that only data from the last hour (based on a timestamp field) is stored in Kafka? Is that possible or not?
    3 replies
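
    One hedged option for this kind of filtering is Debezium's scripting-based SMT, io.debezium.transforms.Filter, assuming the debezium-scripting module and a JSR-223 engine such as Groovy are on the Connect classpath. A sketch that keeps only events whose source timestamp falls within the last hour:

    transforms: "filter"
    transforms.filter.type: "io.debezium.transforms.Filter"
    transforms.filter.language: "jsr223.groovy"
    # value.source.ts_ms is the event's source timestamp in epoch milliseconds.
    transforms.filter.condition: "value.source.ts_ms > new Date().time - 3600000"

    Note this drops older events at capture time; records already written to the topic are not affected.
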
    Brendan O'Brien
    @BPOB_twitter

    Hi,

    Please forgive yet another silly question…

    I was expecting all mysql binary log entries to have an associated gtid and payload.value.op to have values of 'c','u' and 'd' only.

    However I am receiving records with payload.value.op of 'r' with no gtid.

    What do these represent and what configuration settings control the generation of those?

    Many Thanks

    10 replies
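
    For context on the question above: 'r' is the operation code Debezium assigns to rows read during a snapshot, and such records carry no GTID because they come from an initial SELECT rather than from a binlog event. The snapshot.mode connector property controls whether a data snapshot happens at all; a sketch, assuming it is acceptable to skip existing rows:

    # schema_only captures table definitions but streams only changes made
    # after the connector starts, so no 'r' (snapshot read) records appear.
    snapshot.mode: "schema_only"
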
    Ignacy Janiszewski
    @ijaniszewski
    Sorry about this question, but what is the difference between the schema change topic and the schema history topic?
    1 reply
    buntyk91
    @buntyk91
    Hi All, I am trying to delete an event but I am not seeing any activity in Kafka nor in the Debezium logs. I am using the connector config below. Can somebody help?

    apiVersion: "kafka.strimzi.io/v1beta2"
    kind: "KafkaConnector"
    metadata:
      name: "inventory-connector"
      namespace: abc
      labels:
        strimzi.io/cluster: cluster-connect
    spec:
      class: io.debezium.connector.mysql.MySqlConnector
      tasksMax: 1
      config:
        database.hostname: 10.0.0.0
        database.port: "3306"
        database.user: "mysqluser"
        database.password: "**"
        database.server.id: "12345678"
        database.server.name: "ABC"
        database.whitelist: "inventory,integrationtesting"
        table.include.list: "inventory.*,integrationtesting.dummy"
        database.history.kafka.bootstrap.servers: "bootstrap:9092"
        database.history.kafka.topic: "schema-changes.inventory"
        include.schema.changes: "true"
        key.converter.schema.registry.url: "http://10.0.0.0:8081"
        value.converter.schema.registry.url: "http://10.0.0.0:8081"
        key.converter: "io.confluent.connect.avro.AvroConverter"
        value.converter: "io.confluent.connect.avro.AvroConverter"
        transforms: "unwrap"
        transforms.unwrap.type: "io.debezium.transforms.ExtractNewRecordState"
        transforms.unwrap.add.fields: "op,ts_ms,source.ts_ms,source.connector,source.snapshot"
    5 replies
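
    One thing worth checking in the config above: the ExtractNewRecordState SMT drops delete events by default (its delete.handling.mode defaults to drop), which would match the silence described. A sketch of making deletes visible, assuming the defaults are otherwise in effect:

    # rewrite emits the delete as a record with a __deleted=true marker field.
    transforms.unwrap.delete.handling.mode: "rewrite"
    # Keep the Kafka tombstone too if downstream log compaction needs it.
    transforms.unwrap.drop.tombstones: "false"
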
    rununrath
    @rununrath
    I tested the JDBC source connector against the Debezium source connector: I made changes in the database with INSERT and noticed that the output from Debezium is delayed longer than with the JDBC source connector. JDBC is near real time, but Debezium has a delay of more than a minute.
    Why is that? And how can I tune Debezium to be faster?
    9 replies
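
    Hedged pointers only, since the right knobs depend on the connector: Debezium connectors expose poll.interval.ms, max.batch.size, and max.queue.size, and lowering the poll interval is the usual first step when latency matters. A sketch with illustrative values, not recommendations:

    # How long the connector waits before polling for new change events again.
    poll.interval.ms: "100"
    # Batch and queue sizing also bound how quickly events reach Kafka.
    max.batch.size: "2048"
    max.queue.size: "8192"

    For the Oracle connector specifically, the LogMiner mining loop adds its own latency on top of these settings.
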
    ant0nk
    @ant0nk
    Hello, Kafka Connect stops all of its connectors with "OutOfMemoryError: Java heap space" some time after warnings like this:

    [2021-07-18 18:44:27,241] WARN [Worker clientId=connect-1, groupId=connect-cluster] Didn't reach end of config log quickly enough (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1090)
    java.util.concurrent.TimeoutException: Timed out waiting for future
        at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:106)
        at org.apache.kafka.connect.storage.KafkaConfigBackingStore.refresh(KafkaConfigBackingStore.java:412)
        at org.apache.kafka.connect.runtime.distributed.DistributedHerder.readConfigToEnd(DistributedHerder.java:1083)
        at org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:1039)
        at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:317)
        at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:289)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    [2021-07-18 18:44:27,243] INFO [Worker clientId=connect-1, groupId=connect-cluster] Member connect-1-b898f559-09d1-4e41-9320-c239751495c4 sending LeaveGroup request to coordinator ****:6667 (id: 2147482646 rack: null) due to taking too long to read the log (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:979)

    It seems that after those warnings the Oracle connector was unable to send messages to Kafka, so the buffer started to grow and led to the OOME. How can I prevent this?
    13 replies
    meenkx
    @meenkx
    I don't know what to do about this issue:

    [db2inst1@ /asncdctools/src]#/opt/ibm/db2/V11.5/samples/c/bldrtn asncdc
    ld: 0706-004 Cannot find or read export file: asncdc.exp
    ld:accessx(): A file or directory in the path name does not exist.

    This is AIX 7.2, DB2 11.5.

    Please help me.
    2 replies
    Hoseung Lee
    @astrohsy

    Hello, I encountered an odd error while processing the binlog from MySQL.
    I am using version 1.6 along with MySQL 5.7.28.

    org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
        at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:130)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
    Caused by: io.debezium.DebeziumException: java.lang.RuntimeException: Invalid length when read MySQL TIME value. BIN_LEN_TIME is 0
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:78)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:113)
        ... 5 more
    Caused by: java.lang.RuntimeException: Invalid length when read MySQL TIME value. BIN_LEN_TIME is 0
        at io.debezium.connector.mysql.MysqlBinaryProtocolFieldReader.readTimeField(MysqlBinaryProtocolFieldReader.java:38)
        at io.debezium.connector.mysql.AbstractMysqlFieldReader.readField(AbstractMysqlFieldReader.java:30)
        at io.debezium.connector.mysql.MySqlConnection.getColumnValue(MySqlConnection.java:571)
        at io.debezium.jdbc.JdbcConnection.rowToArray(JdbcConnection.java:1470)
        at io.debezium.relational.RelationalSnapshotChangeEventSource.createDataEventsForTable(RelationalSnapshotChangeEventSource.java:355)
        at io.debezium.relational.RelationalSnapshotChangeEventSource.createDataEvents(RelationalSnapshotChangeEventSource.java:306)
        at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:136)
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:69)
        ... 6 more

    The odd thing is that when I checked the length of the TIME field that is the source of the error, it returns [8, NULL], not 0.

    MySQL [******]> select distinct(length(work_on_time)) from commute_report;
    +------------------------+
    | (length(work_on_time)) |
    +------------------------+
    |                      8 |
    |                   NULL |
    +------------------------+
    
    MySQL [******]> describe commute_report;
    +-------------------+--------------+------+-----+---------+----------------+
    | Field             | Type         | Null | Key | Default | Extra          |
    +-------------------+--------------+------+-----+---------+----------------+
    | seq               | bigint(20)   | NO   | PRI | NULL    | auto_increment |
    | work_on_time      | time         | YES  |     | NULL    |                |

    According to the code below, if the value of the field is null, it should return NULL instead of throwing an error:

    https://github.com/debezium/debezium/blob/672f9588531be5c5426cef10ee733878174afada/debezium-connector-mysql/src/main/java/io/debezium/connector/mysql/MysqlBinaryProtocolFieldReader.java#L33

    I need your help, Thanks!

    2 replies
    rununrath
    @rununrath
    Hi, I'm using the Debezium connector with an Oracle database, but Debezium brought Oracle down because the archive log filled up. How can I prevent the archive log from filling up in the Oracle database?
    10 replies
    Shyam
    @shyam12860
    Hello! I am looking into building a change data capture pipeline for our monolith Postgres database, using debezium. Since logical replication isn't available on standby machines, I am trying to estimate the impact of using debezium on the primary postgres instance. I am trying to estimate the impact of changing the wal_level from hot_standby to logical, and the impact of reading the wal files using pgoutput. Any benchmark tests that exist, or relevant reading someone can point me to?
    3 replies
    Barton Petersen
    @bpetersen
    How does Debezium know what avro schema it should use to write the message? I suppose it would query schema registry for the schema, but how does it do this?
    Holger Brandl
    @holgerbrandl
    Hi there, I'm just getting started with an EmbeddedEngine Debezium setup for a Postgres database. It works when using the debezium/postgres image, but fails for other Postgres images such as postgres:11.12 or timescale/timescaledb:latest-pg11 with the error Creation of replication slot failed. Not sure why, because with 11 I seem to match the requirements for the pgoutput mode. Is there a dedicated/hidden setup procedure to prepare a Postgres instance for CDC with Debezium?
    5 replies
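
    A likely difference, for what it's worth: the debezium/postgres images ship with wal_level=logical preconfigured, while stock Postgres images default to wal_level=replica, and creating a pgoutput replication slot requires logical. A Compose-style sketch of forcing it on a stock image:

    postgres:
      image: postgres:11.12
      # Assumption: the stock image runs with wal_level=replica by default;
      # logical decoding (pgoutput) needs wal_level=logical.
      command: ["postgres", "-c", "wal_level=logical"]
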
    sandeeptecnotree
    @sandeeptecnotree
    Hi,
    I am getting this error while using a Debezium connector:
    Failed to load offset for file snapshot_offset.properties
    RadioGuy
    @RadioGuy
    Hi,
    I am trying to use an incremental snapshot with Debezium SQL Server.
    I am not getting any snapshot data in my Kafka topic, though I can see Debezium updating the signal table.
    Any suggestion what I am missing here?
    1 reply
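
    For reference, incremental snapshots in 1.6 need two pieces: the connector must point at the signal table via the signal.data.collection property, and a signal row of type execute-snapshot triggers the snapshot. A sketch with illustrative table and collection names:

    -- Assumes signal.data.collection is set to the fully qualified signal table,
    -- e.g. "testDB.dbo.debezium_signal" for SQL Server.
    INSERT INTO dbo.debezium_signal (id, type, data)
    VALUES ('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["dbo.my_table"]}');
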
    Krishna Kumar Kedia
    @krishna-dunzo
    Hi,
    I am using the MongoDB connector for Debezium. For update events, can we send the complete state after the update instead of a patch, like the after field in create events?
    1 reply
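
    Possibly relevant, depending on connector version and MongoDB setup: the MongoDB connector documents a capture.mode option, and its change-streams variants can include the full document on updates instead of a patch. A sketch:

    # Assumption: requires MongoDB change streams support and a connector
    # version (1.5+) that documents change_streams_update_full.
    capture.mode: "change_streams_update_full"
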
    dbzusrsri
    @dbzusrsri

    Hello there, I am new to Debezium. I am working on a POC for the Debezium Oracle connector and was able to successfully see the initial snapshot on the Kafka topic. However, none of the changes made after that were replicated. We see the below error in the log file.

    ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
    Caused by: java.lang.RuntimeException: Failed to resolve Oracle database version

    I think the above error is misleading, as we were able to see the connection from Debezium to the database, and the initial snapshot information (CREATE DDL and the rows present in the table) was seen on the Kafka topics. I appreciate your help in this regard.

    20 replies
    mcsgroi-hv
    @mcsgroi-hv
    Hello,
    We have Embedded Debezium set up in a program. It seems to be working correctly on 1.4.2, but when we try to upgrade to 1.6.0, or even 1.5.0 for that matter, it just says no records available yet, sleeping a bit.... We have stripped down our configuration properties to the minimum for the sake of testing and there appears to be no change. It does seem to log that a change is found in the table "Relation '16793' resolved to table 'public.category'", but as soon as it gets there it just goes back to saying it can't find more records and it doesn't process anything. Please advise.
    1 reply
    Sukant Garg
    @gargsms
    Tasks for PostgreSQL keep dying with the warnings Refresh of public.pg_temp_19568 was requested but the table no longer exists and cannot load schema for table 'public.pg_temp_19568'.
    Any clue how to fix these?
    2 replies
    suxm-git
    @suxm-git
    Hello, why does the connector throw an exception on startup after I deleted dbhistory.dat but did not delete offset.dat?
    14 replies
    rununrath
    @rununrath
    Hi, I used Debezium with Oracle but found this error:
    ERROR WorkerSourceTask{id=source-oracle-debezium10-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) [task-thread-source-oracle-debezium10-0]
    org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
    at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
    at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:208)
    ...s.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
    Caused by: java.sql.SQLException: ORA-06550: line 1, column 39:
    PLS-00201: identifier 'DBMS_LOGMNR_D.STORE_IN_REDO_LOGS' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:628)
    at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:562)
    ...
    at io.debezium.connector.oracle.logminer.LogMinerHelper.buildDataDictionary(LogMinerHelper.java:97)
    at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.initializeRedoLogsForMining(LogMinerStreamingChangeEventSource.java:234)
    at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:137)
    ... 7 more
    5 replies
    Shlomi Lanton
    @shlomiLan

    Hey all, I'm using Debezium with Postgres. What should I expect in the case of table schema updates:

    1. Change column type
    2. Add column
    3. Drop column

    Does Postgres rewrite the entire table, so that I should get new messages for each row? Or will I only get the updated schema when new data is inserted?

    Thanks

    3 replies
    Jeon ChangMin
    @rafael81
    Hi, Can I re-open this issue?
    https://issues.redhat.com/browse/DBZ-248
    1 reply
    Song Gao
    @sgao103
    Does Debezium work with MariaDB GTIDs? I'm assuming not, due to the different formats? Couldn't find much on this in the docs
    1 reply
    adrianriesen
    @adrianriesen

    Hi

    I'm currently using the Debezium Quarkus extension, together with the following connector configuration:

    {
      "name": "db-connector",
      "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "tasks.max": "1",
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "postgres",
        "database.password": "secret",
        "database.dbname": "db",
        "database.server.name": "localhost",
        "table.include.list": "db.outboxevent",
        "schema.whitelist": "db",
        "tombstones.on.delete": "false",
        "transforms": "outbox",
        "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
        "transforms.outbox.route.topic.replacement": "${routedByValue}.events",
        "transforms.outbox.table.field.event.timestamp": "timestamp"
      }
    }

    I also use the latest example-postgres version. The following YAML is my Docker Compose config:

    version: '2'
    services:
      zookeeper:
        container_name: zookeeper
        image: debezium/zookeeper:1.6
        ports:
          - 2181:2181
          - 2888:2888
          - 3888:3888
      kafka:
        container_name: kafka
        image: debezium/kafka:1.6
        ports:
          - 9092:9092
        links:
          - zookeeper
        environment:
          ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://0.0.0.0:9092
          KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://localhost:9092
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
          KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      postgres:
        container_name: postgres
        image: debezium/example-postgres:latest
        ports:
          - 5432:5432
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=secret
      connect:
        container_name: connect
        image: debezium/connect:1.6
        ports:
          - 8083:8083
        links:
          - kafka
          - postgres
        environment:
          - BOOTSTRAP_SERVERS=kafka:19092
          - GROUP_ID=1
          - CONFIG_STORAGE_TOPIC=my_connect_configs
          - OFFSET_STORAGE_TOPIC=my_connect_offsets
          - STATUS_STORAGE_TOPIC=my_connect_statuses

    My problem is that I see the outbox events in the DB and everything seems connected, but changes aren't published to Kafka. Does anyone spot a problem with the config?

    12 replies
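
    One cross-check against the EventRouter config above: by default the SMT routes by an aggregatetype column (that is what ${routedByValue} resolves to) and reads the documented default columns sketched below. The schema name and the custom timestamp column are taken from the config; everything else is the documented default shape. If the table actually lives in another schema (for example public), table.include.list set to db.outboxevent would not match it.

    -- Sketch of the outbox table layout EventRouter expects by default.
    CREATE TABLE db.outboxevent (
        id            UUID PRIMARY KEY,       -- event id
        aggregatetype VARCHAR(255) NOT NULL,  -- routing value for ${routedByValue}
        aggregateid   VARCHAR(255) NOT NULL,  -- becomes the Kafka message key
        type          VARCHAR(255) NOT NULL,  -- event type
        payload       JSONB,                  -- message value
        "timestamp"   TIMESTAMP               -- mapped via table.field.event.timestamp
    );
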
    mcsgroi-hv
    @mcsgroi-hv

    Hello,
    We have Embedded Debezium set up in a program. It seems to be working correctly on 1.4.2, but when we try to upgrade to 1.6.0, or even 1.5.0 for that matter, it just says no records available yet, sleeping a bit.... We have stripped down our configuration properties to the minimum for the sake of testing and there appears to be no change. It does seem to log that a change is found in the table "Relation '16793' resolved to table 'public.category'", but as soon as it gets there it just goes back to saying it can't find more records and it doesn't process anything. Please advise.

    Hello, I wanted to update this and see if anyone had any ideas. It appears that on launch we see an exception:

    org.apache.kafka.connect.errors.RetriableException: An exception occurred in the change event producer. This connector will be restarted.
        at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:38)
        at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:153)
        at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:38)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:159)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:122)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
    Caused by: org.postgresql.util.PSQLException: Database connection failed when writing to copy
        at org.postgresql.core.v3.QueryExecutorImpl.flushCopy(QueryExecutorImpl.java:1052)
        at org.postgresql.core.v3.CopyDualImpl.flushCopy(CopyDualImpl.java:23)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.updateStatusInternal(V3PGReplicationStream.java:193)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.timeUpdateStatus(V3PGReplicationStream.java:184)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:126)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.readPending(V3PGReplicationStream.java:80)
        at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:473)
        at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.searchWalPosition(PostgresStreamingChangeEventSource.java:274)
        at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:134)
        ... 8 common frames omitted
    Caused by: java.net.SocketException: Broken pipe (Write failed)
        at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.base/java.net.SocketOutputStream.socketWrite(Unknown Source)
        at java.base/java.net.SocketOutputStream.write(Unknown Source)
        at java.base/java.io.BufferedOutputStream.flushBuffer(Unknown Source)
        at java.base/java.io.BufferedOutputStream.flush(Unknown Source)
        at org.postgresql.core.PGStream.flush(PGStream.java:554)
        at org.postgresql.core.v3.QueryExecutorImpl.flushCopy(QueryExecutorImpl.java:1050)
        ... 16 common frames omitted

    It does not restart as it claims it will. On our end, there doesn't appear to be anything blatantly wrong with the infrastructure. We believe all of the Postgres configuration information is correct as far as we can tell (cloudsql.enable_pglogical and cloudsql.logical_decoding are both set to on).

    4 replies
    Phil Rzewski
    @philrz
    Hi there. Debezium newbie here. I've run through the tutorial at https://debezium.io/documentation/reference/tutorial.html a couple times, studying the output a little closer each time. What I've noticed on my most recent pass is that at the https://debezium.io/documentation/reference/tutorial.html#viewing-create-event step, the example output in the tutorial shows "op": "c" for all the events whereas my output shows "op":"r", and the text in the tutorial describes r as being "in the case of a non-initial snapshot", so I'm not sure why I'm getting r. I saw this first when I ran through the tutorial with Docker on macOS, but then to double check before coming here, I ran through it again on a fresh Linux VM and saw the same. I'd noticed some other differences when comparing to the outputs shown in the https://debezium.io/documentation/reference/tutorial.html#watching-connector-start-up step, but the tutorial did say in spots that the output would be "like" what was shown, so it's only when I saw the op values being completely different that I got concerned. Can someone help shed light on if I'm doing something wrong, or if the tutorial is maybe just out of date? I'm happy to provide output dumps or whatever might help.
    3 replies
    revanurupavan
    @revanurupavan

    Hello,

    I'm new to Debezium and would like to know what's the best path for my scenario. I have PostgreSQL as my primary database and I would like to stream all the changes from it to the RabbitMQ broker, where the events will be handled accordingly by the consumer and pushed to Elasticsearch. The web application has the database, RabbitMQ, and webserver containers set up and in a stable state. The important point here is that I cannot add Apache Kafka as the broker just for this purpose, as RabbitMQ will pretty much solve my need to transport the events. However, I'm not sure what's the best way to stream the DB changes to the MQ. I was wondering if Debezium Server was the way to go, but it doesn't have support for a RabbitMQ sink (maybe create my own sink?). If not, please advise on other services that might be a better fit for this use case.
    Thanks!

    3 replies
    buntyk91
    @buntyk91

    Hi Experts, I am having trouble with the date, datetime, and timestamp columns in the MySQL connector. I am pushing data in the formats below for the three columns. The timestamp value is correct, but the other date columns are converted into an int data type.

    Column type    Value passed                 Received in Kafka
    date           2021-07-20                   18828
    datetime       2018-09-08 17:51:04.007      1536429064000

    Expected result:

    date           2021-07-20                   2021-07-20
    datetime       2018-09-08 17:51:04.007      2018-09-08 17:51:04.007

    I should get the same data that is present in MySQL into Kafka, as I make partitions based on the date in Hive tables after consuming the data from Kafka.

    Config I am using:

    apiVersion: "kafka.strimzi.io/v1beta2"
    kind: "KafkaConnector"
    metadata:
      name: "inventory-connector"
      namespace: abc
      labels:
        strimzi.io/cluster: cluster-connect
    spec:
      class: io.debezium.connector.mysql.MySqlConnector
      tasksMax: 1
      config:
        database.hostname: 10.0.0.0
        database.port: "3306"
        database.user: "mysqluser"
        database.password: "**"
        database.server.id: "12345678"
        database.server.name: "ABC"
        database.whitelist: "inventory,integrationtesting"
        table.include.list: "inventory.*,integrationtesting.dummy"
        database.history.kafka.bootstrap.servers: "bootstrap:9092"
        database.history.kafka.topic: "schema-changes.inventory"
        include.schema.changes: "true"
        key.converter.schema.registry.url: "http://10.0.0.0:8081"
        value.converter.schema.registry.url: "http://10.0.0.0:8081"
        key.converter: "io.confluent.connect.avro.AvroConverter"
        value.converter: "io.confluent.connect.avro.AvroConverter"
        transforms: "unwrap"
        transforms.unwrap.type: "io.debezium.transforms.ExtractNewRecordState"
        transforms.unwrap.add.fields: "op,ts_ms,source.ts_ms,source.connector,source.snapshot"
        transforms.unwrap.drop.tombstones: "true"
        transforms.unwrap.delete.handling.mode: "rewrite"

    5 replies
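
    On the numbers above: 18828 is the date encoded as days since the Unix epoch (1970-01-01 plus 18828 days is 2021-07-20) and 1536429064000 is epoch milliseconds, which is how Debezium's default temporal modes encode these columns. One way to get strings instead is Kafka Connect's built-in TimestampConverter SMT; a sketch for the date column, with a hypothetical field name:

    # Assumption: time.precision.mode=connect makes the connector emit Kafka
    # Connect's built-in date/timestamp logical types, which the SMT understands.
    time.precision.mode: "connect"
    transforms: "unwrap,convertDate"
    transforms.convertDate.type: "org.apache.kafka.connect.transforms.TimestampConverter$Value"
    # Hypothetical column name; one TimestampConverter instance is needed per field.
    transforms.convertDate.field: "my_date_col"
    transforms.convertDate.target.type: "string"
    transforms.convertDate.format: "yyyy-MM-dd"
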
    VerA 🏳️‍🌈
    @veramone_twitter
    Hi, am I correct in seeing that, unlike the older table.whitelist, table.include.list no longer allows for regular expressions?
    2 replies
    Manjunath Adisesha
    @manjunath.adisesha:matrix.org
    [m]
    For the Cassandra connector, can we configure the data folder path instead of the connector picking it up from the cassandra.yaml file? I am using a Cassandra Docker image and I have defined aliases for the conf and data folders in docker-compose.yaml.
    rununrath
    @rununrath
    Hi, why can't Debezium be used with Oracle 10g?
    What if LogMiner has been manually configured on Oracle 10g by a DBA?
    3 replies
    yordanstoichkov
    @yordanstoichkov
    Hello,
    I saw you recently committed table partition awareness for the MySQL connector.
    Is it expected to be released soon?
    yonghaozhang
    @yonghaozhang
    Hi Team, I see the following problem with Debezium 1.4.1.Final: org.apache.kafka.connect.errors.ConnectException: Timestamp format must be yyyy-mm-dd hh:mm:ss[.fffffffff]