    first-better
    @first-better
    Hi, I got this WARN:
    WARN No maximum LSN recorded in the database; please ensure that the DB2 Agent is running (io.debezium.connector.db2.Db2StreamingChangeEventSource:122)
    I am sure that Db2 is running, and I only get the snapshot; the subsequent CDC captures don't emit any events
    hanoisteve
    @hanoisteve
    Hey what causes this error? org.apache.kafka.connect.errors.ConnectException: Encountered change event for table mydatabase.outbox_event whose schema isn't known to this connector
    Jiri Pechanec
    @jpechane
    @hanoisteve Check your database history topic; it usually has too small a retention, or it was cleaned
    Jiri Pechanec
    @jpechane
    @first-better Could you please check the prerequisites - https://debezium.io/documentation/reference/1.3/connectors/db2.html#db2-overview
    1 reply
    danieljheetch
    @danieljheetch

    Can anyone explain to me the exported debezium snapshot mode for the Postgres connector?

    The docs say: The connector performs a database snapshot based on the point in time when the replication slot was created. This mode is an excellent way to perform a snapshot in a lock-free way.

    What is the exact condition for performing a snapshot? Is it like this:

    • If the replication slot creation time is ~Now, then perform a snapshot
    • If the replication slot creation time is 8 months ago, don't perform a snapshot
    Chris Cranford
    @Naros
    @danieljheetch The condition for doing the snapshot is based on whether there are any offsets present.
    If there are offsets, we read them and continue streaming from those offsets.
    If there are no offsets, we perform the snapshot (unless you have configured the connector to never do a snapshot).
    An exported snapshot is basically a reference point in time at which we can query the monitored tables and get their data as it was at that point in time, without actually needing to apply table locks to guarantee the same level of consistency.
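    In configuration terms, the mode under discussion is selected as in the illustrative Postgres fragment below (hostnames and server names are placeholders, not from this thread); whether the snapshot then actually runs depends only on whether stored offsets already exist for the connector.
    {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "database.hostname": "localhost",
      "database.dbname": "mydb",
      "database.server.name": "myserver",
      "snapshot.mode": "exported"
    }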
    danieljheetch
    @danieljheetch
    @Naros Thanks Chris, that explains it :)
    Adriano W Almeida
    @AdrianoW
    Hi folks. We have set up a connector to read from a PostgreSQL database. Debezium seems to start OK (it does all 7 steps) and it streams new changes from the DB, but not the existing data. We have tried different snapshot.mode values, but still no success. Has anyone had a problem like this before? Is there something we are missing here?
    {
      "name": "rds-debezium-db-source",
      "config": {
        "connector.class": "PostgresConnector",
        "heartbeat.interval.ms": "5000",
        "heartbeat.action.query": "INSERT INTO heartbeat (ts) VALUES (NOW())",
        "database.hostname": "database",
        "database.port": 5432,
        "database.user": "debezium",
        "database.password": "the pass",
        "database.dbname": "db",
        "database.server.name": "my_server_name",
        "table.whitelist": "",
        "plugin.name": "pgoutput",
        "time.precision.mode": "connect",
        "schema.refresh.mode": "columns_diff_exclude_unchanged_toast"
      }
    }
    5 replies
    java知识仓库
    @jonlen2012
    MongoDB is used as the source. After updating a document, the whole document contents need to be output to Kafka. Which parameter should be configured?
    Chris Cranford
    @Naros
    @jonlen2012 MongoDB doesn't provide Debezium with a full document snapshot during updates; instead, what we get is a patch that contains the fields that were removed and those that were added or updated. If you require a full document snapshot, you'll need to re-construct that yourself.
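    As a rough illustration of what that patch looks like (a simplified, made-up event value; real events carry additional source metadata and the layout varies by connector version), an update that changed one field and removed another would arrive as something like:
    {
      "op": "u",
      "after": null,
      "patch": "{\"$set\": {\"status\": \"shipped\"}, \"$unset\": {\"temp_note\": true}}"
    }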
    John Martin
    @johnjmartin

    I have a few questions, feel free to direct me to another place to ask them if this isn't suitable.

    I'm running 200+ dbz mysql connectors in a production environment and I have questions about the safety of changing snapshot.mode on established, long-running connectors.

    Based on this thread, Gunnar suggests that one way to add new tables to an existing connector is to shut down the connector, purge the existing dbhistory topic, and recreate the connector in schema_only_recovery mode.

    Assuming database.history.store.only.monitored.tables.ddl: true, is there anything preventing me from moving an existing connector from snapshot.mode: when_needed, to snapshot.mode: schema_only (and then updating the table.include.list)? I've tested that this works, but I want to be aware of any known edge-cases.

    Also, if I wanted to reliably add tables to an existing connector that was started in snapshot.mode: when_needed, would it be safe to move it to schema_only, update the table.include.list, and then convert back to when_needed while sending a tombstone message to the connectors offset? I'm aware this would produce duplicates for the already snapshotted tables.
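    In configuration terms, the change being asked about is roughly the following fragment (the table names here are placeholders, not from this thread):
    {
      "snapshot.mode": "schema_only",
      "database.history.store.only.monitored.tables.ddl": "true",
      "table.include.list": "mydb.existing_table,mydb.newly_added_table"
    }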

    4 replies
    Gregorio
    @gcordova0309
    Hi there, I have a question about how the Debezium MySQL connector performs the snapshot. In the documentation I read this: "Scans the database tables and generates CREATE events on the relevant table-specific Kafka topics for each row." Does this mean that the events of the initial snapshot always have the "op" parameter with value "c"? Is there a way to configure the connector to change this value to "r", as the Postgres connector does? Thanks! @jpechane can you please help me?
    13 replies
    Vayuj Rajan
    @vyj7
    Hi all, I have a question regarding monitoring Debezium. I am using embedded Debezium in my Spring Boot application; can I monitor it? The docs only talk about the Debezium deployment that has separate components.
    Can embedded Debezium expose JMX metrics which can then be exported to Prometheus and Grafana?
    6 replies
    java知识仓库
    @jonlen2012
    @Naros How can I get and follow this patch?
    1 reply
    hanoisteve
    @hanoisteve
    @jpechane Ok, should it be a fatal error and if so what kind of retention should it have?
    1 reply
    lyq1198853167
    @lyq1198853167
    When I run Siddhi Oracle CDC, I get the warning: "After applying blacklist/whitelist filters there are no tables to monitor". I need help, friends!
    61 replies
    RVS Satyaditya
    @devopsception
    Has anyone been able to successfully deploy https://github.com/GoogleCloudPlatform/DataflowTemplates/tree/master/v2/cdc-parent ? For some reason the connector is not forwarding the database changes. Please help here, I'm quite new to this.
    Chanh Le
    @giaosudau
    Hey everyone,
    I have a requirement to encrypt/hash data for sensitive fields when reading the binlog from MySQL.
    Is there a way to do that with Debezium?
    Thank you.
    3 replies
    Dries Wambacq
    @DriesWambacq
    Hello, I've been using Debezium to connect SQL Server to Kafka. Is there any way to get rid of the before/after schema for change events? The documentation keeps mentioning that the before field is optional, but I never see where you can configure it to be removed.
    Thanks
    1 reply
    vicky
    @vkvicky3_twitter
    For every update patch (change log), is there a way to enforce capturing one more key/value, like _id, along with the payload in the case of the Debezium Mongo connector?
    Reid Thompson
    @jreidthompson_gitlab
    Hi - I have a question regarding the effect of a DB upgrade. We are currently running an old version of Debezium (0.8) with PostgreSQL 9.6. We plan to upgrade PostgreSQL from 9.6 to 11.x. Will Debezium be able to 'just continue from where it was' after the DB upgrade, or will the upgrade break something?
    Chris Cranford
    @Naros
    @jreidthompson_gitlab I'm not an expert at PostgreSQL upgrades but as long as the replication slot remains and the WAL is retained, it should be as simple as stop the connector, perform the upgrade, and restart the connector.
    Benjamin Kastelic
    @osbeorn

    Hey, anyone else having issues with Debezium and Oracle using LogMiner? I can't get it to work ... I have whitelisted the tables but none are detected. Also, I'm getting the following error:

    2020-11-19 12:39:17,879 ERROR  ||  Cannot parse statement : update "C##LOGMINER"."LOG_MINING_FLUSH" set "LAST_SCN" = '3104972' where "LAST_SCN" = '3104953';, transaction: 10000400A3020000, due to the Trying to parse a table 'ORCLPDB1.C##LOGMINER.LOG_MINING_FLUSH', which does not exist.   [io.debezium.connector.oracle.jsqlparser.SimpleDmlParser]

    EDIT: I'm using the latest debezium/connect image (nightly)

    ruettere
    @ruettere
    Hi, is it also possible to use transformations (SMT) in oracle connectors?
    1 reply
    Chris Cranford
    @Naros
    @osbeorn It looks like you may have set database.schema as c##logminer and you shouldn't do that.
    You do not want to pull any changes from the logminer tablespace; it's purely there for bookkeeping.
    Jeff Frost
    @jfrost
    We're using Debezium for CDC from Aurora PostgreSQL to Snowflake. I'm wondering what folks have done to ensure that queries they're running on the destination are transactionally consistent with the original host system. That is: if, in a transaction, I update foo and bar, how can I know that both updates are on the destination so that I can query them? I realize we have access to xmin, but I'm not sure what we would compare it to in order to know that all the data from the transaction has made it over.
    1 reply
    Chris Cranford
    @Naros
    Normally you would have some application or user schema where your monitored tables reside and you would set database.schema to that tablespace/schema instead.
    Benjamin Kastelic
    @osbeorn
    @Naros You are correct, I have database.schema set. But when I remove it I get this error:
    2020-11-19 20:42:37,645 ERROR  ||  The 'database.schema' value is invalid: The 'database.schema' be provided when using the LogMiner connection adapter   [io.debezium.connector.common.BaseSourceTask]
    @Naros nevermind, I set the database.schema to my specific user schema and now it works. Thanks!
    samyujialiu
    @samyujialiu
    Hi guys, I just tried the latest Debezium Oracle 1.4 Alpha2 jar and I still hit the same issue. Can anyone give us some ideas?
    connect | [2020-11-19 20:43:32,311] INFO [Producer clientId=connector-producer-tyest-0] Cluster ID: eML7TF9hQXqjThKwdAr20g (org.apache.kafka.clients.Metadata)
    connect | [2020-11-19 20:43:32,348] WARN Using configuration property "table.whitelist" is deprecated and will be removed in future versions. Please use "table.include.list" instead. (io.debezium.config.Configuration)
    connect | [2020-11-19 20:43:32,348] WARN Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead. (io.debezium.config.Configuration)
    connect | [2020-11-19 20:43:32,348] WARN Using configuration property "table.whitelist" is deprecated and will be removed in future versions. Please use "table.include.list" instead. (io.debezium.config.Configuration)
    connect | [2020-11-19 20:43:32,348] WARN Using configuration property "table.blacklist" is deprecated and will be removed in future versions. Please use "table.exclude.list" instead. (io.debezium.config.Configuration)
    connect | [2020-11-19 20:43:32,354] ERROR The 'database.schema' value is invalid: The 'database.schema' be provided when using the LogMiner connection adapter (io.debezium.connector.common.BaseSourceTask)
    connect | [2020-11-19 20:43:32,354] INFO WorkerSourceTask{id=tyest-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
    connect | [2020-11-19 20:43:32,354] INFO WorkerSourceTask{id=tyest-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
    connect | [2020-11-19 20:43:32,355] ERROR WorkerSourceTask{id=tyest-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
    connect | org.apache.kafka.connect.errors.ConnectException: Error configuring an instance of OracleConnectorTask; check the logs for details
    connect | at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:96)
    connect | at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:213)
    connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    connect | at java.lang.Thread.run(Thread.java:748)
    connect | [2020-11-19 20:43:32,357] ERROR WorkerSourceTask{id=tyest-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
    connect | [2020-11-19 20:43:32,357] INFO Stopping down connector (io.debezium.connector.common.BaseSourceTask)
    connect | [2020-11-19 20:43:32,360] WARN Could not stop task (org.apache.kafka.connect.runtime.WorkerSourceTask)
    connect | java.lang.NullPointerException
    connect | at io.debezium.connector.oracle.OracleConnectorTask.doStop(OracleConnectorTask.java:130)
    connect | at io.debezium.connector.common.BaseSourceTask.stop(BaseSourceTask.java:206)
    connect | at io.debezium.connector.common.BaseSourceTask.stop(BaseSourceTask.java:176)
    connect | at org.apache.kafka.connect.runtime.WorkerSourceTask.tryStop(WorkerSourceTask.java:201)
    connect | at org.apache.kafka.connect.runtime.WorkerSourceTask.close(WorkerSourceTask.java:159)
    connect | at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:163)
    connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:190)
    connect | at org.apache.kafka.connect.runtime.Worke
    Our connector configuration:
    {
      "name": "oracle-connector",
      "config": {
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "tasks.max": "1",
        "database.server.name": "server1",
        "database.hostname": "172.25.0.3",
        "database.port": "1521",
        "database.user": "c##dbzuser",
        "database.password": "dbz",
        "database.dbname": "newcdb.localdomain",
        "database.pdb.name": "newpdb1.localdomain",
        "database.out.server.name": "dbzxout",
        "database.history.kafka.bootstrap.servers": "broker:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
        "database.connection.adapter": "logminer"
      }
    }
    Benjamin Kastelic
    @osbeorn
    @samyujialiu It seems you're having a similar issue to mine. Just set database.schema to a valid application or user schema that contains your tables.
    samyujialiu
    @samyujialiu
    @osbeorn Thanks bro, can you please tell me how to do that? Through Kafka Connect, or...? I added table.white.list as debezium.customer (schema.table). It still does not work...
    Chris Cranford
    @Naros
    @samyujialiu Let's assume, for the sake of an example, that the tables you want to monitor are in myschema. You just need to update your connector's configuration to include one additional configuration option called "database.schema": "myschema".
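    Applied to the configuration posted above, that amounts to one extra entry inside "config" (myschema is just Chris's placeholder; use the schema that actually owns your tables):
    {
      "connector.class": "io.debezium.connector.oracle.OracleConnector",
      "database.connection.adapter": "logminer",
      "database.schema": "myschema"
    }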
    shab12br
    @sha12br

    Hi guys, I downloaded the example from https://github.com/debezium/debezium-examples/tree/master/kinesis and changed the pom.xml accordingly with my AWS region and MySQL database credentials. When running mvn exec:java I am getting the following error; could someone please help me?
    [WARNING]
    java.lang.ClassNotFoundException: io.debezium.examples.kinesis.ChangeDataSender
    at java.net.URLClassLoader.findClass (URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass (ClassLoader.java:418)
    at java.lang.ClassLoader.loadClass (ClassLoader.java:351)
    at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:270)
    at java.lang.Thread.run (Thread.java:748)
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 23.489 s
    [INFO] Finished at: 2020-11-20T04:15:45Z
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.6.0:java (default-cli) on project kinesis: An exception occured while executing the Java class. io.debezium.examples.kinesis.ChangeDataSender -> [Help 1]
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

    This is my Maven version:
    Apache Maven 3.6.0
    Maven home: /usr/share/maven
    Java version: 1.8.0_275, vendor: Private Build, runtime: /usr/lib/jvm/java-8-openjdk-amd64/jre
    Default locale: en, platform encoding: UTF-8
    OS name: "linux", version: "5.4.0-1025-aws", arch: "amd64", family: "unix"

    Jiri Pechanec
    @jpechane
    @sha12br Hi, did you do mvn clean install first?
    Benjamin Kastelic
    @osbeorn

    Hey! I'm using the latest Debezium version with Oracle connector and it seems that the following properties don't do anything:

    "key.converter.schemas.enable": false,
    "value.converter.schemas.enable": false

    Even with values set to false, schemas are still part of the messages.

    5 replies
    sandeepbabylon
    @sandeepbabylon
    Hello guys, in the case of MySQL connectors, when I change snapshot.mode from schema_only to when_needed, why does it take a snapshot of the whole data?
    Jiri Pechanec
    @jpechane
    @sandeepbabylon Could you share the log? It might contain an explanation why it was triggered
    java知识仓库
    @jonlen2012
    image.png
    Exception when Kafka Connect is started; why?
    Martin Sillence
    @msillence
    Hi, I am testing out using Debezium to replicate a database and I realise it would be useful to also create a ksql stream; is it possible to have both? All the examples either use Avro to replicate or the unwrap SMT to create a ksql stream.
    2 replies
    sandeepbabylon
    @sandeepbabylon
    Screenshot 2020-11-20 at 10.18.23.png
    @jpechane This is all I found, but I can try to recreate what happens if this does not help.
    Jiri Pechanec
    @jpechane
    @jonlen2012 This is a warning only and can be ignored; you need to fix your logging settings, see https://issues.apache.org/jira/browse/KAFKA-5229
    sandeepbabylon
    @sandeepbabylon
    It seems like when I updated the connector to use when_needed, it shut down and restarted, found the binlog position, but then went and took the snapshot
    Jiri Pechanec
    @jpechane
    @sandeepbabylon I need to see it from the start
    @sandeepbabylon Yes, just look at the offset: it is snapshot=true, which means the snapshot was still in progress when the connector went down
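    For reference, an offset of the kind Jiri mentions looks roughly like this for the MySQL connector (illustrative values; the exact fields vary by version), with the snapshot flag indicating that the snapshot had not finished:
    {
      "file": "mysql-bin.000003",
      "pos": 154,
      "snapshot": true
    }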