    Barton Petersen
    Anyone know why an Amazon RDS WAL fills up so fast? This database is hardly used, but after a couple of days with a down connector I end up with 50 GB of disk space used by the WAL.
    [2022-04-11 09:52:04,580] INFO WorkerSourceTask{id=sysp-oracle-extract-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:510)
    [2022-04-11 09:52:14,581] INFO WorkerSourceTask{id=sysp-oracle-extract-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:510)
    Exception in thread "debezium-oracleconnector-sysp-change-event-source-coordinator" io.debezium.DebeziumException: Couldn't set processed low watermark
        at io.debezium.connector.oracle.xstream.LcrEventHandler.setWatermark(LcrEventHandler.java:325)
        at io.debezium.connector.oracle.xstream.LcrEventHandler.processLCR(LcrEventHandler.java:88)
        at oracle.streams.XStreamOut.XStreamOutReceiveLCRCallbackNative(Native Method)
        at oracle.streams.XStreamOut.receiveLCRCallback(XStreamOut.java:465)
        at io.debezium.connector.oracle.xstream.XstreamStreamingChangeEventSource.execute(XstreamStreamingChangeEventSource.java:108)
        at io.debezium.connector.oracle.xstream.XstreamStreamingChangeEventSource.execute(XstreamStreamingChangeEventSource.java:43)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:172)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:139)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:108)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: oracle.streams.StreamsException: ORA-26876: invalid processed low-watermark (current position=ffffa276b0e30000000000000000ffffa276b0e3000000000000000001; new position=0003a287896100000001000000010003a2876402000000120000000101) 
        at oracle.streams.XStreamOut.XStreamOutSetProcessedLowWatermarkNative(Native Method)
        at oracle.streams.XStreamOut.setProcessedLowWatermark(XStreamOut.java:696)
        at io.debezium.connector.oracle.xstream.LcrEventHandler.setWatermark(LcrEventHandler.java:306)
        ... 13 more
    [2022-04-11 09:52:24,581] INFO WorkerSourceTask{id=sysp-oracle-extract-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:510)
    [2022-04-11 09:52:34,582] INFO WorkerSourceTask{id=sysp-oracle-extract-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:510)
    Debezium raised an exception; the status of the Debezium task was still RUNNING, but Debezium doesn't produce messages any more.
    I hit this error once, and I don't know how to reproduce it.
    Environment: Debezium 1.8.1.Final, Oracle 11g & XStream
    ChaeHoon Lim

    I am collecting data from SQL Server through the Debezium source connector (v1.4.0).
    One day, as the CDC table was periodically locked, no more logs were loaded into CDC.
    Because of this, it was reset by disabling and re-enabling CDC.
    After that, with snapshot.mode=schema_only the connector does nothing.
    However, the connector's status is RUNNING.

    What else can I check when nothing is running?

    org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
    at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:366)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.lambda$execute$25(MySqlStreamingChangeEventSource.java:855)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1125)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:973)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:599)
    at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:857)
    at java.lang.Thread.run(Thread.java:748)
    Caused by: io.debezium.DebeziumException: Error processing binlog event
    ... 7 more
    Caused by: io.debezium.DebeziumException: org.apache.kafka.connect.errors.SchemaBuilderException: Invalid default value
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.lambda$handleQueryEvent$2(MySqlStreamingChangeEventSource.java:587)
    at io.debezium.pipeline.EventDispatcher.dispatchSchemaChangeEvent(EventDispatcher.java:305)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleQueryEvent(MySqlStreamingChangeEventSource.java:582)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.lambda$execute$14(MySqlStreamingChangeEventSource.java:827)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:349)
    ... 6 more
    Caused by: org.apache.kafka.connect.errors.SchemaBuilderException: Invalid default value
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:131)
    at io.debezium.relational.TableSchemaBuilder.addField(TableSchemaBuilder.java:374)
    at io.debezium.relational.TableSchemaBuilder.lambda$create$2(TableSchemaBuilder.java:119)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
    at io.debezium.relational.TableSchemaBuilder.create(TableSchemaBuilder.java:117)
    at io.debezium.relational.RelationalDatabaseSchema.buildAndRegisterSchema(RelationalDatabaseSchema.java:135)
    at io.debezium.connector.mysql.MySqlDatabaseSchema.lambda$applySchemaChange$2(MySqlDatabaseSchema.java:171)
    at java.lang.Iterable.forEach(Iterable.java:75)
    at io.debezium.connector.mysql.MySqlDatabaseSchema.applySchemaChange(MySqlDatabaseSchema.java:171)
    at io.debezium.pipeline.EventDispatcher$SchemaChangeEventReceiver.schemaChangeEvent(EventDispatcher.java:539)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.lambda$handleQueryEvent$2(MySqlStreamingChangeEventSource.java:584)
    ... 10 more
    Caused by: org.apache.kafka.connect.errors.DataException: Invalid Java object for schema type INT64: class java.lang.String for field: "null"
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:245)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:213)
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:129)
    ... 28 more
    I don't know how to solve this problem. I use 1.7.0.Final; can you give me some advice?
    I have the same question: when I add a table to the Oracle connector's table.include.list, then change the db_history topic and restart, it goes missing.
    Anuraag Singh

    Hi All,

    We are facing a problem while inserting MongoDB records into BigQuery via Debezium. The order of fields in the JSON documents can differ, e.g. {"fields1": 1, "fields": {"field2": 2, "field3": "test"}} vs. {"fields1": 1, "fields": {"field3": "test", "field2": 2}}.
    As you can see, the nested object has a different field order.

    When this happens, the records reach the BigQuery merge tables but fail while being merged and inserted into the final table because of the change in field order.

    Please help me if you have faced this issue before.

    Hi, I'm trying to hit the "pause" API endpoint to pause the connector, but I'm getting a 405:
    curl -s XPUT "http://<remote_host>:8083/connectors/<connector_name>/pause"
    {"error_code":405,"message":"HTTP 405 Method Not Allowed"}
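A guess based purely on the pasted command: the `-X` flag is missing its dash, so curl treats `XPUT` as an extra URL and sends a plain GET to /pause, which only accepts PUT, hence the 405. A corrected sketch (host and connector name are placeholders):

```shell
# PUT with an empty body pauses the connector; Kafka Connect answers 202 Accepted.
curl -s -i -X PUT "http://<remote_host>:8083/connectors/<connector_name>/pause"

# Verify the new state afterwards:
curl -s "http://<remote_host>:8083/connectors/<connector_name>/status"
```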
    Hi all, I have an issue with the PostgreSQL connector. We have specified multiple tables in table.exclude.list, but that parameter does not work: when I consume the topic, the tables still appear. Can someone please help?
    Nicolas Garcia

    Hi team, I had an outage with Debezium that took a long time to fix, and I hit the problem described in the documentation:
    Debezium needs a PostgreSQL’s WAL to be kept during Debezium outages. If your WAL retention is too small and outages too long then Debezium will not be able to recover after restart as it will miss part of the data changes. The usual indicator is an error similar to this thrown during the startup: ERROR: requested WAL segment 000000010000000000000001 has already been removed. When this happens then it is necessary to re-execute the snapshot of the database. We also recommend to set parameter wal_keep_segments = 0. Please follow PostgreSQL official documentation for fine-tuning of WAL retention.

    My question is: how can I re-execute the snapshot of the database? I have tried several options and changed the snapshot.mode, but I always receive the same error: "SQLException: ERROR: requested WAL segment 000000010000082D0000000D has already been removed". Can anyone help me please?
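For what it's worth, a sketch rather than a full runbook: changing snapshot.mode alone doesn't help, because the connector still finds its previously committed offsets and tries to resume from the WAL position that no longer exists. The snapshot re-runs only once those stored offsets are out of the picture, e.g. by registering the connector under a new name (which gets a fresh offset entry) and pointing it at a fresh replication slot; the name and slot below are illustrative:

```json
{
  "name": "pg-connector-v2",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "snapshot.mode": "initial",
    "slot.name": "debezium_v2"
  }
}
```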

    Artsiom Yudovin
    Hi, could someone help? I would like to detect that Debezium has finished snapshot loading. What options do I have?
    Hi guys, I receive JdbcConnectionException: ERROR: permission denied
    What am I doing wrong?
    vaibhav pandey
    How can we generate ts_usec in place of ts_ms in the source object? Does anyone have any idea?
    Ronaldo Lanhellas

    Hello guys, I'm using Debezium Oracle v1.9. My connector is running normally but with the following status:

      "connector": {
            "state": "RUNNING",
            "worker_id": "null:-1"
      }

    Is worker_id null normal?

    Hi everyone. How can I overcome this error:
    com.github.shyiko.mysql.binlog.network.ServerException: Client requested master to start replication from impossible position; the first event 'mysql-bin.000001' at 43109, the last event read from 'mysql-bin.000001' at 4
    Nhat Nguyen
    hi everyone, I am trying to deploy an Avro schema registry (using Apicurio) for serialization, but encountered this problem. Could anyone help me please? Thanks a lot
    connect      | 2022-05-16 03:33:55,790 ERROR  ||  Stopping due to error   [org.apache.kafka.connect.cli.ConnectDistributed]
    connect      | org.apache.kafka.common.config.ConfigException: Invalid value io.apicurio.registry.utils.converter.AvroConverter for configuration key.converter: Class io.apicurio.registry.utils.converter.AvroConverter could not be found.
    connect      |     at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:728)
    connect      |     at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
    connect      |     at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)
    connect      |     at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
    connect      |     at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
    connect      |     at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:385)
    connect      |     at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:379)
    connect      |     at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:93)
    connect      |     at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
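In case it helps: the class genuinely has to be on the worker's plugin path, since the Apicurio converter is not part of plain Kafka Connect. With the Debezium container images there is a documented switch for this (the image tag below is an assumption; match it to your Debezium version):

```shell
# The Debezium connect image unpacks the bundled Apicurio converter jars
# into the plugin path when this environment variable is set:
docker run -e ENABLE_APICURIO_CONVERTERS=true debezium/connect:1.9
```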
    Robert B. Hanviriyapunt

    hi everyone, i'm getting the following error with a MySQL connector

    Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group.

    has anyone seen this before? can this be the cause of my MySQL connector suddenly not working?

    vaibhav pandey
    Is there any property to change the key type, e.g. from text (Struct{id=1881693}) to JSON ({id=1881693})?

    Hi all, I'm using the Debezium Oracle source connector with Avro. The goal is to fetch the data and schema and move them to another Oracle DB with exactly the same schema.
    Unfortunately I have a special character in a column name and Avro doesn't like it.
    I tried this transform:

        "transforms": "RenameField",
        "transforms.RenameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
        "transforms.RenameField.renames": "COL_VAL#:COL_VAL_",

    but it doesn't work; it still fails with org.apache.avro.SchemaParseException: Illegal character in: COL_VAL#
    Could you please advise?
    Below is an excerpt from the schema (I can attach the whole thing if needed):

    {
      "connect.name": "my_topic.Envelope",
      "fields": [
        {
          "default": null,
          "name": "before",
          "type": [
            "null",
            {
              "connect.name": "my_topic.Value",
              "fields": [
                { "name": "COL_VAL_1", "type": "string" },
                { "name": "COL_VAL_CNT", "type": "string" },
                { "name": "COL_VAL#", "type": "string" }
              ],
              "name": "Value",
              "type": "record"
            }
          ]
        },
        {
          "default": null,
          "name": "after",
          "type": [ "null", "Value" ]
        },
        {
          "name": "source",
          "type": {
            "connect.name": "io.debezium.connector.oracle.Source",
            "fields": [ … ],
            "name": "Source",
            "namespace": "io.debezium.connector.oracle",
            "type": "record"
          }
        },
        { "name": "op", "type": "string" },
        {
          "default": null,
          "name": "ts_ms",
          "type": [ "null", "long" ]
        },
        {
          "default": null,
          "name": "transaction",
          "type": [
            "null",
            {
              "fields": [
                { "name": "id", "type": "string" },
                { "name": "total_order", "type": "long" },
                { "name": "data_collection_order", "type": "long" }
              ],
              "name": "ConnectDefault",
              "namespace": "io.confluent.connect.avro",
              "type": "record"
            }
          ]
        }
      ],
      "name": "Envelope",
      "namespace": "my_topic",
      "type": "record"
    }
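A hedged observation on the rename approach, since only the excerpt is visible: ReplaceField$Value renames top-level fields only, and in the Debezium envelope COL_VAL# sits inside the nested before/after structs, so the rename never matches anything. Debezium also has a sanitize.field.names connector option that replaces characters Avro rejects with underscores; enabling it on the source connector may be the simpler route:

```json
{
  "config": {
    "sanitize.field.names": "true"
  }
}
```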
    Hi guys, how do I detect when the schema has changed in a source table?
    Hi experts, I found that when I use the Debezium Pub/Sub server, the topic is NOT created automatically. Did I miss some configuration, or is this by design?
    Hi everyone, adding a column to a table works (CDC picks it up), but when I try to delete a column it says:
    Unable to find fields [SinkRecordField{schema=Schema{STRING}, name='test', isPrimaryKey=false}] among column names [id, sum_cost, transaction_ts] [io.confluent.connect.jdbc.sink.DbStructure]
    {
        "name": "jdbc-sink",
        "config": {
            "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
            "topics.regex": "sink_db.public.(.*)",
            "connection.url": "jdbc:postgresql://db_slave:5432/sink_db?user=postgresuser&password=postgrespw",
            "transforms": "unwrap",
            "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
            "transforms.unwrap.drop.tombstones": "false",
            "auto.create": "true",
            "auto.evolve": "true",
            "insert.mode": "upsert",
            "delete.enabled": "true",
            "pk.fields.regex": "(.*)id",
            "pk.mode": "record_key"
        }
    }
    {
        "name": "pg-get-data-connector",
        "config": {
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            "database.hostname": "db_master",
            "database.port": 5432,
            "database.user": "postgresuser",
            "database.password": "postgrespw",
            "database.dbname": "db_master",
            "database.server.name": "sink_db",
            "plugin.name": "pgoutput",
            "table.include.list": "public.(.*)",
            "poll.interval.ms": "1000",
            "schema.whitelist": "public"
        }
    }
    Hi everyone. I just implemented Debezium Server standalone. I want to get the logs and send them to Datadog. My first problem is: how do I get the logs for Debezium Server standalone? If we deploy Debezium on top of Kafka Connect we can get them from JMX metrics, right? Does anyone have experience dealing with Debezium Server's logs? Thanks a lot.
    Hello, I would like to use Debezium with Apache Beam and PostgreSQL. Does anyone know of a tutorial about it?
    Hi Team,
    We are using the DB2 Debezium connector and we have a CLOB column. While streaming records from the DB to a Kafka topic we are getting com.ibm.db2.jcc.am.c_@2d336dda. Can anyone help me resolve this?

    Can anyone please help me with this? I'm stuck here.


    Good afternoon Team!

    At the moment we are using Debezium MySql against Aurora.
    GTID mode in MySql is set to OFF_PERMISSIVE, so considered enabled as of this check in MySqlConnection#isGtidModeEnabled:
    return !"OFF".equalsIgnoreCase(rs.getString(2)); - so it considers anything not OFF to be ON.

    Upon restarting the process we eventually end up with the error:
    The replication sender thread cannot start in AUTO_POSITION mode: this server has GTID_MODE = OFF_PERMISSIVE instead of ON

    In other words - Debezium considers OFF_PERMISSIVE as enabled, but in order to progress it eventually (transitively, as the error comes from some sql client library) needs this to be ON.

    So basically (at least from what I can tell) we can't move forward from this point without changing the configuration on Aurora.
    If I'm wrong on this - please correct me, would be happy!

    I believe this situation could be avoided if the use of GTIDs were configurable on the Debezium side (it's an optimization, after all, not something Debezium requires in order to work).

    Has anyone encountered this issue before?


    Hi, can anyone share how Debezium maintains its Postgres connection?

    The documentation covers the CDC logic, whereas I want to know how the connection between the connector and the Postgres host is established.

    More specifically: is it a connection pool? Does the connector use a heartbeat signal to determine whether the RDS instance is up?

    Aravindan C
    Hi all, I'm using Debezium server to capture changes from PostgreSQL.
    Debezium uses PostgreSQL's logical replication slot to capture changes, and the slot already remembers the LSN up to which the connector has replicated.
    Is it mandatory to have a FileOffsetBackingStore to record the connector offsets? Is there a way to skip it and rely on PostgreSQL's data alone?
    Hi, can I listen to and capture MySQL view data through Debezium?
    Luan Chanh Tran
    Hello all, could you help me understand this part: https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-connector-is-stopped-for-a-duration?
    I stopped Debezium, inserted some data into PostgreSQL, and then restarted it, but Debezium did not catch up on the data changes and send them to Kafka. Is there any missing config in the code below?
    public io.debezium.config.Configuration postgresConnector() throws IOException {
        File offsetStorageTempFile = File.createTempFile("offsets", ".dat");
        File dbHistoryTempFile = File.createTempFile("dbhistory", ".dat");
        return io.debezium.config.Configuration.create()
                .with("name", "pg-connector")
                .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
                .with("offset.storage.file.filename", offsetStorageTempFile.getAbsolutePath())
                .with("offset.flush.interval.ms", "0")
                .with("database.hostname", postgresDbHost)
                .with("database.port", postgresDbPort)
                .with("database.user", postgresDbUsername)
                .with("database.password", postgresDbPassword)
                .with("database.dbname", postgresDbName)
                .with("database.include.list", postgresDbName)
                .with("database.server.name", "PostgreSQL")
                .with("plugin.name", "pgoutput")
                .with("table.whitelist", "public.t_se_interface,public.g_individu")
                .with("database.history", "io.debezium.relational.history.FileDatabaseHistory")
                .with("database.history.file.filename", dbHistoryTempFile.getAbsolutePath())
                .build();
    }
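One detail in the snippet stands out (an observation, not a confirmed diagnosis): File.createTempFile generates a fresh, randomly named file on every start, so after a restart the engine looks in a brand-new, empty offsets file and has nothing to resume from; the same applies to the history file. A minimal demonstration of why a fixed path behaves differently:

```java
import java.io.File;
import java.io.IOException;

public class OffsetFileDemo {
    // createTempFile embeds a random suffix, so every call (and hence every
    // application start) produces a different path: offsets saved before a
    // restart are never read back.
    static boolean tempPathsDiffer() throws IOException {
        File a = File.createTempFile("offsets", ".dat");
        File b = File.createTempFile("offsets", ".dat");
        return !a.getAbsolutePath().equals(b.getAbsolutePath());
    }

    public static void main(String[] args) throws IOException {
        System.out.println("temp paths differ across calls: " + tempPathsDiffer());
        // A stable, fixed location survives restarts, so the engine can find
        // its previously committed offsets again (file name is illustrative):
        File fixed = new File(System.getProperty("java.io.tmpdir"), "pg-connector-offsets.dat");
        System.out.println("stable offset file: " + fixed.getAbsolutePath());
    }
}
```

Pointing offset.storage.file.filename and database.history.file.filename at stable paths lets the connector resume from its stored position instead of starting blind.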
    {
        "name": "emp-connector",
        "config": {
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            "tasks.max": "1",
            "database.hostname": "postgres",
            "database.port": "5432",
            "database.user": "postgres",
            "database.password": "postgres",
            "database.dbname": "emp",
            "database.server.name": "localhost",
            "database.whitelist": "emp",
            "database.history.kafka.bootstrap.servers": "kafka:9092",
            "database.history.kafka.topic": "schema-changes.emp"
        }
    }
    {"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nError while validating connector config: The connection attempt failed.\nYou can also find the above list of errors at the endpoint /connector-plugins/{connectorType}/config/validate"}
    I am trying to follow this: https://hevodata.com/learn/connecting-kafka-to-postgresql/#m1 and I get this issue when I try to start the connector.
    I had a few issues with Postgres where I had to set up the environment variable. I have even reset the Postgres password; I'm just not sure what to change. Should I use an IP instead of localhost?
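If Kafka Connect from that tutorial runs in a container, one common cause of "The connection attempt failed" during validation (an assumption, since the config isn't shown) is that localhost inside the worker container refers to the container itself, not to the Postgres on the host. The value below is a placeholder for whatever actually resolves from the worker (compose service name, host IP, or host.docker.internal):

```json
{
  "config": {
    "database.hostname": "<host_reachable_from_the_connect_worker>"
  }
}
```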
    azureuser@app:~/strimzi-kafka-operator/templates$ kubectl get kctr inventory-connector -o yaml
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnector
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
      creationTimestamp: "2022-06-15T08:18:16Z"
      generation: 1
      labels:
        strimzi.io/cluster: my-connect-cluster-debezium
      name: inventory-connector
      namespace: default
      resourceVersion: "627127"
      uid: f416dbd4-6781-48f4-8b4d-6f4ae6c0d439
    spec:
      class: io.debezium.connector.mysql.MySqlConnector
      config:
        database.allowPublicKeyRetrieval: "true"
        database.history.kafka.bootstrap.servers: fqcevent.servicebus.windows.net:9093
        database.history.kafka.topic: schema-changes.inventory
        database.hostname: #########
        database.password: #########
        database.port: "3306"
        database.server.id: "1"
        database.server.name: app
        database.user: root
        database.whitelist: inventory
        include.schema.changes: "true"
      tasksMax: 1
    status:
      conditions:
      - lastTransitionTime: "2022-06-15T08:18:18.031523Z"
        message: 'GET /connectors/inventory-connector/topics returned 404 (Not Found):
          Unexpected status code'
        reason: ConnectRestException
        status: "True"
        type: NotReady
      observedGeneration: 1
      tasksMax: 1
      topics: []
    Shantnu Jain

    I have the following setup:
    Oracle -> Kafka -> PostgreSQL
    The source connector config is (excerpt):

            "include.schema.changes": "true",
            "time.precision.mode": "connect",
        }
    }

    Sink connector config is

    {
        "name": "myjdbc-sink-testdebezium",
        "config": {
            "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
            "tasks.max": "1",
            "topics.regex": "oracle19.C__DBZUSER.*",
            "connection.url": "jdbc:postgresql://",
            "dialect.name": "PostgreSqlDatabaseDialect",
            "auto.create": "true",
            "auto.evolve": "true",
            "insert.mode": "upsert",
            "delete.enabled": "true",
            "transforms": "unwrap, RemoveString, TimestampConverter",
            "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
            "transforms.unwrap.delete.handling.mode": "none",
            "transforms.RemoveString.type": "org.apache.kafka.connect.transforms.RegexRouter",
            "transforms.RemoveString.regex": "(.*)\\.C__DBZUSER\\.(.*)",
            "transforms.RemoveString.replacement": "$2",
            "transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
            "transforms.TimestampConverter.target.type": "Timestamp",
            "transforms.TimestampConverter.field": "dob",
            "pk.mode": "record_key"
        }
    }

    Now when I drop a table in Oracle I get an entry in the schema_changes topic, but the table is not dropped from PostgreSQL. I need help figuring out why the DROP is not propagated. Just FYI, all the other operations (CREATE TABLE, ALTER TABLE, INSERT, UPDATE, DELETE) work fine; only DROP is not working, and I am not getting any exception either.

    David Daniel Arch
    hi all, I've set up MSK Connect with Debezium but forgot to attach to the connector the configuration that sets key.converter and value.converter to JsonConverter. All the topics have already been populated with my Postgres data. Does anybody know if changing the converter now means I need to sync Postgres with Kafka again?
    Daan Bosch
    Is there a way to use authentication when connecting to Postgres? Setting trust in pg_hba.conf is not secure, according to this blog: https://medium.com/@lmramos.usa/debezium-cdc-postgres-c9ce4da05ce1
    Is there a way to make sure SCRAM authentication is used? Or md5?
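For reference, the connector authenticates with its database.user/database.password settings, so trust is not required; the METHOD column in pg_hba.conf decides the mechanism. A sketch forcing SCRAM for a dedicated replication user (user name and address range are illustrative, and password_encryption must already be scram-sha-256 when the password is set):

```
# TYPE  DATABASE     USER      ADDRESS      METHOD
host    all          debezium  10.0.0.0/8   scram-sha-256
host    replication  debezium  10.0.0.0/8   scram-sha-256
```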

    hello, I'm trying to set up Debezium with Oracle and I'm following this guide: https://github.com/debezium/debezium-examples/tree/main/tutorial#using-oracle

    I have the various components running (Oracle, Kafka, etc.), and when I try to register the Debezium Oracle connector I get this error:

    {"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nUnable to connect: Failed to resolve Oracle database version\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"}
    I can't find anything wrong in the configuration, and I can connect to the dockerized Oracle on localhost:1521 without problems.
    For reference, this is the config that I'm pushing to the connector:
    {
      "name": "inventory-connector",
      "config": {
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "tasks.max": "1",
        "database.server.name": "server1",
        "database.hostname": "localhost",
        "database.port": "1521",
        "database.user": "c##dbzuser",
        "database.password": "dbz",
        "database.dbname": "ORCLCDB",
        "database.pdb.name": "ORCLPDB1",
        "database.connection.adapter": "logminer",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory"
      }
    }
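One hedged suspicion, since the rest of the config matches the tutorial: "Failed to resolve Oracle database version" surfaces whenever the worker simply cannot reach the database, and the validation runs inside the Connect container, where localhost:1521 is the Connect container itself rather than the Oracle one. The value below is a placeholder for the Oracle service name from your compose file:

```json
{
  "config": {
    "database.hostname": "<oracle_compose_service_name>"
  }
}
```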