    Albert Bikeev
    @kell18

    Hi people!
    Is it possible to receive only DDL updates from Debezium? I.e., discard the <dbserver>.<specificTableDML> Kafka topics and not waste resources on them.

    I hadn't seen the Gitter chat, so I posted a question on SO: https://stackoverflow.com/questions/64392897/is-it-possible-to-receive-only-ddl-updates-from-debezium

    10 replies
    YAK_1979
    @1979Yak_twitter
    this is not in the tutorial yet, but is this how you register the oracle/logminer connector?
    curl -i -X PUT -H "Accept:application/json" \
        -H  "Content-Type:application/json" http://localhost:8083/connectors/source-oracle-dbz-logminer/config \
        -d '{
            "connector.class": "io.debezium.connector.oracle.OracleConnector",
            "tasks.max" : "1",
            "database.server.name" : "server1",
            "database.hostname" : "localhost",
            "database.port" : "1521",
            "database.user" : "c##logminer",
            "database.password" : "ls",
            "database.dbname" : "ORCLCDB",
            "database.pdb.name" : "ORCLPDB1",
            "database.connection.adapter": "logminer"
            "database.history.kafka.bootstrap.servers" : "kafka:9092",
            "database.history.kafka.topic": "schema-changes.inventory"
            }'
    1 reply
    when using this, I get Caused by: java.sql.SQLException: Stream has already been closed
    Chris Cranford
    @Naros
    In short, the "stream already closed" error is caused by having a default value of type NUMERIC (LONG/LONGRAW) on a table. As pointed out in DBZ-2624, there is a workaround: set oracle.jdbc.useFetchSizeWithLongColumn=true. But there are other outstanding bugs with the LogMiner implementation you might hit as well.
    I'm hoping that when we do 1.3.1.Final and 1.4.0.Alpha1 in the coming weeks, the implementation should be more stable.
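    For reference, a minimal sketch of applying that oracle.jdbc.useFetchSizeWithLongColumn workaround, assuming the Connect worker is started via connect-distributed.sh on a plain host (the startup command and paths are assumptions about the setup; KAFKA_OPTS is the standard hook read by Kafka's launch scripts):
        # Pass the Oracle JDBC workaround from DBZ-2624 to the worker JVM as a system property.
        export KAFKA_OPTS="-Doracle.jdbc.useFetchSizeWithLongColumn=true"
        ./bin/connect-distributed.sh config/connect-distributed.properties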
    ruslan
    @unoexperto
    Hi everyone! I'm trying to use the pgoutput plugin for getting data changes from PostgreSQL, and for some reason I see only BEGIN and COMMIT messages in the replication slot. Could you please suggest what I'm missing?
    Chris Cranford
    @Naros
    @unoexperto Can you make sure that the REPLICA IDENTITY on the tables you're monitoring is set to FULL?
    ruslan
    @unoexperto

    @Naros Thank you. I didn't know about this property. I tried to alter it for one table like this

    alter table customer_log replica identity full;

    But UPDATE query still results only in BEGIN and COMMIT events :(

    I figured it out. I thought create publication pub1 creates publication for all tables. Turned out it has to be create publication pub1 for all tables. Sorry for false alarm!
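    For anyone hitting the same thing: the fix ruslan describes is just the FOR ALL TABLES clause on the publication. A minimal sketch, assuming the statements are run through psql against the monitored database (the database name is hypothetical):
        # "CREATE PUBLICATION pub1;" alone creates an empty publication, which is why
        # only BEGIN/COMMIT showed up; the connector needs the tables included.
        psql -d mydb -c "DROP PUBLICATION IF EXISTS pub1;"
        psql -d mydb -c "CREATE PUBLICATION pub1 FOR ALL TABLES;"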
    Chris Cranford
    @Naros
    @unoexperto np, glad you got it sorted out.
    Rap70r
    @Rap70r
    Hello, how can I stop a running kafka connect worker (connect-distributed.sh)?
    2 replies
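    The replies aren't expanded here; for what it's worth, there is no dedicated stop script for connect-distributed.sh, so the usual approach is to send the worker JVM a SIGTERM (or stop the container it runs in). A sketch, assuming a plain host install:
        # The worker's main class is org.apache.kafka.connect.cli.ConnectDistributed;
        # SIGTERM triggers a graceful shutdown of its connectors and tasks.
        kill "$(pgrep -f org.apache.kafka.connect.cli.ConnectDistributed)"
        # docker stop <container>   # equivalent when the worker runs in a container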
    Idrees Mohammed
    @imohammed-pt_gitlab
    Hello all, can anyone help me with this issue I'm facing with the SQL connector using Debezium?
    org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
    connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
    connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:290)
    connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:316)
    connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:240)
    connect_1 | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    connect_1 | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    connect_1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    connect_1 | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    connect_1 | at java.base/java.lang.Thread.run(Thread.java:834)
    connect_1 | Caused by: org.apache.kafka.connect.errors.DataException: BigDecimal has mismatching scale value for given Decimal schema
    connect_1 | at org.apache.kafka.connect.data.Decimal.fromLogical(Decimal.java:68)
    connect_1 | at org.apache.kafka.connect.json.JsonConverter$13.toJson(JsonConverter.java:206)
    connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:606)
    connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:693)
    connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:693)
    connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJsonWithEnvelope(JsonConverter.java:581)
    connect_1 | at org.apache.kafka.connect.json.JsonConverter.fromConnectData(JsonConverter.java:335)
    connect_1 | at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:62)
    connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:290)
    connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
    connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
    connect_1 | ... 11 more
    If anyone can help with this case, please respond. This did not happen in UAT/testing, but once the services launched in production I started seeing this issue.
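    No answer is captured here. The failure comes from Connect's Decimal logical type check in JsonConverter; one documented Debezium option that avoids that logical type entirely is decimal.handling.mode, though whether it is the right fix for this particular case is an assumption. A sketch (the connector name and class are hypothetical, and a PUT replaces the whole config, so the existing settings must be included too):
        curl -i -X PUT -H "Content-Type:application/json" \
            http://localhost:8083/connectors/my-sqlserver-connector/config \
            -d '{
                "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
                "decimal.handling.mode": "string"
                }'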
    Stefan Cardenas
    @S-Cardenas

    I’m seeing the following error consistently across multiple environments while connecting a Debezium Postgres Connector to an RDS Aurora Postgres Cluster. Any idea what could be causing the database connection to fail when reading from copy?

    2020-10-16 14:10:35,312 INFO Postgres|Replicate|postgres-connector-task Searching for WAL resume position [io.debezium.connector.postgresql.PostgresStreamingChangeEventSource]
    2020-10-16 14:10:35,312 INFO Postgres|Replicate|postgres-connector-task First LSN ‘LSN{D/497826A8}’ received [io.debezium.connector.postgresql.connection.WalPositionLocator]
    2020-10-16 14:10:39,480 ERROR Postgres|Replicate|postgres-connector-task Producer failure [io.debezium.pipeline.ErrorHandler]
    org.postgresql.util.PSQLException: Database connection failed when reading from copy
        at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1102)
        at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:42)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:158)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:123)
        at org.postgresql.core.v3.replication.V3PGReplicationStream.readPending(V3PGReplicationStream.java:80)
        at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:460)
        at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.searchWalPosition(PostgresStreamingChangeEventSource.java:253)
        at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:125)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:108)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
    Caused by: java.net.SocketException: Socket is closed
        at java.base/java.net.Socket.setSoTimeout(Socket.java:1155)
        at java.base/sun.security.ssl.BaseSSLSocketImpl.setSoTimeout(BaseSSLSocketImpl.java:639)
        at java.base/sun.security.ssl.SSLSocketImpl.setSoTimeout(SSLSocketImpl.java:73)
        at org.postgresql.core.PGStream.hasMessagePending(PGStream.java:148)
        at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1144)
        at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1100)
        … 13 more

    3 replies
    biubiubiu
    @amazingSaltFish
    Why does my Debezium log look like this? It seems like it can't capture data from the binlog.
    [2020-10-17 10:33:22,093] INFO Creating thread debezium-mysqlconnector-binlog-client (io.debezium.util.Threads:287)
    [2020-10-17 10:33:22,228] INFO Connected to MySQL binlog at 192.168.1.10:3306, starting at GTIDs ac6d9646-a770-ee15-5378-b5bd11154b2e:1-71442465 and binlog file 'mysql-bin.000054', pos=823855872, skipping 0 events plus 0 rows (io.debezium.connector.mysql.BinlogReader:1111)
    [2020-10-17 10:33:22,269] INFO Stopped reading binlog after 0 events, no new offset was recorded (io.debezium.connector.mysql.BinlogReader:1099)
    [2020-10-17 10:33:25,223] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:33:25,223] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:33:35,223] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:33:35,224] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:33:45,224] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:33:45,224] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:33:55,224] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:33:55,225] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:34:05,225] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:34:05,225] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:34:15,225] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:34:15,226] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:34:22,229] INFO Creating thread debezium-mysqlconnector-binlog-client (io.debezium.util.Threads:287)
    [2020-10-17 10:34:22,360] INFO Connected to MySQL binlog at 192.168.1.10:3306, starting at GTIDs ac6d9646-a770-ee15-5378-b5bd11154b2e:1-71442465 and binlog file 'mysql-bin.000054', pos=823855872, skipping 0 events plus 0 rows (io.debezium.connector.mysql.BinlogReader:1111)
    [2020-10-17 10:34:22,400] INFO Stopped reading binlog after 0 events, no new offset was recorded (io.debezium.connector.mysql.BinlogReader:1099)
    [2020-10-17 10:34:25,226] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:34:25,226] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:34:35,226] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:34:35,227] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:34:45,227] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:34:45,227] INFO WorkerSourceTask{id=uat_env-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-10-17 10:34:55,227] INFO WorkerSourceTask{id=uat_env-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-10-17 10:34:55,228] INFO WorkerSourceTask{id=uat_env-
    1 reply
    Clayton Boneli
    @claytonbonelli
    Is it possible to write comments into the JSON configuration?
    2 replies
    Ekansh Bansal
    @Ace-Bansal
    Hello folks, has someone here worked with Debezium and the BigQuery connector built by WePay? Can someone tell me whether dropping and modifying columns is allowed with that sink connector, since BigQuery doesn't allow dropping or modifying columns in an existing table? I've tested the edge case of adding columns in my database and that gets reflected in my BigQuery warehouse; however, I'm not able to delete or modify columns, since the BigQuery connector throws an error saying the provided schema isn't correct (after I drop/modify the columns).
    hanoisteve
    @hanoisteve
    I have a Debezium MySQL connector configured for outbox event routing. We're going into UAT and prod soon, so I would like to know about recommended settings for production vs. development. Is there a checklist or a top-ten list of settings for going into production? I am concerned about things getting out of sync, or a connector failing and not knowing about it in time, and about how to recover or at least reset in those situations. Is there a production article or guide?
    Let's say I back up the binlogs but something gets out of sync; I need procedures for these things in place.
    What are the top ten things to prepare for in production, from those actually running these connectors?
    Also, what about HA and a Debezium MySQL connector? Does Connect offer alerts on failures?
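    Connect does not push alerts on its own; a common pattern is to poll the worker's REST API (or its JMX metrics) from your monitoring and alert when a connector or task leaves the RUNNING state. A sketch with a hypothetical connector name:
        # Reports connector and per-task state (RUNNING, FAILED, PAUSED, ...).
        curl -s http://localhost:8083/connectors/my-outbox-connector/status
        # A failed task can be restarted in place without redeploying the connector.
        curl -s -X POST http://localhost:8083/connectors/my-outbox-connector/tasks/0/restart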
    duycuong87vn
    @duycuong87vn
    @jpechane, one question: can I set up the ElasticsearchSinkConnector without auto-creating the index when connecting to Elasticsearch v7?
    I want to use an ES index with an ingest pipeline that was created beforehand.
    Leo Alves
    @0l0r1n
    Hello everyone! I am looking into running embedded Debezium on ECS. I have a question about two specific config properties: offset.storage.file.filename and database.history.file.filename. Is it possible to use something else other than the file system? For example, DynamoDB records or S3?
    Giovanni De Stefano
    @zxxz_gitlab
    Hello all, I am new to debezium and perhaps I misunderstood some basic concepts hence I would need some guidance... Following this guide https://debezium.io/documentation/reference/1.2/development/engine.html, I successfully setup a microservice with Debezium engine embedded and Postgres connector: I can happily handle CDC from the postgres db (no Kafka, Kafka Connect, or Zookeeper processes). Now I would like to do the same for another microservice but using the sqlserver connector: here I am stuck as the connector needs database.history.kafka.bootstrap.servers and I can't find any info on what value to use when the engine is embedded. What am I missing here?
    Alok Kumar Singh
    @alok87
    With table.whitelist: "^db.estab_accreditations$,^db.estab_accreditations_published$", the first table db.estab_accreditations is getting ignored. Please suggest.
    17 replies
    Alok Kumar Singh
    @alok87
    2020-10-16 11:56:02,044 INFO 'db.estab_accreditations' is filtered out of capturing (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-ts-snapshot]
    r_mohan
    @r_mohan_twitter
    I came here because we have what seems to be a typical problem. But the patterns are obvious. Microservices send events to other microservices. When there are hundreds of millions of rows we seem to have problems with jsonb. I believe jsonb is not the only problem. Table partitioning or something like that can help. Not sure. But events are stored in the source microservice. Since it is AWS RDS it seems the WAL logs can be streamed to Kinesis.
    But that seems coupled with PostgreSQL. How is it done with pg? What if we switch to DynamoDB?
    I meant that patterns are not obvious because events are published using Spring Boot. Hard to do if we publish 3 or 4 events to AWS SQS after we persist a record.
    Giovanni De Stefano
    @zxxz_gitlab
    @jpechane Thanks! I added the two parameters and now it starts up. However, database.history.file.filename must exist upfront. I checked the code of FileDatabaseHistory and perhaps there is a bug (starting at line 77): if the history file does not exist and the parent is not null, it creates the parent directory (that doesn't seem right) and only afterwards creates the history file. I can work around it by touching the file beforehand, but I thought I'd report it here in case it's an actual bug.
    This is the code I am referring to:
    FileDatabaseHistory.java
    
    // Make sure the file exists ...
                        if (!storageExists()) {
                            // Create parent directories if we have them ...
                            if (path.getParent() != null) {
                                Files.createDirectories(path.getParent());
                            }
                            try {
                                Files.createFile(path);
                            }
                            catch (FileAlreadyExistsException e) {
                                // do nothing
                            }
                        }
    1 reply
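    Until that's confirmed as a bug, the workaround Giovanni mentions (creating the history file up front) is just a touch before starting the engine; the path below is hypothetical and should match whatever database.history.file.filename is set to:
        mkdir -p /var/lib/debezium
        touch /var/lib/debezium/dbhistory.dat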
    Albert Bikeev
    @kell18
    Hello,
    Continuing the question about "pulling only DDL statements, but not DML": do you know if, for a configuration with filter topic === <dbserver> (filter to get only DDLs), it's also possible to configure Debezium to not pull the row-based binlog from MySQL? Or maybe it wouldn't pull it anyway with such a filter?
    We're up for using your nice tool, but we're concerned about the costs for our use case, i.e. whether it would still spend resources on pulling DMLs with the mentioned filtering.
    2 replies
    I.e., is this filtering propagated to the MySQL binlog consumer? Or is it still receiving all the DML statements on the Kafka Connect side and then filtering them out?
    Chris Cranford
    @Naros
    @kell18 The binlog events will be deserialized one-by-one and processed by Debezium. What the Filter SMT does is, right before the connector emits the events to the Kafka broker, check the contents and determine whether the event should be propagated or removed from the event stream based on its configuration.
    1 reply
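    To illustrate the kind of post-read filtering Chris describes (the binlog is still consumed; records are dropped just before being sent to Kafka), here is a sketch using Kafka Connect's built-in Filter transform with a topic-name predicate. It assumes a Connect worker on Kafka 2.6+, a database.server.name of dbserver1, and a hypothetical connector name; the rest of the connector config is omitted but must be included, since a PUT replaces it:
        # Drop every record whose topic matches dbserver1.<db>.<table> (the DML topics),
        # keeping only the dbserver1 schema-change topic.
        curl -i -X PUT -H "Content-Type:application/json" \
            http://localhost:8083/connectors/my-mysql-connector/config \
            -d '{
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "transforms": "dropDml",
                "transforms.dropDml.type": "org.apache.kafka.connect.transforms.Filter",
                "transforms.dropDml.predicate": "isDmlTopic",
                "predicates": "isDmlTopic",
                "predicates.isDmlTopic.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
                "predicates.isDmlTopic.pattern": "dbserver1\\..*"
                }'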
    Ikenna Darlington Ogbajie
    @idarlington

    Hi, we have a use case that requires re-triggering snapshots. What's the recommended way to re-trigger snapshots for a redeployment?

    Our snapshots are currently set to exported for Postgres. Is changing the slot.name an option? That would mean deleting the replication slot manually.

    3 replies
    shab12br
    @sha12br
    Hi Folks,
    I have a query on the Debezium MySQL connector: what property should be added to the Debezium Kafka Connect config to read the replication stream from the latest position? By default, when I load up a connector, it reads everything.
    Could anybody please help me on this?
    Chris Cranford
    @Naros
    3 replies
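    The replies aren't expanded here; for reference, the MySQL connector option that governs this is snapshot.mode, and schema_only captures only the table structures and then streams from the current binlog position instead of reading all existing data. A sketch with a hypothetical connector name (a PUT replaces the whole config, so keep your other settings alongside it):
        curl -i -X PUT -H "Content-Type:application/json" \
            http://localhost:8083/connectors/my-mysql-connector/config \
            -d '{
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "snapshot.mode": "schema_only"
                }'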
    Clayton Boneli
    @claytonbonelli
    I'm sinking my Postgres database to BigQuery and it's working, but when I insert, update, and then delete a row in Postgres, 3 rows appear in BigQuery: one for the insert, another for the update, and the last for the delete. Could anyone tell me how to solve this problem?
    1 reply
    Mihai Bucica
    @mihai.bucica_gitlab
    Did anyone try to use Debezium Embedded with CloudEvents/Camel & RabbitMQ instead of the standard Kafka way? We are already using RabbitMQ, and having another moving part like Kafka in our environment is not easy to maintain...
    1 reply
    Sergei Morozov
    @morozov

    What's the best way to exclude certain columns in MySQL Debezium based on the column type? Specifically, I want to exclude all TEXT and BLOB columns, since they don't count toward the row size limit and thereby may produce a message larger than the broker allows. Additionally, BLOB columns seem to be not affected by column.truncate.to.length.chars.

    The SMT approach probably won't work because as far as I understand, an SMT will be given only the payload without the schema.

    The schema is generic, so excluding columns by name won't work.

    1 reply
    shab12br
    @sha12br

    Hi Folks, I have a query: I have been trying to connect to an AWS RDS read replica server from Debezium Connect, but encountered the issue below:
    INFO Failed testing connection for jdbc:mysql://hostname_read_replica:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 't2s_data_db' (io.debezium.connector.mysql.MySqlConnector:105)

    Can anybody please help me on this?

    Jiri Pechanec
    @jpechane
    @sha12br Hi, that's usually incorrect hostname/port and/or credentials
    10 replies
    Jonathan Winandy
    @ahoy-jon
    Hi all, I have a repeating problem on Debezium Oracle with missing schemas. How can I create an issue on https://issues.redhat.com/projects/DBZ? Do I just log in?
    Jiri Pechanec
    @jpechane
    @ahoy-jon Hi, yes
    shab12br
    @sha12br
    Hi Folks, can anybody help me with this issue:
    org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
        at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:265)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:319)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:247)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 2301705 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
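    No answer is captured here. The RecordTooLargeException above is about the Connect producer's max.request.size (1048576 bytes by default); it can be raised worker-wide with producer.max.request.size in the worker properties, or per connector with a producer.override.* setting, which requires connector.client.config.override.policy=All on the worker. The target topic's max.message.bytes may need a matching bump. A sketch with a hypothetical connector name (include the rest of the connector config, since a PUT replaces it):
        # Worker-wide alternative (connect-distributed.properties):
        #   producer.max.request.size=5242880
        curl -i -X PUT -H "Content-Type:application/json" \
            http://localhost:8083/connectors/my-source-connector/config \
            -d '{
                "producer.override.max.request.size": "5242880"
                }'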
    duycuong87vn
    @duycuong87vn

    Hi all, I got this error when creating a connector (1.0):

    ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,729 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,729 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,730 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,730 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,730 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,731 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,731 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,731 WARN   ||  Expected Envelope for transformation, passing it unchanged   [io.debezium.transforms.SmtManager]
    connect_1    | 2020-10-20 09:40:10,731 INFO   ||  Attempting to open connection #1 to PostgreSql   [io.confluent.connect.jdbc.util.CachedConnectionProvider]
    connect_1    | 2020-10-20 09:40:10,780 INFO   ||  JdbcDbWriter Connected   [io.confluent.connect.jdbc.sink.JdbcDbWriter]
    connect_1    | 2020-10-20 09:40:10,785 ERROR  ||  WorkerSinkTask{id=location-des-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted.   [org.apache.kafka.connect.runtime.WorkerSinkTask]
    connect_1    | java.lang.NullPointerException

    Please help me find where the problem is.

    Jonathan Winandy
    @ahoy-jon
    @jpechane thanks