    Massimo Di Cerbo
    @IIdikII_twitter
    Hi all, I've read in the Debezium roadmap about DB2 support. What flavors are supported? Any chance for DB2 AS400 (UDB)?
    Alexander Ryzhenko
    @aryzhenko
    Hi all. My MySQL connector was working for 3 weeks without any issues, but for the last 3 days it keeps failing and I can't work out why. Can anyone help? Thanks.
    [2019-12-11 10:54:57,598] INFO WorkerSourceTask{id=mysql_connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:398)
    [2019-12-11 10:54:57,598] INFO WorkerSourceTask{id=mysql_connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:415)
    [2019-12-11 10:54:57,600] INFO WorkerSourceTask{id=mysql_connector-0} Finished commitOffsets successfully in 2 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:497)
    [2019-12-11 10:55:06,527] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,529] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,530] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,533] INFO predicate returned false; completing reader oldBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,536] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,536] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,538] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,540] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,543] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,544] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,546] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,548] INFO predicate returned false; completing reader newBinlog (io.debezium.connector.mysql.BinlogReader:339)
    [2019-12-11 10:55:06,549] INFO Discarding 0 unsent record(s) due to the connector shutting down (io.debezium.connector.mysql.BinlogReader:129)
    [2019-12-11 10:55:06,551] INFO Stopped reading binlog after 0 events, no new offset was recorded (io.debezium.connector.mysql.BinlogReader:1015)
    [2019-12-11 10:55:06,561] INFO Stopping the oldBinlog reader (io.debezium.connector.mysql.ParallelSnapshotReader:122)
    [2019-12-11 10:55:06,562] INFO Discarding 0 unsent record(s) due to the connector shutting down (io.debezium.connector.mysql.BinlogReader:129)
    [2019-12-11 10:55:06,562] INFO Stopped reading binlog after 543560 events, last recorded offset: {table_whitelist=db.t1,db.t2, ts_sec=1576061708, file=bin.000942, table_blacklist=null, pos=165956570, database_whitelist=null, database_blacklist=null, gtids=3b16c742-1183-11e8-8cb4-3497f65a102f:1-8583731083:8583731085-8632659775:8632659777-8916110701,4436d449-b25d-11e8-beb2-2c4d54466ca9:1-1977,443731d5-cf5e-11e7-9479-2c4d54466ca9:1-282778390,dfe71167-ee84-11e9-8042-107b44b03576:1-100531, row=1, server_id=267, event=2} (io.debezium.connector.mysql.BinlogReader:1013)
    [2019-12-11 10:55:06,562] INFO [Producer clientId=mysql_connector-dbhistory] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:1153)
    [2019-12-11 10:55:06,565] INFO Stopping the chained reader (io.debezium.connector.mysql.ParallelSnapshotReader:130)
    [2019-12-11 10:55:06,565] ERROR Unable to unregister the MBean 'debezium.mysql:type=connector-metrics,context=schema-history,server=prod_replica' (io.debezium.relational.history.DatabaseHistoryMetrics:65)
    [2019-12-11 10:55:06,565] INFO Transitioning from the parallelSnapshotReader reader to the reconcilingBinlogReader reader (io.debezium.connector.mysql.ChainedReader:199)
    [2019-12-11 10:55:06,565] INFO old tables leading; reading only from new tables (io.debezium.connector.mysql.ReconcilingBinlogReader:183)
    [2019-12-11 10:55:06,565] INFO Requested thread factory for connector MySqlConnector, id = prod_replica named = binlog-client (io.debezium.util.Threads:250)
    [2019-12-11 10:55:06,567] INFO WorkerSourceTask{id=mysql_connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:398)
    [2019-12-11 10:55:06,567] INFO WorkerSourceTask{id=mysql_connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:415)
    [2019-12-11 10:55:06,568] INFO WorkerSourceTask{id=mysql_connector-0} Finished commitOffsets successfully in 1 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:497)
    [2019-12-11 10:55:06,568] ERROR WorkerSourceTask{id=mysql_connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
    org.apache.kafka.connect.errors.ConnectException: Unexpected error while connecting to MySQL and looking at GTID mode:
        at io.debezium.connector.mysql.MySqlJdbcContext.isGtidModeEnabled(MySqlJdbcContext.java:175)
        at io.debezium.connector.mysql.BinlogReader.doStart(BinlogReader.java:327)
        at io.debezium.connector.mysql.AbstractReader.start(AbstractReader.java:116)
        at io.debezium.connector.mysql.ReconcilingBinlogReader.start(ReconcilingBinlogReader.java:101)
        at io.debezium.connector.mysql.ChainedReader.startNextReader(ChainedReader.java:203)
        at io.debezium.connector.mysql.ChainedReader.readerCompletedPolling(ChainedReader.java:157)
        at io.debezium.connector.mysql.ParallelSnapshotReader.completeSuccessfully(ParallelSnapshotReader.java:175)
        at io.debezium.connector.mysql.ParallelSnapshotReader.poll(ParallelSnapshotReader.java:164)
        at io.debezium.connector.mysql.ChainedReader.poll(ChainedReader.java:145)
        at io.debezium.connector.mysql.MySqlConnectorTask.poll(MySqlConnectorTask.java:416)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
    Caused by: com.mysql.cj.exceptions.CJCommunicationsException: The last packet successfully received from the server was 6,750,846 milliseconds ago.  The last packet sent successfully to the server was 6,750,846 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
        at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
        at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
        at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
        at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
        at com.mysql.cj.protocol.a.NativeProtocol.readMessage(NativeProtocol.java:562)
        at com.mysql.cj.protocol.a.NativeProtocol.checkErrorMessage(NativeProtocol.java:732)
        at com.mysql.cj.protocol.a.NativeProtocol.sendCommand(NativeProtocol.java:671)
        at com.mysql.cj.protocol.a.NativeProtocol.sendQueryPacket(NativeProtocol.java:986)
        at com.mysql.cj.protocol.a.NativeProtocol.sendQueryString(NativeProtocol.java:921)
        at com.mysql.cj.NativeSession.execSQL(NativeSession.java:1165)
        at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1186)
        ... 21 more
    Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
        at com.mysql.cj.protocol.FullReadInputStream.readFully(FullReadInputStream.java:67)
        at com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:63)
        at com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:45)
        at com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:52)
        at com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:41)
        at com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:54)
        at com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:44)
        at com.mysql.cj.protocol.a.NativeProtocol.readMessage(NativeProtocol.java:556)
        ... 27 more
    [2019-12-11 10:55:06,569] ERROR WorkerSourceTask{id=mysql_connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
    [2019-12-11 10:55:06,569] INFO Stopping MySQL connector task (io.debezium.connector.mysql.MySqlConnectorTask:430)
    [2019-12-11 10:55:06,569] INFO [Producer clientId=connector-producer-mysql_connector-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1153)
    [2019-12-11 10:55:07,601] INFO WorkerSourceTask{id=mysql_connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:398)
    [2019-12-11 10:55:07,601] INFO WorkerSourceTask{id=mysql_connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:415)
    Jiri Pechanec
    @jpechane
    @aryzhenko Hi, could you please post the rest of the stack trace?
    Alexander Ryzhenko
    @aryzhenko
    I only have this in the log files
    After a manual task restart via the REST API it works fine for a few hours
    Jiri Pechanec
    @jpechane
    @aryzhenko What is the wait_timeout configured on your MySQL server?
    Alexander Ryzhenko
    @aryzhenko
    I'll ask our devops, give me a few minutes...
    They say wait_timeout = 120
    Jiri Pechanec
    @jpechane
    @aryzhenko That's two minutes
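    For reference, checking and raising the server-side idle timeout is a plain MySQL operation; the value below is only an example, and new sessions pick up the global setting:

    -- show the current idle timeout, in seconds
    SHOW VARIABLES LIKE 'wait_timeout';
    -- raise it globally, e.g. to 8 hours
    SET GLOBAL wait_timeout = 28800;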
    Alexander Ryzhenko
    @aryzhenko
    Thanks. BTW we had network problems in our DC when it failed last time. Could that be related?
    Jiri Pechanec
    @jpechane
    @aryzhenko Yes!
    Alexander Ryzhenko
    @aryzhenko
    I didn't know about the network issues before I asked the question :)
    So when the network is down for a few seconds, the connector's task will fail and not come back up automatically? Is there an option to enable auto task restart or something like that?
    Jiri Pechanec
    @jpechane
    @aryzhenko Unfortunately no, there is an issue open in Kafka Connect for this purpose
    But it is not done yet
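    Until that lands, a common workaround is an external watchdog that polls the Connect REST API and restarts failed tasks. A minimal sketch, assuming a worker at localhost:8083 and the connector name from the logs above; the status and restart endpoints are standard Kafka Connect REST paths:

    # restart task 0 of the connector if its status reports FAILED
    curl -s http://localhost:8083/connectors/mysql_connector/status \
      | grep -q '"state":"FAILED"' \
      && curl -s -X POST http://localhost:8083/connectors/mysql_connector/tasks/0/restart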
    Alexander Ryzhenko
    @aryzhenko
    Thank you so much. You are the best!
    Jiri Pechanec
    @jpechane
    @aryzhenko If only my kids knew that ;-)
    Jos Huiting
    @jhuiting
    Hi, I was wondering what the easiest way is to add default Kafka Connect SMTs to Debezium (e.g. in a Docker image). I really like the ExtractNewRecordState transform but would like to combine it with the default ExtractTopic transform from Kafka Connect
    The functionality I'm missing now in Debezium is to determine the topic based on a field
    Jos Huiting
    @jhuiting
    I guess that copying the relevant JAR to the plugin directory should do the trick? :-) Or is this something that is not advisable?
    Chris Cranford
    @Naros
    @jhuiting If the SMT is part of Kafka Connect's runtime by default, then I would expect you simply need to add the transform declarations to the configuration and it would "just work".
    I believe the SMT you're referring to is actually part of Confluent though, right?
    Since it looks like this is something extra that you have to install separately, then yes, you're correct about manually copying the jar.
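    For illustration, the chained declaration in the connector configuration could look roughly like this; the unwrap/route aliases and the target_topic field name are placeholders, and ExtractTopic lives in Confluent's package:

    "transforms": "unwrap,route",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.route.type": "io.confluent.connect.transforms.ExtractTopic$Value",
    "transforms.route.field": "target_topic"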
    Jos Huiting
    @jhuiting
    Yeah, it's part of Confluent but not installed by default.
    Thanks, then I'll add this SMT manually to my local Docker image
    I tried without installing it, but then Kafka Connect complains that the class is missing.
    Chris Cranford
    @Naros
    @jhuiting Without looking at the source of the SMT, there may be other dependencies you might need.
    Jos Huiting
    @jhuiting
    Good suggestion, I'll check that as well if I can find the source :-)
    jgavin
    @jgavin
    Hi, was wondering about our options in the case that we want Debezium to take a fresh snapshot of only one particular table. We are using the Postgres connector and it looks as though there is an option to have Debezium take a snapshot on every restart, but we would like to avoid taking a snapshot of every table we are monitoring. The context is that we have had CDC data captured by Debezium expire out of Kafka before it could be consumed by downstream processors.
    Avinash Bhardwaj
    @avinashb98
    Hello everyone, can anyone point me to any relevant resource with information on running Kafka Connect with Debezium in production?
    Punith13
    @Punith13

    Hi Everyone, I was trying to connect to Instaclustr Kafka through my Kafka Connect and I had to load the truststore.jks into the container, so I created the Dockerfile below:

    FROM debezium/connect:0.1
    COPY truststore.jks /etc/kafka/secrets/truststore.jks
    COPY register-mysql.json .
    CMD ["curl", "-i", "-X", "POST", "-H", "\"Accept:application/json\"", "-H", "\"Content-Type:application/json\"", "http://localhost:8083/connectors/", "-d", "@register-mysql.json"]

    The docker-compose run exits with curl: (52) Empty reply from server. The BOOTSTRAP_SERVER is pointing to Kafka running in Instaclustr. Need help figuring this out.
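    Two things in that Dockerfile are likely at fault: CMD replaces the base image's default command, so the Connect worker itself never starts (hence the empty reply on port 8083), and in exec-form CMD no shell runs, so the escaped quotes are passed to curl as literal characters, corrupting the headers. A sketch of an alternative, keeping the original image tag and registering the connector from outside the container once the worker is up:

    FROM debezium/connect:0.1
    COPY truststore.jks /etc/kafka/secrets/truststore.jks
    # leave the base image's entrypoint/command alone so Kafka Connect starts

    # then, once the worker is reachable, register the connector from the host:
    # curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
    #   http://localhost:8083/connectors/ -d @register-mysql.json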

    Guy Davenport
    @daveneti
    @jgavin we had a similar use case while using Kafka as an event store from a SQL Server database with the SQL Server connector. We set the topic retention.ms to be very large instead of the default 7 days.
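    As a sketch with the stock Kafka tooling (the topic name and retention value are placeholders, and recent kafka-configs.sh versions accept --bootstrap-server):

    # keep change events for ~30 days instead of the broker default
    kafka-configs.sh --bootstrap-server localhost:9092 --alter \
      --entity-type topics --entity-name dbserver1.dbo.orders \
      --add-config retention.ms=2592000000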
    Guy Davenport
    @daveneti
    Hi, does anyone know what happens when you remove a connector (e.g. the SQL Server connector) and re-add it? Does it continue from the last change event, or does it start with a new snapshot?
    Jiri Pechanec
    @jpechane
    @avinashb98 Hi, what kind of information are you looking for?
    @daveneti Hi, unless you remove the offsets (and the database history topic), then after re-adding it, it will continue from the position where it stopped
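    To see what those stored offsets look like, you can read Connect's offsets topic; connect-offsets below is only the common default, the real name is whatever offset.storage.topic is set to in the worker config:

    kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic connect-offsets --from-beginning \
      --property print.key=true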
    Guy Davenport
    @daveneti
    @jpechane Thanks, I am going to try that.
    Avinash Bhardwaj
    @avinashb98
    @jpechane
    1. Setting up distributed connect clusters
    2. Whether to containerize or not.
    3. Monitoring
    4. Security
    5. The number of tasks/processes to deploy according to the use case.
    6. Other best-practices
    Jiri Pechanec
    @jpechane
    @avinashb98 Hi, I guess you'll need to elaborate more on those items
    Guy Davenport
    @daveneti
    Hi @jpechane, I deleted and re-created the connector to see if I could find a reason why I was getting no changes from the SQL Server connector. After re-creation I get the error [2019-12-13 03:57:01,692] ERROR WorkerSourceTask{id=e-brida-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
    [2019-12-13 03:57:01,692] INFO Connector has already been stopped (io.debezium.connector.sqlserver.SqlServerConnectorTask). The cause was com.microsoft.sqlserver.jdbc.SQLServerException: Connection reset. I think this may be because we are using a log-shipped database and it is not maintaining connections long enough.
    ekremtoprak
    @ekremtoprak
    hey @jpechane, back on the SMT to extract the WKB part of geometry. I have no idea where or how to pull this off in the ExtractNewRecordState SMT, can you point out where and how? (my programming skills are pretty basic)
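    As a rough starting point, instead of changing ExtractNewRecordState itself, a small standalone SMT can read the WKB bytes out of the geometry struct, which Debezium serializes with wkb and srid fields. Everything here apart from the Kafka Connect Transformation API and those two field names is illustrative (the geom column, class name, and header name are placeholders):

    package example;

    import java.util.Map;

    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.ConnectRecord;
    import org.apache.kafka.connect.data.Struct;
    import org.apache.kafka.connect.transforms.Transformation;

    public class ExtractWkb<R extends ConnectRecord<R>> implements Transformation<R> {

        @Override
        public R apply(R record) {
            if (!(record.value() instanceof Struct)) {
                return record;                  // skip tombstones and non-struct values
            }
            Struct value = (Struct) record.value();
            if (value.schema().field("geom") == null) {
                return record;                  // "geom" is the placeholder geometry column
            }
            Struct geometry = value.getStruct("geom");
            if (geometry != null) {
                byte[] wkb = geometry.getBytes("wkb");   // raw WKB payload of the geometry
                record.headers().addBytes("geom_wkb", wkb);
            }
            return record;
        }

        @Override
        public ConfigDef config() {
            return new ConfigDef();
        }

        @Override
        public void configure(Map<String, ?> configs) {
        }

        @Override
        public void close() {
        }
    }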
    williame
    @williame
    is there a way to get the DDL to go into the topic of the table it applies to, rather than into its own topic?
    Romain Gilles
    @romain-gilles-ultra
    Hi Debezium,
    I'm looking at the Debezium connector for MongoDB and it looks great!
    I have a question: why are you using the oplog and not the Change Streams API?
    And another one: does this connector work on MongoDB Atlas?
    Thank you for your help.
    Chris Cranford
    @Naros
    @romain-gilles-ultra The MongoDB connector is compatible with MongoDB all the way back to 3.2 and I believe the Change Streams API wasn't added until 3.6.
    Burak SARP
    @buraksarp
    is anyone using Debezium with Patroni for the Postgres connector? If so, could you share your experience?
    Tomer Shaiman
    @tshaiman
    Hi Debezium,
    thanks for the great article at https://debezium.io/blog/2017/09/25/streaming-to-another-database/
    I wonder if the demo will still run if "include.schema.changes": "true" is set?
    Tomer Shaiman
    @tshaiman
    another question: will delete statements be applied to the destination DB (I'm using the JDBC sink connector) if pk.mode is record_value?
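    For context, recent Confluent JDBC sink versions apply deletes only when the key carries the primary key, since a Debezium delete arrives as a tombstone with no value for record_value mode to read. The relevant properties, as documented by Confluent, are:

    "pk.mode": "record_key",
    "delete.enabled": "true"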