    Sam--Shan
    @Sam--Shan
    I set snapshot.mode to schema_only. The connector picks up changed records after it starts, but the DML changes from before it started were never sent to Kafka.
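    Note: this is the documented behavior of schema_only, which captures only the schema and streams changes from the point the connector first starts; rows changed beforehand are never read. To also snapshot pre-existing data, use snapshot.mode=initial (the MySQL default). A minimal sketch, assuming a tutorial-style MySQL connector and a Connect REST endpoint on localhost:8083 (all hostnames, names, and credentials below are placeholders):

    # Register/update the connector with an initial snapshot of existing rows.
    curl -s -X PUT -H 'Content-Type: application/json' \
      http://localhost:8083/connectors/inventory-connector/config \
      -d '{
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "dbz",
        "database.server.id": "184054",
        "database.server.name": "dbserver1",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
        "snapshot.mode": "initial"
      }'

    Note that a connector that has already run keeps its offsets, so the snapshot mode only takes effect on a first start (or after the offsets are cleared).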
    Nikita Babich
    @loki-dv_gitlab
    Hello, I installed and set up the Debezium connector for my PostgreSQL database with a limited list of tables and the publication.autocreate.mode: filtered option. Now I want to extend the list of tables. I added a new one, and as far as I can see in the connector config via the API, the table was added and the table.include.list field was successfully updated. But when I checked the tables included in the publication, I saw only the two old tables, without the new one. Should I re-create the connector to apply these settings, or is it possible to add something to the configuration that will alter/re-create the publication when the table list changes?
    2 replies
    thanks
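    Note: a hedged sketch of the manual workaround for the question above. With publication.autocreate.mode=filtered, the publication is created on first start if it is missing; in the versions current here it is not altered when table.include.list changes afterwards, so the new table can be added to the publication by hand (database and table names below are placeholders; dbz_publication is the connector's default publication.name):

    # Add the newly included table to the existing publication.
    psql -d mydb -c 'ALTER PUBLICATION dbz_publication ADD TABLE public.my_new_table;'
    # Verify which tables the publication now covers.
    psql -d mydb -c "SELECT * FROM pg_publication_tables WHERE pubname = 'dbz_publication';"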
    207992
    @207992
    @Naros Hello, in my Kafka Connect distributed mode, when one of the connectors is stopped, other connectors go into UNASSIGNED status, and then some connectors report the following problem
    3 replies
    [attachment: image.png]
    Les
    @codeles
    Having an issue with the Oracle 1.5.0 connector. Connect is configured with SASL_PLAIN security to the Kafka broker, but it looks like the connector is not recognizing it when it uses the "database.history.kafka.bootstrap.servers" property. I keep seeing this over and over in my logs:
    [Producer clientId=xxx] Bootstrap broker xxx:9093 (id: -1 rack: null) disconnected [org.apache.kafka.clients.NetworkClient]
    [Consumer clientId=xxx] Bootstrap broker xxx:9093 (id: -1 rack: null) disconnected [org.apache.kafka.clients.NetworkClient]
    11 replies
    unexp
    @unexp:matrix.org
    [m]
    Hello. Can someone please explain why kafka-connect standalone forces me to enable supplemental logging on the whole database? I'm using the Oracle connector and enabled supplemental logging on two tables in one schema. After starting LogMiner I try to start connect-standalone.sh and get the error: "connect-standalone.sh[15378]: ERROR WorkerSourceTask{id=debezium-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179)
    connect-standalone.sh[15378]: Caused by: io.debezium.DebeziumException: Supplemental logging not properly configured. Use: ALTER DATABASE ADD SUPPLEMENTAL LOG DATA"
    2 replies
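    Note: the error quoted above is about database-level logging. The Oracle connector requires minimal supplemental logging at the database level in addition to per-table logging, so enabling it on two tables alone is not enough. A sketch of both statements (schema and table names are placeholders):

    sqlplus / as sysdba <<'EOF'
    -- Database-level minimal supplemental logging (what the error asks for).
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    -- Per-table logging for just the captured tables.
    ALTER TABLE myschema.table_one ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    ALTER TABLE myschema.table_two ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    EXIT;
    EOF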
    tnevolin
    @tnevolin

    Hi All.
    Are there any specifics to connecting to Kafka behind a load balancer?
    I have deployed the Debezium Postgres tutorial docker-compose to AWS Fargate, which put Kafka behind a load balancer with a single DNS name and port. The host:port itself is accessible, but when I try to list topics I get this:

    I have no name!@320aed83a282:/opt/bitnami/kafka$ bin/kafka-topics.sh --bootstrap-server audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com:9092 --list
    [2021-04-12 16:35:06,512] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/172.31.30.198:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

    And 172.31.30.198 is not the IP the broker DNS resolves to.

    nslookup audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com
    Server:  Fios_Quantum_Gateway.fios-router.home
    Address:  192.168.1.1
    
    Non-authoritative answer:
    Name:    audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com
    Addresses:  18.210.98.125
              3.225.138.174
              52.22.188.95
              23.21.51.61

    What should I do?

    1 reply
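    Note: Kafka clients use the bootstrap address only for the first metadata request; after that they connect to whatever address each broker advertises, which here is the broker's private VPC IP (172.31.30.198). The usual fix is to make the broker advertise an address clients can actually reach. A hedged sketch using the bitnami image env vars (matching the /opt/bitnami/kafka path above; the zookeeper address is a placeholder, and a single LB DNS name in front of several brokers only works if it can route to the specific broker being addressed):

    docker run -d --name kafka -p 9092:9092 \
      -e ALLOW_PLAINTEXT_LISTENER=yes \
      -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092 \
      -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com:9092 \
      bitnami/kafka:latest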
    Avi Mualem
    @AviMualem
    Hey, I have an interesting question:
    During a rebalance in Kafka Connect, will all Debezium connectors flush the binlog position they got to, or will they just restart, meaning I should expect duplicate messages with high probability?
    3 replies
    L Suarez
    @lsuarez5280
    Hi all. I hope you'll forgive a new arrival with minimal exposure, but I'm working to build CDC and sync of a data domain between MongoDB Atlas and SQL Server. I've got a pretty good sense of the uni-directional feed from SQL to Mongo, but I've no shortage of questions about the opposite direction. First: the MongoDB Debezium connector seems to be based on tailing the oplog instead of utilizing a change stream for Mongo 3.6+. Is that correct? Also, I'm fuzzy on a Kafka sink connector for SQL: whether a generic implementation library exists, how I would go about implementing one myself if needed, etc.
    3 replies
    tnevolin
    @tnevolin

    What are the best practices for running Debezium on AWS? It seems to survive the docker-compose to AWS ECS translation poorly. Any articles or hints will be greatly appreciated.

    Specific questions:

    1. Is it better to deploy it as a single docker-compose package, or to use a stand-alone Kafka installation and then configure the Debezium Kafka connector to use it?
    2. What is the best AWS service type to host the whole thing on: EC2, ECS, Fargate, Kubernetes?
    1 reply
    207992
    @207992
    @Naros Excuse me, I currently have 13 connectors. Sometimes I find that a connector looks normal, but the data is not captured. After restarting the corresponding connector, it works again. Why?
    1 reply
    Sebastian Knopp
    @sknopp
    Hi, I'm thinking about using Debezium in my new project, but my colleagues asked whether any business support is available. I'm not familiar with Red Hat; do they offer something like that for Debezium?
    3 replies
    Lam Tri Hieu
    @lamtrhieu_twitter
    Hi team
    I have a question about updating a connector. I created a connector with "table.include.list" set to "public.TableA,public.TableB". Now I want to add another table, "public.TableC". I tried to edit the connector via the REST API, but the newly added table is not streamed into a Kafka topic; I had to delete the old connector and create a new one with a new name.
    Can I reuse the connector in this scenario, or is deleting the old one and creating a new connector the right way to do it?
    Thanks
    2 replies
    Sanjeev Singh
    @sanjeevhbti45_twitter
    Please help with the Kafka connector: https://debezium.io/documentation/reference/connectors/mysql.html#mysql-update-events
    Can we get microsecond values in a JSON field with the Debezium Kafka connector?
    1 reply
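    Note: a hedged pointer. For the MySQL connector, fractional-second precision is controlled by the column definition and time.precision.mode; with the default adaptive_time_microseconds, a column declared DATETIME(6) is emitted as io.debezium.time.MicroTimestamp, i.e. a microsecond value in the JSON field. A sketch for setting it explicitly (connector name and host are placeholders; jq merges the property into the existing config):

    curl -s http://localhost:8083/connectors/my-mysql-connector/config \
      | jq '. + {"time.precision.mode": "adaptive_time_microseconds"}' \
      | curl -s -X PUT -H 'Content-Type: application/json' -d @- \
          http://localhost:8083/connectors/my-mysql-connector/config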
    iSerganov
    @iSerganov

    Hello Team!
    We have 2 SQL Servers running as an Always On cluster, with Debezium 1.2.5 reading data from the replica server.
    Sometimes when the connector starts we see only the following entries in the log:
    2021-04-12T20:57:13.788297973Z 2021-04-12T20:57:13.788+00:00 INFO || Creating connector CONNECTOR-NAME of type io.debezium.connector.sqlserver.SqlServerConnector [org.apache.kafka.connect.runtime.Worker]
    2021-04-12T20:57:13.790568560Z 2021-04-12T20:57:13.790+00:00 INFO || Instantiated connector CONNECTOR-NAME with version 1.2.5.Final of type class io.debezium.connector.sqlserver.SqlServerConnector [org.apache.kafka.connect.runtime.Worker]
    2021-04-12T20:57:13.848842136Z 2021-04-12T20:57:13.848+00:00 INFO || Finished creating connector CONNECTOR-NAME [org.apache.kafka.connect.runtime.Worker]
    and Debezium doesn't try to fetch data from the change table, even though the latter is being updated with new records and, which is more confusing, the connector status is RUNNING.
    When we restart the pod with Debezium it starts working fine and performs expected select queries as usual.
    The other Debezium instance which runs against a single SQL server (not always on cluster) never suffers this issue.

    *The connector config has the "database.applicationIntent": "ReadOnly" option set.

    Could you please advise whether there are any integration issues with MSSQL Always On clusters?

    1 reply
    Louis Page
    @lbpage

    Receiving an error from AWS MSK when starting the debezium/connect container (no tasks running):

    connect_1  | 2021-04-13 17:40:39,825 INFO   ||  [Worker clientId=connect-1, groupId=1] SyncGroup failed: The coordinator is not available. Marking coordinator unknown. Sent generation was Generation{generationId=11, memberId='connect-1-bf4841a9-4201-4148-8cb4-d30e36f43483', protocol='sessioned'}   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
    connect_1  | 2021-04-13 17:40:39,825 INFO   ||  [Worker clientId=connect-1, groupId=1] Group coordinator XXXX (id: 2147483645 rack: null) is unavailable or invalid due to cause: error response COORDINATOR_NOT_AVAILABLE.isDisconnected: false. Rediscovery will be attempted.   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
    connect_1  | 2021-04-13 17:40:39,825 INFO   ||  [Worker clientId=connect-1, groupId=1] Rebalance failed.   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
    connect_1  | org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
    connect_1  | 2021-04-13 17:40:39,925 INFO   ||  [Worker clientId=connect-1, groupId=1] Discovered group XXXX (id: 2147483645 rack: null)   [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]

    Has anyone else encountered this issue when starting a connector against AWS MSK?

    Sainath A
    @BunnyMan1

    Hi @Naros. I am using Debezium for Oracle (v11.2.0.4) and tried out the new v1.5. All our table names are case-sensitive and in CAPS, e.g. "SCHOOLSCHEMA.STUDENTS".

    It was fine in v1.4 using "database.tablename.case.insensitive": false (the default).
    However, in v1.5 I see no such option, and the connector throws an error like:
    'SCHOOLSCHEMA.students' has no supplemental logging configured. Execute <supplemental_sql_statement> on the table. even though logging is enabled on (ALL) columns.
    Notice how the 'students' table name is lower-case in the error line of the log.

    Any recommendation on how to make sure the table name stays in CAPS (I need it to be case-sensitive) in Debezium v1.5?
    I know you don't explicitly mention support for Oracle 11g, but I really want to use 1.5 for the new "snapshot.select.statement.overrides" property.

    Btw, a big thanks to all the developers of this amazing tool.

    Chris Cranford
    @Naros
    @BunnyMan1 That option still exists but it's deprecated, and it's resolved explicitly by the database version. Since you're on Oracle 11, that explains the problem. Manually setting database.tablename.case.insensitive=false in the connector configuration should resolve it. In 1.6 I'm planning to remove this entirely, and this should no longer be a problem.
    ant0nk
    @ant0nk
    Hi. After an unplanned DB disconnection, the SQL Server connector restarted automatically, but after that it does not replicate new changes. One more manual restart of the connector didn't help. The status is RUNNING but there are no new messages. What could be the problem?
    1 reply
    Chris Cranford
    @Naros
    Btw, @BunnyMan1, can you explain what you mean by table names being case-sensitive wrt Oracle 11?
    How did you explicitly set them to case-sensitive, other than creating them using CREATE TABLE "MYTABLE" ... where the table name is double-quoted?
    By default, Oracle tables are created in upper-case and are considered case-insensitive unless you use the above syntax, to my knowledge.
    Sainath A
    @BunnyMan1

    Yes, I set database.tablename.case.insensitive=false while using the v1.5 connector. However, the table name is being converted to lower case and throws the supplemental-logging-not-enabled error.
    This error doesn't happen in v1.4 though.

    As for being case-sensitive in Oracle 11: my bad, they might not be case-sensitive, as you said. I thought they might be because the issue comes back in v1.4 when I set database.tablename.case.insensitive=true. As far as I can tell, this option is not being respected in v1.5?

    Sainath A
    @BunnyMan1

    Apologies, I may have made a mistake while testing v1.5.

    I removed the containers and tried again with v1.5, setting database.tablename.case.insensitive=false, and it works as expected, with my table names in CAPS.
    Sorry for raising a false alarm, and I really appreciate your quick replies. Thank you!

    Note: I don't know if this is relevant, but between the previous test and the current successful one I removed the property "database.oracle.version": "11". Would that property have made a difference?

    Joe Troia
    @dirtydupe

    Hello. I'm hoping a new user can get a little help. I am going through the Debezium Engine documentation and trying out the sample code in my Java application. The engine seems to run fine and connects to the MySQL database, but upon updating the database via my client application I get the following error:

    io.debezium.DebeziumException: Error processing binlog event
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:369)
        at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1118)
        at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:966)
        at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:606)
        at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:850)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: io.debezium.DebeziumException: Encountered change event for table dex.parts whose schema isn't known to this connector
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.informAboutUnknownTableIfRequired(MySqlStreamingChangeEventSource.java:647)
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleUpdateTableMetadata(MySqlStreamingChangeEventSource.java:627)
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:352)
        ... 5 more
    [blc-localhost:3306] INFO io.debezium.connector.mysql.MySqlStreamingChangeEventSource - Error processing binlog event, and propagating to Kafka Connect so it stops this connector. Future binlog events read before connector is shutdown will be ignored.
    [pool-2-thread-1] INFO io.debezium.connector.common.BaseSourceTask - Stopping down connector
    [debezium-mysqlconnector-dex-change-event-source-coordinator] INFO io.debezium.pipeline.ChangeEventSourceCoordinator - Finished streaming
    [blc-localhost:3306] INFO io.debezium.connector.mysql.MySqlStreamingChangeEventSource - Stopped reading binlog after 0 events, no new offset was recorded
    [pool-3-thread-1] INFO io.debezium.jdbc.JdbcConnection - Connection gracefully closed
    [pool-2-thread-1] INFO org.apache.kafka.connect.storage.FileOffsetBackingStore - Stopped FileOffsetBackingStore
    [pool-2-thread-1] ERROR io.debezium.embedded.EmbeddedEngine - Error while trying to run connector class 'io.debezium.connector.mysql.MySqlConnector'
    org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
        at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:369)
        at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1118)
        at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:966)
        at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:606)
        at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:850)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: io.debezium.DebeziumException: Error processing binlog event
        ... 6 more
    Caused by: io.debezium.DebeziumException: Encountered change event for table dex.parts whose schema isn't known to this connector
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.informAboutUnknownTableIfRequired(MySqlStreamingChangeEventSource.java:647)
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleUpdateTableMetadata(MySqlStreamingChangeEventSource.java:627)
        at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:352)
        ... 5 more

    Any help would be greatly appreciated. I will post my configuration settings as I've run out of room here.

    6 replies
    Sam--Shan
    @Sam--Shan
    Hi all, how does TimestampConverter support Nullable(DateTime), i.e. when a datetime column is null?
    Sainath A
    @BunnyMan1

    Hi @Naros. My Oracle connector doesn't seem to work when I have the same table name under 2 different schemas in the table.include.list property. Here is the connector config:

    {
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "tasks.max": "1",
        "database.server.name": "<server_name>",
        "database.hostname": "<server_ip>",
        "database.port": "1521",
        "database.user": "<db_user>",
        "database.password": "<db_password>",
        "database.dbname": "ORCL",
        "database.connection.adapter": "logminer",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
        "database.tablename.case.insensitive": false,
        "schema.include.list": "COLLEGENEW,SCHOOLNEW,MASTERS",
        "table.include.list": "MASTERS.M_DISTRICT_MT,MASTERS.M_REGION_MT,SCHOOLNEW.T_COURSE_MT,COLLEGENEW.T_COURSE_MT"
    }

    We have a table named 'T_COURSE_MT' in both the SCHOOLNEW and COLLEGENEW schemas. If I include only one of the tables, it works fine.
    However, when I include both, it shows a DB connection error: Failed to resolve Oracle database version.

    I have triple-checked this with all combinations of tables and schemas, and the issue occurs only when the same table name exists under 2 schemas.
    Debezium v1.5, Oracle v11.2.0.4

    Is there anything I might not be seeing, or could it really be an issue?

    4 replies
    Chris Cranford
    @Naros
    @BunnyMan1 Can you share the full stack trace of the exception?
    It seems unusually odd that table.include.list would influence the database version resolution logic.
    Sainath A
    @BunnyMan1
    I'm using 2 connectors now, with the repeated table names split between the 2.
    Chris Cranford
    @Naros
    @BunnyMan1 Could you open a Jira for this so we don't lose sight of it, and try to investigate whether it happens on Oracle 12 or later?
    We don't have an Oracle 11 instance to test against, so I'm not sure if it's specific to Oracle 11 or not.
    Sainath A
    @BunnyMan1
    Sure. Will open the issue on Jira. Thank you.
    Sainath A
    @BunnyMan1

    In the Oracle connector, for the field type VariableScaleDecimal, I am getting the value {"scale":0,"value":"Pg=="} where the column type in the source Oracle DB is NUMBER without any precision or scale (as opposed to, say, NUMBER(4,0)).

    So the issue is that the 'value' is not being picked up correctly: I got "Pg==" as the value where the column held the number 62 for that row.

    Debezium v1.5, Oracle v11.2.0.4

    2 replies
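    Note: the value is most likely correct, just encoded. VariableScaleDecimal carries the unscaled value as bytes, and the JSON converter base64-encodes byte fields; "Pg==" decodes to the single byte 0x3E, which is 62 with scale 0. A quick check:

    $ echo 'Pg==' | base64 -d | od -An -tu1
       62

    Consumers either decode this themselves (base64 to a big-endian two's-complement integer, then apply the scale), or the connector can be configured with decimal.handling.mode=string or double to emit a plainer representation.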
    raphaelauv
    @raphaelauv

    Hello, I have a Postgres connector 1.4.1.

    On delete events, all the fields of "before" are null or zero.

    Have you guys seen this before?

    {
      "before": {
        "uuid": "8d909856-389c-4937-9ea1-6cd70d29ea5c",
        "invoice_reference": "",
        "status": "",
        "create_time": 0,
        "update_time": 0,
        "seller_id": 0,
        "invoice_year": 0,
        "invoice_month": 0,
        "invoice_issue_date": 0,
        "currency": "",
        "total_amount_vat_excl": 0,
        "total_vat_amount": 0,
        "total_discount_amount": null,
        "total_amount_vat_incl": 0,
        "payout_ref": null,
        "invoice_payment_time": null,
        "invoice_due_date": 0,
        "invoice_number": 0,
        "replaced_invoice_uuid": null,
        "visible": null
      },
      "after": null,
      "source": {
        "version": "1.4.1.Final",
        "connector": "postgresql",
        "name": "ms_XXXXXX",
        "ts_ms": 1617726355468,
        "snapshot": "false",
        "db": "ms_finance",
        "schema": "ms_XXXXXXXe",
        "table": "invoice",
        "txId": -641935421,
        "lsn": 9868558295080,
        "xmin": null
      },
      "op": "d",
      "ts_ms": 1617726356276,
      "transaction": null
    }
    1 reply
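    Note: this matches PostgreSQL's default REPLICA IDENTITY, where a delete's old row image carries only the primary key (here uuid); the other "before" fields come through as nulls or type defaults. To get the full before image, change the table's replica identity (schema and table names below are placeholders, taken from the event's source block):

    # Emit the full old row image on UPDATE/DELETE for this table.
    psql -d ms_finance -c 'ALTER TABLE myschema.invoice REPLICA IDENTITY FULL;'
    # Verify: relreplident = 'f' means FULL.
    psql -d ms_finance -c "SELECT relname, relreplident FROM pg_class WHERE relname = 'invoice';"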
    ant0nk
    @ant0nk
    Hi. Is it normal that when a connector prepares for or performs a snapshot of a huge database, the REST interface of Kafka Connect becomes unresponsive? Maybe snapshot tasks should be performed in separate threads? I'm seeing this at least with Oracle sources.
    5 replies
    integratemukesh
    @integratemukesh
    From the documentation I see that snapshot.mode = initial_only is supported only for SQL Server and Postgres. Is this mode also supported by Oracle and MySQL? If not, is there a workaround to stop the engine after the snapshot is completed?
    Chris Cranford
    @Naros
    @integratemukesh Unfortunately snapshot modes aren't consistent yet across all connectors. It's a very easy thing to implement and there might be a Jira to do this if you have the time to contribute it.
    4 replies
    mhv13589
    @mhv13589
    Can transaction markers or the source record output return the commit id (xid) from MySQL binlogs? I'm using Debezium Embedded with the MySQL connector. I am able to get transaction markers with Debezium 1.5, but they do not include the xid.
    shabinak
    @shabinak
    Hi, we have set up the Oracle LogMiner connector (1.5.0.Final), and when we did a mass update on a table with 2 million+ records, the connector wasn't emitting changes, but no error was reported. Has anyone encountered this issue?
    3 replies
    ketan96
    @ketan96:matrix.org
    [m]
    I am running Debezium against an AWS MSK cluster. I started Kafka Connect in a docker container and added the MySQL connector. When Debezium starts capturing the snapshot, it captures database changes, but very slowly: the snapshot has been running for a week and is not yet complete, capturing only about 9 database changes per minute. Can this speed be increased?
    4 replies
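    Note: a hedged sketch of the usual first knobs, not a diagnosis; 9 rows per minute usually points at something else as well (network, logging, broker throughput). Debezium's batch and queue sizes can be raised on the running connector (connector name is a placeholder; snapshot.fetch.size may not exist on every connector version, so treat that one as an assumption):

    curl -s http://localhost:8083/connectors/mysql-connector/config \
      | jq '. + {
          "max.batch.size": "4096",
          "max.queue.size": "16384",
          "snapshot.fetch.size": "10000"
        }' \
      | curl -s -X PUT -H 'Content-Type: application/json' -d @- \
          http://localhost:8083/connectors/mysql-connector/config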
    Ricardo Ferreira
    @rnferreira

    Hello everybody, does anyone get the following message while trying to set up a Testcontainers test with a PostgreSQL database?

    Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for log output matching '.*Session key updated.*'

    The error causes the test to fail and the most annoying part is that it's intermittent, ultimately making the test suite flaky.
    I'm using the debezium-testing-testcontainers:1.5.0.Final dependency.

    2 replies
    Alonisser
    @alonisser

    MongoDB Debezium connector: very long snapshot with high memory consumption.
    We recently tried to do a new Debezium MongoDB sync of our growing Mongo DB and experienced two related problems:

    1. Memory consumption of the kafka-connect worker running the tasks skyrocketed.
    2. When the worker eventually died (due to OOM), the snapshot restarted from the beginning.

    While 2 is expected from the docs, I do wonder about 1, AND whether there is a way to parallelize the snapshot of a single collection (I know about the max.threads configuration, but if I understand correctly that parallelizes snapshots of different collections).

    2 replies
    tony0021074
    @tony0021074
    Hi all. I would like to send change events from MySQL to Google Pub/Sub, and Debezium Server seems able to do this. Due to project requirements, I would like to know (see the sketch after this list):
    1. Does Debezium Server support column exclusion?
    2. Does Debezium Server support custom transforms, filtering & routing?
    3 replies
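    Note: a hedged sketch answering both points, under the assumption (per the Debezium Server docs) that source connector options pass through with the debezium.source. prefix and SMTs with the debezium.transforms. prefix. Project id, hostnames, credentials, and the routing regex below are placeholders:

    cat > conf/application.properties <<'EOF'
    # Sink: Google Cloud Pub/Sub.
    debezium.sink.type=pubsub
    debezium.sink.pubsub.project.id=my-gcp-project
    # Source: ordinary MySQL connector options behind the debezium.source. prefix.
    debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
    debezium.source.database.hostname=mysql
    debezium.source.database.port=3306
    debezium.source.database.user=debezium
    debezium.source.database.password=dbz
    debezium.source.database.server.id=184054
    debezium.source.database.server.name=tutorial
    debezium.source.database.history=io.debezium.relational.history.FileDatabaseHistory
    debezium.source.database.history.file.filename=data/history.dat
    # 1. Column exclusion.
    debezium.source.column.exclude.list=inventory.customers.email
    # 2. Transforms/routing use Kafka Connect SMT classes.
    debezium.transforms=route
    debezium.transforms.route.type=org.apache.kafka.connect.transforms.RegexRouter
    debezium.transforms.route.regex=(.*)
    debezium.transforms.route.replacement=$1-out
    EOF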
    Shi Jin
    @jinzishuai
    Hi there, is there any way in the Debezium MySQL connector configuration to specify the retention policy for all its topics? I don't want to change retention at the Kafka cluster level or on each individual topic; it would be amazing if this could be done at the connector level.
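    Note: not retention at the connector level per se, but with Kafka Connect 2.6+ (KIP-158) a source connector can specify topic settings, retention included, for the topics it creates; this applies only when Connect creates the topic, not retroactively to existing ones. A hedged sketch (connector name is a placeholder):

    curl -s http://localhost:8083/connectors/my-mysql-connector/config \
      | jq '. + {
          "topic.creation.default.replication.factor": "3",
          "topic.creation.default.partitions": "1",
          "topic.creation.default.retention.ms": "604800000"
        }' \
      | curl -s -X PUT -H 'Content-Type: application/json' -d @- \
          http://localhost:8083/connectors/my-mysql-connector/config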
    Jark Wu
    @wuchong
    Hi there, does Debezium support snapshotting a MySQL table in parallel? I found it took me days to finish the snapshot.
    Vadym Kovalenko
    @vadyakun_twitter
    Hi.
    What are the defaults for the Connect heap property values?
    And which env variable should I use to override the heap values:
    HEAP_OPTS, KAFKA_HEAP_OPTS, or CONNECT_KAFKA_HEAP_OPTS?
    2 replies
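    Note: for the stock Kafka scripts, KAFKA_HEAP_OPTS is the variable kafka-run-class.sh honors, and connect-distributed.sh defaults it to -Xms256M -Xmx2G when unset; container images may layer their own variables on top, so treat the exact name as image-specific. A minimal sketch:

    # Override the Connect worker heap (plain Kafka distribution).
    export KAFKA_HEAP_OPTS="-Xms1G -Xmx4G"
    bin/connect-distributed.sh config/connect-distributed.properties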
    Alok Kumar Singh
    @alok87
    org.apache.kafka.connect.errors.ConnectException: extraneous input '@10.2.1.238' expecting {<EOF>, '--'}. Please suggest how to go about fixing this; the connector is suddenly not coming up due to it.
    Alok Kumar Singh
    @alok87
    Caused by: io.debezium.text.ParsingException: extraneous input '@10.2.1.238' expecting {<EOF>, '--'}
        at io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:40)
        at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
        at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)
        at org.antlr.v4.runtime.DefaultErrorStrategy.reportUnwantedToken(DefaultErrorStrategy.java:377)
        at org.antlr.v4.runtime.DefaultErrorStrategy.singleTokenDeletion(DefaultErrorStrategy.java:548)
        at org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:266)
        at io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:887)
        at io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:68)
        at io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:41)
        at io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:80)
        at io.debezium.connector.mysql.MySqlSchema.applyDdl(MySqlSchema.java:326)
        at io.debezium.connector.mysql.BinlogReader.handleQueryEvent(BinlogReader.java:807)
        at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:587)
    Has anyone faced this?
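    Note: a hedged workaround, not a root-cause fix. If the offending statement is DDL the parser cannot handle and it does not affect the captured tables, the MySQL connector's database.history.skip.unparseable.ddl option (default false) can skip it; skipping DDL that does touch captured tables risks a corrupted schema history:

    curl -s http://localhost:8083/connectors/my-mysql-connector/config \
      | jq '. + {"database.history.skip.unparseable.ddl": "true"}' \
      | curl -s -X PUT -H 'Content-Type: application/json' -d @- \
          http://localhost:8083/connectors/my-mysql-connector/config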
    207992
    @207992
    @Naros It fails due to: "consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records." Changing the parameters it suggests did not help.
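    Note: a minimal sketch of where those two properties go. They can be applied to the consumers a Connect worker creates by prefixing them with consumer. in the worker configuration (a worker restart is required; values are illustrative):

    # Append consumer overrides to the Connect worker config, then restart it.
    cat >> config/connect-distributed.properties <<'EOF'
    consumer.max.poll.interval.ms=600000
    consumer.max.poll.records=250
    EOF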