    smart-shore
    @smart-shore
    @udbhav I had the same problem yesterday, and I checked the "table.include.list" config: use database.tablename as the full table name and it works.
    For example, I have a database named epu_bid and I want to capture the bd_material table with the MySQL connector,
    so I set the table.include.list config to epu_bid.bd_material
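    A minimal sketch of that config fragment, assuming the database and table names from the message above:

```json
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.include.list": "epu_bid",
  "table.include.list": "epu_bid.bd_material"
}
```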
    Alok Kumar Singh
    @alok87
    If RDS MySQL replication fails, we need to replace the RDS instance with a new one. In that case, we have to recreate the complete Kafka Connect and connector setup, which wastes a lot of time. Please suggest how to point the existing connector at a new RDS instance without losing any data.
    Balázs Németh
    @nbali
    Can someone please point me to some kind of documentation, or at least a discussion, of what happens if you take a snapshot without locking while schema changes are happening in the meantime? All I can find is that it should be avoided, but not the reason itself. Obtaining any kind of lock that would otherwise be required isn't easily doable in the environment I'm currently working with. I have to either justify the extra effort of getting the locks by explaining what snapshotting without them could cause, or do it without locks if the issues are known and accepted.
    Sa'ad
    @saadlu

    Hi Everyone,

    We are trying to configure a MongoDB Debezium connector. Our MongoDB replica set has been configured to allow only TLS mutual authentication. We have verified the connection with a mongo client (using --tls --tlsCAFile=<CA-cert-path> --tlsCertificateKeyFile=<client-pem>).

    But now, how do we configure the Debezium Mongo connector to use the client and CA certificates to connect to MongoDB with TLS mutual authentication?

    https://debezium.io/documentation/reference/connectors/mongodb.html#mongodb-connector-properties

    shows a property, mongodb.ssl.enabled, but where can we specify the client keystore (or client key and certificate)? Or even the truststore?
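    As far as I can tell, the connector delegates TLS to the MongoDB Java driver, which reads the standard JVM keystore/truststore system properties. So one approach, in addition to setting "mongodb.ssl.enabled": "true" on the connector, is to pass those properties to the Connect worker JVM. A sketch; every path and password below is a placeholder:

```shell
# Point the Connect worker JVM at the client keystore and CA truststore.
# Combine with "mongodb.ssl.enabled": "true" in the connector config.
export KAFKA_OPTS="-Djavax.net.ssl.keyStore=/path/to/client-keystore.jks \
 -Djavax.net.ssl.keyStorePassword=changeit \
 -Djavax.net.ssl.trustStore=/path/to/ca-truststore.jks \
 -Djavax.net.ssl.trustStorePassword=changeit"
```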

    Dev Gupta
    @udbhav
    Using the Debezium Postgres connector. I've noticed I'm not getting CDC messages for one of my tables. I ran select * from pg_publication_tables and my table was not in the list; is there anything that would trigger that happening?
    Dev Gupta
    @udbhav
    I ran alter publication dbz_publication add table my_table and I've started getting CDC messages in Kafka; not sure why that particular table wasn't in the publication to begin with.
    It was snapshotted.
    I was also using 1.3 Alpha and have updated to Final, so it might've been something with that.
    Dev Gupta
    @udbhav
    Ahhhh, I have a theory: I had been testing with smaller sets of tables to use CDC with, and I thought all I needed to do to reset the connector to a blank slate was delete the replication slot in Postgres. I must also have needed to delete the publication.
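    For anyone hitting the same thing, a sketch of the checks and reset steps implied in this thread; the publication and slot names assume the Debezium defaults:

```sql
-- check which tables the publication currently covers
SELECT * FROM pg_publication_tables WHERE pubname = 'dbz_publication';

-- add a missing table to the publication
ALTER PUBLICATION dbz_publication ADD TABLE my_table;

-- a full reset means dropping BOTH the slot and the publication
SELECT pg_drop_replication_slot('debezium');
DROP PUBLICATION dbz_publication;
```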
    José Arthur Benetasso Villanova
    @azlev
    Hello. I have a PostgreSQL database with Debezium plugged into it. Debezium is struggling to keep up when we have a WAL generation spike. How can I increase the WAL consumption speed? How should I determine tasks.max, and what parameters affect this consumption part?
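    For context: the Postgres connector runs as a single task, so raising tasks.max beyond 1 won't help; the knobs that usually matter for throughput are the connector's queue and batch settings. A sketch with illustrative values, not recommendations:

```json
{
  "max.batch.size": "4096",
  "max.queue.size": "16384",
  "poll.interval.ms": "100"
}
```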
    jdwrink
    @jdwrink
    Hello, I am experimenting with Debezium Server 1.3 running on Kubernetes (locally on my machine). I am getting this error when I try to start the pod: The 'database.history.kafka.bootstrap.servers' value is invalid: A value is required. It is my understanding that Debezium Server doesn't need Kafka; was I mistaken?
    1 reply
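    If this is a MySQL-family connector under Debezium Server, one option is to point the database history at a file instead of Kafka. A sketch of the application.properties fragment; the path is a placeholder:

```properties
debezium.source.database.history=io.debezium.relational.history.FileDatabaseHistory
debezium.source.database.history.file.filename=/data/dbhistory.dat
```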
    YAK_1979
    @1979Yak_twitter
    Hello everyone 👋
    I'm getting a parser error on some tables when the dbz connector is started!
    [2020-10-24 16:11:22,147] ERROR [source-debezium-connector|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:31)
    io.debezium.text.ParsingException: no viable alternative at input 'IDNUMBER(4)GENERATEDBY'
        at io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:40)
        at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
        at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)
        at org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)
        at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:136)
        at io.debezium.ddl.parser.oracle.generated.PlSqlParser.relational_properties(PlSqlParser.java:48605)
        at io.debezium.ddl.parser.oracle.generated.PlSqlParser.relational_table(PlSqlParser.java:48296)
        at io.debezium.ddl.parser.oracle.generated.PlSqlParser.create_table(PlSqlParser.java:46625)
        at io.debezium.ddl.parser.oracle.generated.PlSqlParser.unit_statement(PlSqlParser.java:2364)
        at io.debezium.connector.oracle.antlr.OracleDdlParser.parseTree(OracleDdlParser.java:56)
        at io.debezium.connector.oracle.antlr.OracleDdlParser.parseTree(OracleDdlParser.java:31)
        at io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:80)
        at io.debezium.connector.oracle.antlr.OracleDdlParser.parse(OracleDdlParser.java:51)
        at io.debezium.connector.oracle.BaseOracleSchemaChangeEventEmitter.emitSchemaChangeEvent(BaseOracleSchemaChangeEventEmitter.java:60)
        at io.debezium.pipeline.EventDispatcher.dispatchSchemaChangeEvent(EventDispatcher.java:263)
        at io.debezium.connector.oracle.xstream.LcrEventHandler.dispatchSchemaChangeEvent(LcrEventHandler.java:116)
        at io.debezium.connector.oracle.xstream.LcrEventHandler.processLCR(LcrEventHandler.java:80)
        at oracle.streams.XStreamOut.XStreamOutReceiveLCRCallbackNative(Native Method)
        at oracle.streams.XStreamOut.receiveLCRCallback(XStreamOut.java:465)
        at io.debezium.connector.oracle.xstream.XstreamStreamingChangeEventSource.execute(XstreamStreamingChangeEventSource.java:78)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:140)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:113)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.antlr.v4.runtime.NoViableAltException
        at org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:2026)
        at org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:467)
        at org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:393)
        at io.debezium.ddl.parser.oracle.generated.PlSqlParser.relational_properties(PlSqlParser.java:48563)
        ... 21 more
    io.debezium.text.ParsingException: mismatched input 'GENERATED' expecting {'AS', ';'}
        at io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:40)
        at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
        at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)
        at org.antlr.v4.runtime.DefaultErrorStrategy.reportInputMismatch(DefaultErrorStrategy.java:327)
        at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:139)
        at io.debezium.ddl.parser.oracle.generated.PlSqlParser.create_table(PlSqlParser.java:46659)
        at io.debezium.ddl.parser.oracle.generated.PlSqlParser.unit_statement(PlSqlParser.java:2364)
        at io.debezium.connector.oracle.antlr.OracleDdlParser.parseTree(OracleDdlParser.java:56)
        at io.debezium.connector.oracle.antlr.OracleDdlParser.parseTree(OracleDdlParser.java:31)
        at io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:80)
        at io.debezium.connector.oracle.antlr.OracleDdlParser.parse(OracleDdlParser.
    2 replies
    the only topic that is created is server1.DEBEZIUM.PRODUCTS_ON_HAND
    that's the only table that has no char columns
    and this is the latest dbz/oracle connector
    1.3
    ruslan
    @unoexperto
    Hi everyone! Does anyone have problems with generated columns (https://www.postgresql.org/docs/12/ddl-generated-columns.html)? They're missing from the INSERT events of the pgoutput plugin.
    1 reply
    Phan Phương Nam
    @namphan16899_gitlab
    hi,
    I'm trying to snapshot a ~300 GB MongoDB database with Debezium,
    but after ~1 day of run time, Debezium stopped working.
    I restarted it,
    but Debezium keeps snapshotting the old records instead of the new ones.
    5 replies
    Sergey Savva
    @savva.sergey_gitlab

    Hi, I'm trying to run Debezium in Docker (docker image confluentinc/cp-kafka-connect-base:6.0.0) with a local setup of Kafka and MySQL.
    I see that Kafka Connect is up and running and the /connectors endpoint returns an empty list.
    Then I pass the Debezium config like this:

    curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://localhost:8083/connectors -d '
      {
        "name": "mysql-debezium-connector",
        "config": {
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "database.hostname": "host.docker.internal",
          "database.port": "3306",
          "database.user": "root",
          "database.password": "********",
          "database.server.id": "184057",
          "database.server.name": "mysql-localhost",
          "table.whitelist": "my_schema.my_table",
          "database.history.kafka.bootstrap.servers": "host.docker.internal:9092",
          "database.history.kafka.topic": "dbhistory.dev.my_schema.my_table",
          "include.schema.changes": "false"
        }
      }
    '

    After that I see in the logs that the connector is starting, but the logs end with the word Killed and I don't see any other details:

    [2020-10-25 16:13:23,309] INFO Starting MySqlConnectorTask with configuration: (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,313] INFO    connector.class = io.debezium.connector.mysql.MySqlConnector (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,315] INFO    database.user = root (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,315] INFO    database.server.id = 184057 (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,317] INFO    database.history.kafka.bootstrap.servers = host.docker.internal:9092 (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,318] INFO    database.history.kafka.topic = dbhistory.dev.my_schema.my_table (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,319] INFO    database.server.name = mysql-localhost (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,321] INFO    database.port = 3306 (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,321] INFO    include.schema.changes = false (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,322] INFO    table.whitelist = my_schema.my_table (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,322] INFO    database.serverTimezone = Europe/Moscow (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,322] INFO    task.class = io.debezium.connector.mysql.MySqlConnectorTask (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,322] INFO    database.hostname = host.docker.internal (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,322] INFO    database.password = ******** (io.debezium.connector.common.BaseSourceTask)
    [2020-10-25 16:13:23,322] INFO    name = mysql-debezium-connector (io.debezium.connector.common.BaseSourceTask)
    Killed

    I'm able to connect to MySQL from the container via the mysql CLI.
    Any ideas what's going on?
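    A bare Killed line with no stack trace usually means the kernel OOM killer terminated the process rather than a Java error. A quick way to confirm, assuming plain Docker (the container name is a placeholder):

```
# prints "true" if the container was OOM-killed
docker inspect --format '{{.State.OOMKilled}}' <container-name>
```

    If it was, raising the container's memory limit, or lowering the JVM heap via KAFKA_HEAP_OPTS, is the usual fix.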

    Phan Phương Nam
    @namphan16899_gitlab
    Any Debezium support here?
    ruslan
    @unoexperto
    Does anyone know when the Debezium project was conceived?
    1 reply
    timsty
    @timsty
    First off, apologies since this question has probably been asked before, but is there any timeframe for the Oracle connector to move out of incubating state? Also, the documentation states: "Debezium's Oracle Connector can monitor and record all of the row-level changes in the databases on an Oracle server. Most notably, the connector does not yet support changes to the structure of captured tables (e.g. ALTER TABLE…​) after the initial snapshot has been completed (see DBZ-718)." If you are only interested in using the Oracle connector to detect row-level changes, is it considered a stable release?
    Clayton Boneli
    @claytonbonelli
    I'm having trouble finding where to denormalize the data I will send from Postgres to a destination database. For me, the ideal would be to have the table denormalized in Postgres, but unfortunately I don't. It would also be nice if CDC/Debezium worked with [materialized] views. What alternatives do I have for denormalizing the data? Kafka has KSQL, but given the number of streams/layers I would have to create due to the number of joins required, it seems to me that the performance cost and complexity would be significant on the Kafka side. Do you have any idea whether there is a way to do this on the Postgres side for use with Debezium? I even tested creating a new table in Postgres with denormalized data, filled by a trigger, but that adds complexity as well as increasing the size of the database. Any help is welcome.
    Vincent-Zeng
    @Vincent-Zeng

    Hi, team.

    I'm encountering the following error:

    connect.log.2020-10-27-08:[2020-10-27 08:45:25,015] WARN Renaming whitelisted table live_db.live_host_info to non-whitelisted table live_db._live_host_info_del, this can lead to schema inconsistency (io.debezium.connector.mysql.antlr.listener.RenameTableParserListener:37)
    connect.log.2020-10-27-08:[2020-10-27 08:45:25,016] WARN Renaming non-whitelisted table live_db._live_host_info_gho to whitelisted table live_db.live_host_info, this can lead to schema inconsistency (io.debezium.connector.mysql.antlr.listener.RenameTableParserListener:40)
    connect.log.2020-10-27-08:[2020-10-27 08:45:26,695] ERROR Encountered change event 'Event{header=EventHeaderV4{timestamp=1603759065000, eventType=TABLE_MAP, serverId=100609299, headerLength=19, dataLength=123, nextPosition=360736336, flags=0}, data=TableMapEventData{tableId=36010, database='live_db', table='live_host_info', columnTypes=3, -2, 15, 2, 2, 15, 15, 3, 1, 15, 15, 15, 3, 3, 3, 3, 15, 15, 15, 3, 3, 15, 3, 3, 3, 15, 3, 3, 15, 15, 15, 15, 15, 15, 1, 15, 15, 1, 3, columnMetadata=0, 65072, 765, 0, 0, 300, 90, 0, 0, 300, 765, 765, 0, 0, 0, 0, 300, 30, 90, 0, 0, 765, 0, 0, 0, 48, 0, 0, 765, 96, 765, 765, 300, 300, 0, 48, 48, 0, 0, columnNullability={3, 4, 6, 9, 10, 11, 13, 14, 16, 20, 21, 22, 23, 24, 25, 26, 27, 28, 34, 35, 36, 38}, eventMetadata=null}}' at offset {table_whitelist=live_db.live_host_info,live_db.live_guard_group_member,live_db.host_star_challenge_level,live_db.account_wealth_level,live_db.forum_chat_room_info,live_db.live_host_play_back, ts_sec=1603759065, file=binlog.002459, table_blacklist=null, pos=360736131, database_whitelist=live_db, database_blacklist=null, gtids=1ee07205-f819-11e6-a4aa-44a84221f2e9:1-99279,3f23174a-102e-11ea-9700-506b4b3f89ce:1-3112156460, server_id=100609299, event=1} for table live_db.live_host_info whose schema isn't known to this connector. One possible cause is an incomplete database history topic. Take a new snapshot in this case.
    connect.log.2020-10-27-08:org.apache.kafka.connect.errors.ConnectException: Encountered change event for table live_db.live_host_info whose schema isn't known to this connector
    connect.log.2020-10-27-08:Caused by: org.apache.kafka.connect.errors.ConnectException: Encountered change event for table live_db.live_host_info whose schema isn't known to this connector

    To add an index on the table live_db.live_host_info, the DBA renamed live_db.live_host_info to live_db._live_host_info_del and renamed live_db._live_host_info_gho to live_db.live_host_info. Did this cause the error? If so, how can I avoid it? And any suggestions on how to recover from the error now?

    1 reply
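    For reference, one documented escape hatch for the MySQL connector when the database history is incomplete is a recovery snapshot, which rebuilds the history topic from the current schema. A sketch; only safe if no further schema changes happened after the last recorded offset:

```json
{
  "snapshot.mode": "schema_only_recovery"
}
```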
    nitinitt
    @nitinitt
    I am getting this error using the embedded engine (1.3.0.Final); the same SQL Server connector works just fine when I use it directly in Kafka Connect. Logs:
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Metrics registered
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Context created
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: No previous offset has been found
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: According to the connector configuration both schema and data will be snapshotted
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Snapshot step 1 - Preparing
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Snapshot step 2 - Determining captured tables
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Snapshot step 3 - Locking captured tables
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Setting locking timeout to 10 s
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Executing schema locking
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] WARN :: Snapshot was interrupted before completion
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Snapshot - Final stage
    2020-10-26 22:11:01 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] INFO :: Removing locking timeout
    2020-10-26 22:11:02 [debezium-sqlserverconnector-my-app-connector-change-event-source-coordinator] WARN :: Change event source executor was interrupted
    java.lang.InterruptedException: Interrupted while locking table schema_name.dbo.Persons
    at io.debezium.connector.sqlserver.SqlServerSnapshotChangeEventSource.lockTablesForSchemaSnapshot(SqlServerSnapshotChangeEventSource.java:134)
    at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:115)
    at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:63)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:105)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    betterflyshark
    @betterflyshark
    Hello!
    betterflyshark
    @betterflyshark
    Hello! Could you tell me how to connect to Db2? I don't understand the "Prerequisites" on the website, and I don't know how to put tables into capture mode in the Db2 database. Please help me! Thank you very much!
    Wojciech Bałazy
    @Wbal_gitlab
    @Naros I have used the 1.4.0 but the error remains, namely:
    ERROR Cannot parse statement : update "C##DEBEZIUM"."LOG_MINING_AUDIT" set "LAST_SCN" = '2389420' where "LAST_SCN" = '2389407';, transaction: 050018007B060000, due to the {} (io.debezium.connector.oracle.jsqlparser.SimpleDmlParser:137)
    io.debezium.text.ParsingException: Trying to parse a table 'ORCLPDB.C##DEBEZIUM.LOG_MINING_AUDIT', which does not exist.
        at io.debezium.connector.oracle.jsqlparser.SimpleDmlParser.initColumns(SimpleDmlParser.java:147)
        at io.debezium.connector.oracle.jsqlparser.SimpleDmlParser.parseUpdate(SimpleDmlParser.java:166)
        at io.debezium.connector.oracle.jsqlparser.SimpleDmlParser.parse(SimpleDmlParser.java:108)
        at io.debezium.connector.oracle.logminer.LogMinerQueryResultProcessor.processResult(LogMinerQueryResultProcessor.java:155)
        at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:181)
        ...
    13 replies
    Alok Kumar Singh
    @alok87
    I'm getting \\u0000 values in place of spaces in Debezium. Please suggest why. cc @jpechane The source has 32 whitespaces:
    mysql> select char_length(street_address) from customers where id=1;
    32
    Alok Kumar Singh
    @alok87
    The table column is a Unicode column: street_address varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL
    hukx.michael
    @hukaixuan
    Hello everyone. New to Debezium, and I wonder why the Debezium MySQL connector ignores the MySQL server timezone for DATETIME and just takes its value as UTC, while TIMESTAMP can use the actual timezone?
    Okan YILDIRIM
    @okan.yildirim_gitlab

    We use Debezium 1.2.0 on PostgreSQL 11. Debezium runs as 3 pods (3 workers) in K8s. We get an error like this:

    "tasks": [
            {
                "id": 0,
                "state": "FAILED",
                "worker_id": "11.111.11.11:8083",
                "trace": "java.lang.NullPointerException\n\tat org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:103)\n\tat org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)\n\tat org.apache.kafka.connect.runtime.TaskConfig.<init>(TaskConfig.java:51)\n\tat org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:431)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1147)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1600(DistributedHerder.java:126)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$12.call(DistributedHerder.java:1162)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$12.call(DistributedHerder.java:1158)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n"
            }
        ]

    Do you have any idea about it? What could be the main cause of this error?

    Kenni Silverio
    @kennisilverio
    After getting messages in the topic, I see that we have an object that contains key-value pairs, but one of the keys still holds a JSON value. How do I target and convert that value?
    1 reply
    Ashwin
    @ashwin027
    Hello everyone. With the Debezium SQL Server connector, when the connector starts up I'm seeing the snapshots happening with the select queries in the logs, and then just after that it throws the error "No table has enabled CDC or security constraints prevents getting the list of change tables". I did enable CDC on the listed tables and I can see that CDC is working. The user provided in the config has db_owner access on the DB. Any idea what this might be, or what I can look at? I've spent a good bit of time on this and I just don't understand what could be causing it.
    2 replies
    ant0nk
    @ant0nk
    Hello, the Oracle connector gives me an error like this:
    ERROR Producer failure (io.debezium.pipeline.ErrorHandler:31)
    java.lang.RuntimeException: java.sql.SQLException: ORA-08180: no snapshot found based on specified time
    ORA-06512: at "SYS.TIMESTAMP_TO_SCN", line 1
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:80)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:105)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.sql.SQLException: ORA-08180: no snapshot found based on specified time
    ORA-06512: at "SYS.TIMESTAMP_TO_SCN", line 1
        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)
        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:446)
        at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1054)
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:623)
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
        at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
        at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:213)
        at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:37)
        at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:733)
        at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:904)
        at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1082)
        at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1276)
        at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:366)
        at io.debezium.connector.oracle.OracleSnapshotChangeEventSource.getLatestTableDdlScn(OracleSnapshotChangeEventSource.java:181)
        at io.debezium.connector.oracle.OracleSnapshotChangeEventSource.determineSnapshotOffset(OracleSnapshotChangeEventSource.java:113)
        at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:119)
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:67)
        ... 6 more
    Caused by: Error : 8180, Position : 7, Sql = SELECT TIMESTAMP_TO_SCN(MAX(last_ddl_time)) FROM all_objects WHERE (owner = 'ODPP' AND object_name = 'TEST'), OriginalSql = SELECT TIMESTAMP_TO_SCN(MAX(last_ddl_time)) FROM all_objects WHERE (owner = 'ODPP' AND object_name = 'TEST'), Error Msg = ORA-08180: no snapshot found based on specified time ORA-06512: at "SYS.TIMESTAMP_TO_SCN", line 1
        at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:498)
        ... 22 more
    How do I fix it?
    28 replies
    ruettere
    @ruettere
    Hi everyone, I am trying to run the LogMiner-based Oracle connector on OpenShift (Strimzi). The connector is running and so is the task, but no topic is generated and no data is streamed. As a consequence, the connect cluster is failing. In the logs I see:
    2020-10-28 07:28:46,641 INFO Kafka version: 2.5.0 (org.apache.kafka.common.utils.AppInfoParser) [task-thread-my-mtxi-cdc-src-connector-0]
    2020-10-28 07:28:46,641 INFO Kafka commitId: 66563e712b0b9f84 (org.apache.kafka.common.utils.AppInfoParser) [task-thread-my-mtxi-cdc-src-connector-0]
    2020-10-28 07:28:46,641 INFO Kafka startTimeMs: 1603870126641 (org.apache.kafka.common.utils.AppInfoParser) [task-thread-my-mtxi-cdc-src-connector-0]
    There is no error, and it's quite hard to debug. Can anyone help me with this? Or has anyone already got the LogMiner-based Oracle connector 1.3 running?
    3 replies
    Phan Phương Nam
    @namphan16899_gitlab
    I have only 1 MongoDB replica set,
    but my Connect pod always restarts in an unexpected way.
    With 2 Debezium tasks balanced across 2 distributed Kafka Connect workers, will it stop restarting the snapshot phase when 1 Kafka Connect instance goes down?