    yashpreetmsft
    @yashpreetmsft
    image.png
    vbucchar
    @vbucchar
    Hello, I am new to Debezium and am trying to set it up with Oracle 19c using the XStream API. I am receiving the error below when my connector tries to start. I'm using Instant Client 21.1 and have copied the jar files to the appropriate folders. I have other connectors working fine in Oracle. Any help would be greatly appreciated.
    12 replies
    nhatdiec
    @nhatdiec
    Hello, I followed this tutorial https://aws.amazon.com/vi/blogs/aws/introducing-amazon-msk-connect-stream-data-to-and-from-your-apache-kafka-clusters-using-managed-connectors/, but I am getting a "Failed testing connection for {} with user '{}'" error when connecting to Aurora MySQL. When I connect with SQL Workbench, everything is perfectly normal. Can anyone help me?
    This is the error from CloudWatch Logs:
    image.png
    Serge Klochkov
    @slvrtrn
    is there any way to remove the default and connect.default values of a field from the resulting AVRO schema? I tried writing a custom converter that sets the default value to an empty string, but when I run the producer I get a strange exception that the default value has already been set. In other words, something is modifying the resulting schema on top of what I do.
    1 reply
    Christopher Burch
    @cburch824
    Hi, I'm using debezium-server. I figured out some basic logging, such as setting the level via a quarkus.log.level environment variable and limiting category levels via quarkus.log.category."cat.name".level in the application.properties file. How can I set the logging format to JSON?
    4 replies
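    A hedged sketch of what that could look like in application.properties, assuming the quarkus-logging-json extension is on the classpath (these are standard Quarkus logging properties, not Debezium-specific ones):

        # enable JSON-formatted console logging (requires quarkus-logging-json)
        quarkus.log.console.json=true
        quarkus.log.level=INFO
        quarkus.log.category."io.debezium".level=DEBUG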
    Suiyi Fu
    @fusuiyi123
    Hi, I have a question about restarting the MySqlConnector. Is it possible to restart the MySqlConnector when the DatabaseHistory schema is behind/ahead of the binlog offset? From here it seems possible, but I'd like some clarification :) Thanks!
    3 replies
    kunal-til
    @kunal-til
    Hi, I need help setting up debezium-server with MySQL/MariaDB -> Kafka. I'm not able to find any example or tutorial.
    1 reply
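    For reference, a rough application.properties sketch for a MySQL source, assuming the Debezium Server build in use ships the Kafka sink (debezium.sink.type=kafka); hostnames, credentials, and the server id below are placeholders:

        debezium.sink.type=kafka
        debezium.sink.kafka.producer.bootstrap.servers=localhost:9092
        debezium.sink.kafka.producer.key.serializer=org.apache.kafka.common.serialization.StringSerializer
        debezium.sink.kafka.producer.value.serializer=org.apache.kafka.common.serialization.StringSerializer
        debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
        debezium.source.offset.storage.file.filename=data/offsets.dat
        debezium.source.database.hostname=localhost
        debezium.source.database.port=3306
        debezium.source.database.user=debezium
        debezium.source.database.password=dbz
        debezium.source.database.server.id=184054
        debezium.source.database.server.name=tutorial
        debezium.source.database.include.list=inventory
        debezium.source.database.history.kafka.bootstrap.servers=localhost:9092
        debezium.source.database.history.kafka.topic=schema-changes.inventory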
    Joseph Wibowo
    @josephwibowo
    how do I reset the LSN for a Postgres Debezium connector? I can't find any docs on this.
    1 reply
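    There is no single documented "reset" switch; one hedged approach, assuming the connector uses the default slot name debezium, is to stop the connector, clear its stored offsets, and drop the replication slot so a fresh one is created at the current WAL position:

        -- inspect existing slots and their restart positions
        SELECT slot_name, plugin, restart_lsn FROM pg_replication_slots;
        -- drop the connector's slot (default name: debezium); a new slot is
        -- created at the current WAL position when the connector restarts
        SELECT pg_drop_replication_slot('debezium');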
    Kevin T.
    @aeolus811tw
    I'm wondering what everyone's experience is with Debezium Postgres connector snapshot performance. Is it normal that it has only exported 30,000 records after 4 hours?
    1 reply
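    Snapshot throughput is often bounded by fetch and batch sizing; a hedged sketch of connector options worth experimenting with (the values are illustrative, the option names are from the Debezium docs):

        "snapshot.fetch.size": "10240",
        "max.batch.size": "2048",
        "max.queue.size": "8192"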
    integratemukesh
    @integratemukesh
    Do we need to perform any special handling during failover for the Oracle connector? I was testing my Debezium Oracle connector with 1.7.0.CR1 and Oracle 19; it works well on the primary, but when we fail over the database, change the CDB name, and run the Debezium engine, I get the "no log file contains the specified SCN xxxx" message. When I run the query against v$archived_log, I still see the SCN processed before the failover.
    1 reply
    Suresh Sankar
    @ssuresh83
    Hi Team
    10 replies
    We are using Confluent Platform 5.x managed services.
    Can we use the Debezium connector with Postgres free of cost in commercial products?
    Are there any implications?
    sebasmagri
    @smag:matrix.org
    [m]
    Morning folks, we're looking for paid support for our Debezium setup. If anyone is interested, please reach out.
    kunal-til
    @kunal-til
    How stable is debezium-server? Should we set up the connector or the server?
    4 replies
    nhatdiec
    @nhatdiec
    Hello everyone,
    Has anyone encountered this error while using Aurora MySQL and the Debezium plugin?
    [Worker-0fe365ddad87e130f] [2021-09-23 10:35:29,429] ERROR Failed testing connection for jdbc:mysql://debezium-1.cluster-cdvjg8him7ek.ap-southeast-1.rds.amazonaws.com:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'admin' (io.debezium.connector.mysql.MySqlConnector:103)
    image.png
    Vinod Venugopal
    @winod_qmar_twitter

    Hi team, I am using Debezium to implement the outbox pattern with a Postgres database and connector, using the "table.fields.additional.placement" configuration option to include multiple headers in the Kafka message. The headers are added as separate fields in the outbox table, and the additional.placement option adds them as Kafka message headers, for example: type:header,origin:header,source:header,timestamp:header, where type, origin, source, and timestamp are separate columns in the outbox table. It looks like if I need more headers, more columns need to be added to the table. Is it possible to have one "headers" column in the table and use the additional.placement option to extract type, origin, source, and timestamp from it and add them as headers?

    I'd appreciate it if someone could help with the above query.
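    For context, the per-column routing described above looks roughly like this on the outbox EventRouter SMT; whether a single JSON "headers" column can be fanned out into several headers is exactly the open question (a sketch using the column names from the question):

        "transforms": "outbox",
        "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
        "transforms.outbox.table.fields.additional.placement": "type:header,origin:header,source:header,timestamp:header"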

    wojtekma
    @wojtekma:matrix.org
    [m]

    Thanks to tips from the devs, the community, and search engines I was able to make progress with Debezium + Oracle 11g XE in a local setup. In the morning I was able to see changes in the schema, but after some further tuning I ran into a scenario where the connector stops.
    When I don't use include.list I get a general error:

    [2021-09-23 15:03:50,053] ERROR Mining session stopped due to the {} (io.debezium.connector.oracle.logminer.LogMinerHelper:523)
    java.sql.SQLException: ORA-31603: object "OBJ# 20063" of type TABLE not found in schema "UNKNOWN"
    where GRANT select_catalog_role TO c##dbzuser isn't helping at all.
    Since I can see almost 500 tables of other schemas/users (maybe it's related to the XE version), it's natural to go for whitelisting. When I limit the Debezium scope to only my schema/table I get the error below:

    Caused by: io.debezium.connector.oracle.logminer.parser.DmlParserException: DML statement couldn't be parsed. Please open a Jira issue with the statement 'insert into "C##DBZUSER"."CONTACTS"

    I assume it might somehow be related to the fact that I changed the table structure in the meantime. Anyway, I can see proper values propagated in kafka-connect (in the error log) but obviously nothing in the topic via the console consumer.

    Any idea what I might be still missing in the config below?

    "config": {
    "connector.class" : "io.debezium.connector.oracle.OracleConnector",
    "tasks.max" : "1",
    "database.server.name" : "5YGK7H2-MOB",
    "database.hostname" : "localhost",
    "database.port" : "1521",
    "database.user" : "c##dbzuser",
    "database.password" : "dbz",
    "database.dbname" : "xe",
    "database.history.kafka.bootstrap.servers" : "localhost:9092",
    "database.history.kafka.topic": "schema-changes.inventory",
    "snapshot.mode":"initial",
    "schema.include.list": "c##dbzuser"

    20 replies
    Vinod Venugopal
    @winod_qmar_twitter
    Hi, I am deploying debezium-connect in Kubernetes to implement the outbox pattern. Does anyone know if it requires a PersistentVolumeClaim (PVC)?
    4 replies
    ncavig-indeed
    @ncavig-indeed
    Hi! I recently migrated our AWS RDS PostgreSQL 13.3 instance to Aurora compatible with pg 13.3. I have the Debezium source connector io.debezium:debezium-connector-postgres:1.6.1.Final that was working fine on RDS but now doesn't work with Aurora, and I was wondering if anyone was aware of prior issues. I create the publication using an rds_superuser and configure the source connector to create the publication slot using pgoutput (I also tried wal2json without success). The publication slot is created fine, the source connector connects fine, there are no obvious errors, but it simply does not consume anything. I am opening a ticket with AWS but figured I'd also ask here to see if anyone had any ideas. Perhaps noteworthy: when I try to consume locally with pg_recvlogical, I get a "pg_recvlogical: error: could not send replication command "SHOW data_directory_mode": ERROR: must be superuser or replication role to run this operation." error. Also noteworthy: I can create a publication on Aurora and configure a subscription on my local db and it consumes fine, so seemingly the publication is ok on the Aurora side.
    3 replies
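    The pg_recvlogical error suggests missing replication privileges; since the superuser role is unavailable on RDS/Aurora, a hedged first check is that the connector's user holds the rds_replication role (the user name below is assumed):

        -- grant the RDS/Aurora replication role to the connector user (name assumed)
        GRANT rds_replication TO debezium_user;
        -- verify logical replication is enabled via the cluster parameter group
        SHOW rds.logical_replication;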
    Deepak Bhimaraju
    @deepak-auto

    Hi team, we are trying to enable message compression on our Debezium Kafka producers by setting the env variable CONNECT_PRODUCER_COMPRESSION_TYPE to gzip. However, we noticed that it only affects some connectors; while experimenting, we found that setting CONNECT_COMPRESSION_TYPE to gzip enables message compression for the other producers. We were checking whether compression is enabled by looking at the ProducerConfig logs that are emitted when we deploy Debezium. Would you happen to know why we have to set both of them? If so, could you update your docs?

    We are using the Docker image debezium/connect:1.6.1.Final and running it on a Kubernetes cluster.

    Thanks!

    5 replies
    (in case it's not clear, the difference between the two env variables is the word PRODUCER)
    Deepak Bhimaraju
    @deepak-auto
    Also, I looked at the connect-distributed.properties file in the config directory (/kafka/config) and can see that passing the two env variables sets two properties in the file.
    sh-4.2$ cat connect-distributed.properties | grep gzip
    compression.type=gzip
    producer.compression.type=gzip
    Abhishek Tomar
    @ImAbhishekTomar
    Trying to create CDC with Debezium, Kafka, ZooKeeper, and DB2 as a source... Please share if someone has a docker-compose file for this requirement. Thanks in advance!
    1 reply
    Preethish
    @preethishp
    Hey! Once the Debezium connector (MySQL) establishes a connection to the DB cluster, how long does the connection stay open? Is there a setting we can configure to tune this? Thanks!
    3 replies
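    A hedged pointer: the MySQL connector holds its binlog connection open for the life of the task and keeps it alive with documented options like these (values illustrative):

        "connect.keep.alive": "true",
        "connect.keep.alive.interval.ms": "60000",
        "connect.timeout.ms": "30000"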
    Kit Chen
    @meethigher
    @jpechane How can I detect whether LogMiner or XStream is enabled in Oracle? For example, in MySQL this can be checked with:
    show variables like 'log_bin';
    4 replies
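    There is no direct log_bin-style flag for this, but a hedged starting point is to query what the LogMiner path requires on the database side:

        -- archive log mode must be enabled for LogMiner-based capture
        SELECT log_mode FROM v$database;
        -- minimal supplemental logging must be enabled as well
        SELECT supplemental_log_data_min FROM v$database;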
    V主宰
    @aib628

    I have a weird problem: WorkerSourceTask{id=connector_1-0} failed to send record to bx.invoice: {} [org.apache.kafka.connect.runtime.WorkerSourceTask] org.apache.kafka.common.KafkaException: Producer is closed forcefully.

    Has anyone seen something like this?

    Luigi Cerone
    @LuigiCerone
    Hello everyone, I have a question about the Debezium MySQL connector. In my mysqld.cnf file I've enabled the binlog (following the instructions at https://debezium.io/documentation/reference/0.9/connectors/mysql.html#enabling-the-binlog). As stated in the instructions, I've set server-id=223344. My question is: when I configure the Debezium connector via POST request, should I use the same number (i.e. 223344) in the JSON for database.server.id? In other words, should these values match, or is the only thing to care about that they are unique among the clients? Thanks!
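    A hedged illustration of the two settings side by side; the usual guidance is that the ids must be unique across the cluster rather than equal (184054 is just a placeholder):

        # mysqld.cnf -- the database server's own id
        server-id=223344

        # Debezium connector config -- a numeric id for the connector acting as a
        # replication client; must be unique among all servers and clients in the
        # cluster, so it should NOT equal the value above
        database.server.id=184054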
    nhatdiec
    @nhatdiec
    Hello everyone,
    I've followed this docker-compose file:
    https://github.com/debezium/debezium-examples/blob/master/tutorial/docker-compose-mysql.yaml
    And I have a question: how do I catch up on messages in a topic?
    Plugaru Tudor
    @PlugaruT

    Hey guys, I'm trying to use a custom snapshot query for one of the tables, and I'm using an application.properties file to configure the Debezium connector. I have this:

    debezium.source.snapshot.select.statement.overrides=public.table
    debezium.source.snapshot.select.statement.overrides.public.table=SELECT table.id FROM public.table

    But for some reason, Debezium is still using the default query and selects all columns... Any idea why?

    11 replies
    deependerg
    @deependerg
    Hi All, I have a quick question about Debezium and Oracle Data Guard switchover. After Oracle switches over to the secondary site, Debezium is not picking up the changes. Do we need to make any changes in the Debezium configuration to make it work? Any suggestions?
    3 replies
    Suiyi Fu
    @fusuiyi123
    I have a quick question about https://debezium.io/documentation/reference/1.5/development/engine.html#_handling_failures, which says: "When the engine executes, its connector is actively recording the source offset inside each source record, and the engine is periodically flushing those offsets to persistent storage. When the application and engine shutdown normally or crash, when they are restarted the engine and its connector will resume reading the source information from the last recorded offset." What counts as a graceful shutdown? Since we only flush offsets periodically, if there's a shutdown we would still encounter duplicates, right?
    2 replies
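    For reference, the flush cadence in the embedded engine is configurable; a minimal sketch of the relevant engine properties (the file path is a placeholder):

        offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore
        offset.storage.file.filename=/tmp/offsets.dat
        # how often the engine flushes offsets; a crash between flushes means
        # records since the last flush are re-delivered after restart
        offset.flush.interval.ms=60000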
    nhatdiec
    @nhatdiec
    image.png
    Hello everyone,
    Why can't I get messages from the topic dbserver1.inventory.customers? Can anyone help me?
    Ibrahim Magdy
    @ebrahimmagdy
    hello everyone, I am trying to create a Vitess connector on Kafka Connect using the Connect APIs, but the request fails. This is my JSON file; where is my error?
    {
        "name": "vitess-connector",
        "config": {
            "connector.class": "io.debezium.connector.vitess.VitessConnector",
            "database.hostname": "osticket-mysql-vtgate-b1a8bea2.kafka-op.svc.cluster.local",
            "database.port": "15991",
            "vitess.keyspace": "osticket",
            "vitess.database.user": "osticket",
            "vitess.database.password": "password",
            "vitess.vtctld.host": "osticket-mysql-vtctld-6d59e5b1.kafka-op.svc.cluster.local",
            "vitess.vtctld.port": "15999",
            "vitess.vtctld.user": "osticket",
            "vitess.vtctld.password": "password",
            "vitess.tablet.type": "Master",
            "database.server.name": "vitess_logs",
            "table.include.list": "osticket.ost_eventlog",
            "database.history.kafka.bootstrap.servers": "kafka-cluster-kafka-bootstrap:9092",
            "database.history.kafka.topic": "dbeventhistory.eventslog_vitess",
            "database.history.skip.unparseable.ddl": "true",
            "topic.creation.default.partitions": "3",
            "topic.creation.default.replication.factor": "1",
            "include.schema.changes": "false",
            "key.converter": "org.apache.kafka.connect.json.JsonConverter",
            "key.converter.schemas.enable": "false",
            "value.converter": "org.apache.kafka.connect.json.JsonConverter",
            "value.converter.schemas.enable": "false"
        }
    }
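    A hedged way to surface the actual error is to POST the file with curl and read the HTTP response body (the worker URL is assumed to be localhost:8083):

        curl -i -X POST -H "Content-Type: application/json" \
            --data @vitess-connector.json http://localhost:8083/connectors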
    Ahmed Shehata
    @Ahmed_Shehata_gitlab
    Hello guys, I created this example https://ehsaniara.medium.com/cdc-with-postgres-debezium-to-kafka-strimzi-bf9212ae9d78 and had everything working, and continued with a Pinot real-time table (everything was working properly). The next day, before demo time, the CDC changes in my tables were not being picked up, not even by the Kafka consumer, let alone Pinot. I checked the connection status and it was ready. I've wasted about 3 hours now and cannot figure out the issue. (I am a beginner with Debezium and Strimzi, but ok with Kubernetes.) Please help me!
    1 reply
    nhatdiec
    @nhatdiec
    Hello everyone,
    I followed the tutorial https://debezium.io/documentation/reference/tutorial.html#starting-zookeeper, but I can't get messages from the topic dbserver1.inventory.customers on "localhost:9092".
    Could you please help me?
    image.png
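    A hedged sanity check from the Kafka CLI, assuming the broker is reachable on localhost:9092:

        bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
            --topic dbserver1.inventory.customers --from-beginning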
    wojtekma
    @wojtekma:matrix.org
    [m]

    Hi,

    I would like to understand the purpose of the privilege GRANT LOCK ANY TABLE TO c##dbzuser for the Oracle connector. Is it used only for snapshot creation? It was raised as a potential concern by my DBAs, so I was wondering: in a snapshot.mode = schema_only scenario, is it possible to turn it off, either by skipping it initially or revoking it after the snapshot? Or does it need to stay on permanently? I wonder if the lock might cause any issues.

    1 reply
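    If the connector version in use supports it, one hedged option to explore is disabling snapshot locking entirely; whether the grant can then be revoked is the open question here:

        "snapshot.mode": "schema_only",
        "snapshot.locking.mode": "none"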
    Jorn Argelo
    @jornargelo
    Hi all, quick question. On this page: https://debezium.io/documentation/reference/1.7/configuration/signalling.html the Oracle connector is not mentioned, but the Oracle connector page does say it supports incremental snapshotting. Is the signalling page out of date?
    2 replies
    Jorn Argelo
    @jornargelo
    Another quick question, is https://issues.redhat.com/browse/DBZ-3712 still in scope for the 1.7 final release?
    3 replies
    Vinod Venugopal
    @winod_qmar_twitter
    Hi All, the Postgres Debezium connector sometimes gets stuck at "Searching for WAL resume position". Here is the log:
    2021-09-27 13:09:47,837 INFO Postgres|db_organisation|streaming Searching for WAL resume position [io.debezium.connector.postgresql.PostgresStreamingChangeEventSource]
    2021-09-27 13:10:47,039 INFO || WorkerSourceTask{id=organisation-connector-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
    2021-09-27 13:10:47,041 INFO || WorkerSourceTask{id=organisation-connector-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
    2021-09-27 13:11:46,974 INFO || WorkerSourceTask{id=organisation-connector-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
    2021-09-27 13:11:46,974 INFO || WorkerSourceTask{id=organisation-connector-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
    2021-09-27 13:12:46,907 INFO || WorkerSourceTask{id=organisation-connector-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
    2021-09-27 13:12:46,907 INFO || WorkerSourceTask{id=organisation-connector-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
    2021-09-27 13:12:57,500 INFO Postgres|db_organisation|streaming First LSN 'LSN{0/176A1A8}' received [io.debezium.connector.postgresql.connection.WalPositionLocator]
    2021-09-27 13:12:57,500 INFO Postgres|db_organisation|streaming Received COMMIT LSN 'LSN{0/1773F48}' larger than than last stored commit LSN 'LSN{0/1770B20}' [io.debezium.connector.postgresql.connection.WalPositionLocator]
    2021-09-27 13:12:57,500 INFO Postgres|db_organisation|streaming Will restart from LSN 'LSN{0/176A1A8}' that is start of the first unprocessed transaction [io.debezium.connector.postgresql.connection.WalPositionLocator]
    2021-09-27 13:12:57,500 INFO Postgres|db_organisation|streaming WAL resume position 'LSN{0/176A1A8}' discovered [io.debezium.connector.postgresql.PostgresStreamingChangeEventSource]
    2021-09-27 13:12:57,505 INFO Postgres|db_organisation|streaming Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
    2021-09-27 13:12:57,511 INFO Postgres|db_organisation|streaming Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
    2021-09-27 13:12:57,544 INFO Postgres|db_organisation|streaming Initializing PgOutput logical decoder publication [io.debezium.connector.postgresql.connection.PostgresReplicationConnection]
    2021-09-27 13:12:57,576 INFO Postgres|db_organisation|streaming Processing messages [io.debezium.connector.postgresql.PostgresStreamingChangeEventSource]
    2021-09-27 13:12:58,094 INFO Postgres|db_organisation|streaming Message with LSN 'LSN{0/176A1A8}' arrived, switching off the filtering [io.debezium.connector.postgresql.connection.WalPositionLocator]
    Satish
    @satishkuppam
    Hi All, I am facing the error below when starting the SQL Debezium connector: org.apache.kafka.common.errors.RecordTooLargeException: The message is 2311882 bytes when serialized, which is larger than 1048576, the value of the max.request.size configuration. I have modified producer.properties with max.request.size=304857600, consumer.properties with max.partition.fetch.bytes=314857600, and in server.properties added message.max.bytes=304857600,
    socket.request.max.bytes=304857600, and
    replica.fetch.max.bytes=304857600. Can anyone please help me with this?
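    Since Connect creates its own producers, the worker-level settings usually need raising too; a hedged sketch (sizes illustrative):

        # connect-distributed.properties -- raise the Connect producer's request
        # size and allow per-connector client overrides
        producer.max.request.size=20971520
        connector.client.config.override.policy=All

        # then, per connector (in its JSON config):
        #   "producer.override.max.request.size": "20971520"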
    skylines
    @rookiegao
    Hi All, on the official website I only saw MySQL 5.6 mentioned for versions 0.10 and below; for later versions the tested MySQL versions are 5.7 or 8.0. Does the current version still have good compatibility with MySQL 5.6?
    AZhui
    @AZhui0426

    Hi All, when I use the Debezium API to monitor SQL Server, the following error occurred.
    When I modify the name or database.server.name in the connector's configuration, the problem goes away. But I want to know the cause of this problem and whether it can be solved properly.

    com.microsoft.sqlserver.jdbc.SQLServerException: An insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_allchanges ...
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet$FetchBuffer.nextRow(SQLServerResultSet.java:5448)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.fetchBufferNext(SQLServerResultSet.java:1771)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.next(SQLServerResultSet.java:1029)
    at io.debezium.pipeline.source.spi.ChangeTableResultSet.next(ChangeTableResultSet.java:63)
    at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.lambda$execute$1(SqlServerStreamingChangeEventSource.java:180)
    at io.debezium.jdbc.JdbcConnection.prepareQuery(JdbcConnection.java:613)
    at io.debezium.connector.sqlserver.SqlServerConnection.getChangesForTables(SqlServerConnection.java:285)
    at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.execute(SqlServerStreamingChangeEventSource.java:170)
    at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.execute(SqlServerStreamingChangeEventSource.java:59)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:159)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:122)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
    at java.util.concurrent.FutureTask.run(FutureTask.java)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

    image.png