    Chris Cranford
    @Naros
    So in the event that the batch size configuration is too small to support a given transaction boundary using re-mining, it's possible that we hit a point where the end SCN doesn't scale upward enough and the connector gets caught in an infinite loop.
    For example, I ran the OracleClobDataTypeIT in master using min batch 10, default batch 20, max batch 60, and the connector entered an infinite loop.
    If I simply let LogMinerHelper#getEndScn() return the current SCN from the database, the infinite loop was avoided, since we allowed the mining range to scale as needed.
    I think since startScn doesn't change like it previously did, i.e. it may stay the same for a period of time while we determine when it's safe to advance it, the adaptive window concept just doesn't fit with what we have to do in order to support BLOB / CLOB.
    I'm curious what your thoughts are and what you think might be the best alternative here, if one exists, besides replacing that call as I've done.
    @gunnarmorling @jpechane This is probably something else we should discuss this week as well, since I think this is critical for 1.6.
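    For reference, a minimal sketch of the idea being described (a hypothetical simplification, not the actual LogMinerHelper code): derive the end SCN from the database's current SCN rather than capping it by batch size, so the mining range can keep growing while startScn stays put.

        // Hypothetical sketch only (not the real implementation): read the database's
        // current SCN from V$DATABASE and use it as the mining range's upper bound.
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        final class EndScnSketch {
            static long getEndScn(Connection connection) throws SQLException {
                try (Statement stmt = connection.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT CURRENT_SCN FROM V$DATABASE")) {
                    rs.next();
                    return rs.getLong(1); // grows with the database, so the window expands as needed
                }
            }
        }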
    integratemukesh
    @integratemukesh

    Hi, I am getting the following error on Postgres 12 in streaming mode. I am using 1.6.0.Beta2. I saw there were some enhancements around default values and I'm checking whether this error is a regression from that enhancement.

    exception: org.apache.kafka.connect.errors.DataException: Invalid Java object for schema type INT32: class java.lang.String for field: "null"
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:245)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:213)
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:129)
    ... 28 common frames omitted
    Wrapped by: org.apache.kafka.connect.errors.SchemaBuilderException: Invalid default value
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:131)
    at io.debezium.relational.TableSchemaBuilder.addField(TableSchemaBuilder.java:374)
    at io.debezium.relational.TableSchemaBuilder.lambda$create$0(TableSchemaBuilder.java:110)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
    at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085)
    at io.debezium.relational.TableSchemaBuilder.create(TableSchemaBuilder.java:109)
    at io.debezium.relational.RelationalDatabaseSchema.buildAndRegisterSchema(RelationalDatabaseSchema.java:130)
    at io.debezium.relational.RelationalDatabaseSchema.refreshSchema(RelationalDatabaseSchema.java:204)
    at io.debezium.relational.RelationalDatabaseSchema.refresh(RelationalDatabaseSchema.java:195)
    at io.debezium.connector.postgresql.PostgresChangeRecordEmitter.synchronizeTableSchema(PostgresChangeRecordEmitter.java:148)
    at io.debezium.connector.postgresql.PostgresChangeRecordEmitter.emitChangeRecords(PostgresChangeRecordEmitter.java:90)
    at io.debezium.pipeline.EventDispatcher.dispatchDataChangeEvent(EventDispatcher.java:217)
    ... 17 common frames omitted
    Wrapped by: org.apache.kafka.connect.errors.ConnectException: Error while processing event at offset {transaction_id=null, lsn_proc=7741466136, lsn_commit=7741466136, lsn=7743224784, txId=41160789, ts_usec=1623785036490640}
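    For context, the "Invalid default value" above comes from Kafka Connect's own schema validation: an INT32 field whose default value arrives as a Java String fails validateValue. A minimal standalone reproduction of that failure mode (illustrative only, not the Debezium code path):

        import org.apache.kafka.connect.data.SchemaBuilder;

        public class DefaultValueMismatch {
            public static void main(String[] args) {
                // Passing a String default to an INT32 schema throws
                // SchemaBuilderException("Invalid default value") wrapping the same
                // DataException shown in the stack trace above.
                SchemaBuilder.int32()
                        .optional()
                        .defaultValue("42")
                        .build();
            }
        }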

    2 replies
    integratemukesh
    @integratemukesh
    With Oracle 19c and the Debezium Oracle connector 1.6.0.Beta2, I am getting the missing log file error that was fixed in 1.4.1.Final. My DBA confirmed that the file containing the SCN exists on the server. The error occurs when the connector switches to streaming mode after the snapshot. Please let me know if you need additional details about this.

    {"exception":"oracle.jdbc.OracleDatabaseException: ORA-01291: missing log file\nORA-06512: at \"SYS.DBMS_LOGMNR\", line 72\nORA-06512: at line 1\n\n\tat oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:513)\n\t... 25 common frames omitted\nWrapped by: java.sql.SQLException: ORA-01291: missing log file\nORA-06512: at \"SYS.DBMS_LOGMNR\", line 72\nORA-06512: at line 1\n\n\tat oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:509)\n\tat oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:461)\n\tat oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1104)\n\tat oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:550)\n\tat oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:268)\n\tat oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:655)\n\tat oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:265)\n\tat oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:86)\n\tat oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:965)\n\tat oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1205)\n\tat oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3666)\n\tat oracle.jdbc.driver.T4CCallableStatement.executeInternal(T4CCallableStatement.java:1358)\n\tat oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3778)\n\tat oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4251)\n\tat oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1081)\n\tat io.debezium.connector.oracle.logminer.LogMinerHelper.executeCallableStatement(LogMinerHelper.java:670)\n\tat io.debezium.connector.oracle.logminer.LogMinerHelper.startLogMining(LogMinerHelper.java:219)\n\tat io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:177)\n\tat io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:63)\n\tat
    io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:159)\n\tat io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:122)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n","level":"ERROR","logger":"io.debezium.pipeline.ErrorHandler","thread":"debezium-oracleconnector-trvxqa_DBO-change-event-source-coordinator","message":"Producer failure","mdc":{"dbz.connectorName":"trvxqa_DBO","dbz.connectorType":"Oracle","dbz.connectorContext":"streaming"},"timestamp":"2021-06-16T03:49:28.431Z"}
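    For reference, a query along these lines (with a placeholder bind value) can confirm whether an archived log covering a given SCN is still registered and present on disk, which is usually the first thing to check when chasing ORA-01291:

        -- Sketch: which archived log(s) cover a given SCN, and are they still available?
        SELECT name, thread#, sequence#, first_change#, next_change#, status, deleted
        FROM   v$archived_log
        WHERE  :scn BETWEEN first_change# AND next_change# - 1;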
    Chris Cranford
    @Naros
    @integratemukesh A full trace log leading up to the exception would be helpful. The connector dumps a lot of state about your DB to the logs when this happens.
    If you could provide that, it might help us diagnose the problem. A lot has had to change to support both Oracle RAC and standalone, so knowing which environment you're on would help too.
    2 replies
    Sa Pham
    @greatbn
    Hi all, I set up Debezium to synchronize a database from SQL Server to PostgreSQL. The data is synced OK, but when I delete a record in the source, that record still exists in Postgres. My configuration for the sink looks like this:
    {
        "name": "test4-sink",
        "config": {
            "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
            "tasks.max": "1",
            "topics": "database.dbo.test4",
            "connection.url": "jdbc:postgresql://10.20.1.194:5432/xx?user=xx&password=xxx",
            "transforms": "unwrap",
            "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
            "auto.create": "true",
            "insert.mode": "upsert",
            "pk.mode": "record_key",
            "delete.enabled": "true",
            "pk.fields": "id"
        }
    }
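    One hedged note on the config above (a common cause, not a confirmed diagnosis): the ExtractNewRecordState SMT drops delete and tombstone records by default, so the sink's delete.enabled never sees them. Keeping tombstones is usually required for deletes to propagate, e.g.:

        "transforms.unwrap.drop.tombstones": "false"

    With tombstones preserved, the tombstone emitted after a source delete should reach the sink, where delete.enabled=true and pk.mode=record_key turn it into a DELETE on the target table.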
    张义超
    @yichao0803
    Hello everyone, I am using the MySQL connector and hit the following error. Does anyone have a solution?
        io.debezium.DebeziumException: The connector is trying to read binlog starting at SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000003, currentBinlogPosition=154, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery=null, tableIds=[], databaseName=null], but this is no longer available on the server. Reconfigure the connector to use a snapshot when needed.
        at io.debezium.connector.mysql.MySqlConnectorTask.validateSnapshotFeasibility(MySqlConnectorTask.java:329)
        at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:98)
        at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
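    The error message itself points at the usual remedy: if the binlog position stored in the offsets has been purged on the server, let the connector fall back to a new snapshot. A hedged sketch of the relevant setting (check it against your retention and re-snapshot requirements):

        "snapshot.mode": "when_needed"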
    Yago Riveiro
    @yriveiro
    Hi! Has anyone using the SQL Server connector gotten a negative number for MilliSecondsBehindSource?
    2 replies
    ketan96
    @ketan96:matrix.org [m]
    I am trying to add a new database to database.include.list and a new table to table.include.list on an existing MySQL connector. I have also set snapshot.new.tables to parallel. Still, the connector stops with the error:
    2021-06-16 09:58:25,413 ERROR MySQL|casaone|binlog Error during binlog processing. Last offset stored = null, binlog reader near position = mysql-bin.000109/11899 [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
    2021-06-16 09:58:25,414 ERROR MySQL|casaone|binlog Producer failure [io.debezium.pipeline.ErrorHandler]
    io.debezium.DebeziumException: Error processing binlog event
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:369)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1118)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:966)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:606)
    at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:850)
    at java.base/java.lang.Thread.run(Thread.java:834)
    Caused by: io.debezium.DebeziumException: Encountered change event for table 'tablename' whose schema isn't known to this connector
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.informAboutUnknownTableIfRequired(MySqlStreamingChangeEventSource.java:647)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleUpdateTableMetadata(MySqlStreamingChangeEventSource.java:627)
    at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:352)
    ... 5 more
    What could the issue be?
    1 reply
    Tobias Kurzydym
    @tkurzydym

    Hi everyone!
    I'm trying to get debezium-quarkus-outbox running with the CloudEventsConverter as described here: https://debezium.io/documentation/reference/integrations/cloudevents.html.
    With a JsonConverter it was working and the messages were sent to Kafka by Debezium.
    When I switch to the CloudEventsConverter I get a NullPointerException because it tries to read the schema name, which appears to be null.

    I am not sure if I made a mistake in my configuration or am missing something in the implementation of my service. Has someone stumbled across that issue already?

    This is the configuration I used in the connector:

            value.converter=io.debezium.converters.CloudEventsConverter
            value.converter.serializer.type=json

    I already tried switching the various schemas.enable config params to false and true, but without success.

    I am using debezium-quarkus-outbox 1.6.0.Beta1 and debezium-connector-postgres 1.6.0.Beta2.

    I attached a Stack Trace as a reply in this thread.
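    For comparison, a fuller converter section along these lines is what the CloudEvents documentation describes (treat this as a sketch and verify the property names against your Debezium version):

        key.converter=org.apache.kafka.connect.json.JsonConverter
        key.converter.schemas.enable=false
        value.converter=io.debezium.converters.CloudEventsConverter
        value.converter.serializer.type=json
        value.converter.data.serializer.type=json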

    1 reply
    ChapeauClaque
    @ChapeauClaque
    How can startScn end up greater than endScn here?
    Caused by: Error : 1281, Position : 0, Sql = BEGIN sys.dbms_logmnr.start_logmnr(startScn => '11119535681488', endScn => '11119535678168', OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING  + DBMS_LOGMNR.NO_ROWID_IN_STMT);END;, OriginalSql = BEGIN sys.dbms_logmnr.start_logmnr(startScn => '11119535681488', endScn => '11119535678168', OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING  + DBMS_LOGMNR.NO_ROWID_IN_STMT);END;, Error Msg = ORA-01281: SCN range specified is invalid
    ORA-06512: at "SYS.DBMS_LOGMNR", line 58
    ORA-06512: at line 1
    37 replies
    ChapeauClaque
    @ChapeauClaque
    And we got this bug:
    [2021-06-16 14:34:47,523] INFO 31 records sent during previous 00:01:46.5, last recorded offset: {commit_scn=11119536151306, transaction_id=05000000e17a1200, transaction_data_collection_order_ABCDF2.ABCD_SHARD_1_3.D_LOT_VERSION=1, transaction_data_collection_order_ABCDF2.ABCD_SHARD_1_3.D_LOT_STATUS_HISTORY=1, transaction_data_collection_order_ABCDF2.ABCD_SHARD_1_3.D_LOT_ENTITY=1, scn=11119536148143} (io.debezium.connector.common.BaseSourceTask:182)
    [2021-06-16 14:34:48,508] ERROR Mining session stopped due to the {} (io.debezium.connector.oracle.logminer.LogMinerHelper:494)
    java.sql.SQLException: ORA-01291: missing logfile
    ORA-06512: at "SYS.DBMS_LOGMNR", line 58
    ORA-06512: at line 1
    
            at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)
            at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:446)
            at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1054)
            at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:623)
            at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
            at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
            at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:223)
            at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:56)
            at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:907)
            at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1119)
            at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3780)
            at oracle.jdbc.driver.T4CCallableStatement.executeInternal(T4CCallableStatement.java:1300)
            at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3887)
            at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4230)
            at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1079)
            at io.debezium.connector.oracle.logminer.LogMinerHelper.executeCallableStatement(LogMinerHelper.java:670)
            at io.debezium.connector.oracle.logminer.LogMinerHelper.startLogMining(LogMinerHelper.java:219)
            at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:177)
            at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:63)
            at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:159)
            at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:122)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)
    Caused by: Error : 1291, Position : 0, Sql = BEGIN sys.dbms_logmnr.start_logmnr(startScn => '11119536151306', endScn => '11119536176209', OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING  + DBMS_LOGMNR.NO_ROWID_IN_STMT);END;, OriginalSql = BEGIN sys.dbms_logmnr.start_logmnr(startScn => '11119536151306', endScn => '11119536176209', OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING  + DBMS_LOGMNR.NO_ROWID_IN_STMT);END;, Error Msg = ORA-01291: missing logfile
    ORA-06512: at "SYS.DBMS_LOGMNR", line 58
    ORA-06512: at line 1
    
            at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:498)
    19 replies
    王连松
    @wangliansong
    Hi, when I use Oracle connector 1.5 with Oracle 11g I get "No metadata registered for captured table".
    This is my config:
    curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors -d '{
        "name": "connector-oracle-test",
        "config": {
            "connector.class": "io.debezium.connector.oracle.OracleConnector",
            "database.oracle.version": "11",
            "tasks.max": "1",
            "database.server.name": "oracle-cdc-test",
            "database.hostname": "***",
            "database.port": "1521",
            "database.user": "**",
            "database.password": "123456",
            "database.dbname": "REALTIME",
            "database.tablename.case.insensitive": "true",
            "database.out.server.name": "dbzxout_orc_test",
            "schema.include.list": "oracletest",
            "database.history.kafka.bootstrap.servers": "localhost:9092",
            "database.history.kafka.topic": "bigdata.oracle.test",
            "database.connection.adapter": "xstream"
        }
    }'
    Who can help me? Thanks.
    1 reply
    flyingwww1128
    @flyingwww1128
    The version of my Debezium is 1.6.0.Beta2, and the version of Oracle is 19.3. There are some problems as follows (screenshot: image.png); can anyone help me?
    7 replies
    Mridul.pant2
    @Mridulpant2_twitter
    Hi,
    I am able to do CDC with Debezium, MongoDB, and Kafka locally using Docker,
    but now I have to do the same with a production MongoDB cluster, while Kafka and Debezium run inside Docker as images.
    How do I connect a separate MongoDB cluster to the Debezium connector running inside Docker?
    Apologies if this has been answered before; I am new to this.
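    A hedged sketch of what such a connector registration might look like (hostnames, replica set name, and credentials are placeholders; the cluster must be resolvable and reachable from inside the Docker network, i.e. not via localhost):

        {
            "name": "mongo-prod-connector",
            "config": {
                "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
                "mongodb.hosts": "rs0/mongo1.prod.example:27017,mongo2.prod.example:27017",
                "mongodb.name": "mongoprod",
                "mongodb.user": "debezium",
                "mongodb.password": "***",
                "database.include.list": "inventory"
            }
        }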
    JapuDCret
    @JapuDCret
    This message was deleted
    2 replies
    JapuDCret
    @JapuDCret

    When I try to use the outbox pattern with a CloudEventTransformer, I get an NPE:

    org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
      at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:196)
      at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:122)
      at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:314)
      at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:340)
      at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
      at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
      at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
      at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
      at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
      at java.base/java.lang.Thread.run(Thread.java:834)
    Caused by: java.lang.NullPointerException
      at io.debezium.data.Envelope.isEnvelopeSchema(Envelope.java:378)
      at io.debezium.data.Envelope.isEnvelopeSchema(Envelope.java:386)
      at io.debezium.converters.CloudEventsConverter.fromConnectData(CloudEventsConverter.java:185)

    my debezium-connector config can be found in the thread

    10 replies
    Drugoy
    @Drugoy

    Hi. These docs describe pk.fields for the sink connector.
    How do I make it work with a Kafka topic with NO metadata at all (the message key looks like {"myid":"53453"} and the message value looks like {"myid":"53453","some_other_column":"some_value"})?

    I tried to specify pk.mode: record_key + pk.fields: myid, but this doesn't seem to work; I'm getting the error
    ... table 'mytable' is RECORD_KEY with configured fields [myid], but record key schema does not contain field: myid.
    Well, duh, I've stripped the schemas from both the key and the value.

    EDIT: that's weird, I just re-configured the source connector so that message keys have schemas, and the sink connector still barks the same error.
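    For what it's worth, pk.mode=record_key with named pk.fields expects the record key to carry a schema (a Connect struct), so both the source and sink sides typically need schema-carrying converters. A sketch of the JSON-converter settings involved:

        key.converter=org.apache.kafka.connect.json.JsonConverter
        key.converter.schemas.enable=true
        value.converter=org.apache.kafka.connect.json.JsonConverter
        value.converter.schemas.enable=true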

    4 replies
    Drugoy
    @Drugoy
    These docs mention this about the sink connector:
    Data mapping
    The sink connector requires knowledge of schemas, so you should use a suitable converter, for example, the Avro converter that comes with Schema Registry, or the JSON converter with schemas enabled. Kafka record keys if present can be primitive types or a Connect struct, and the record value must be a Connect struct. Fields being selected from Connect structs must be of primitive types. If the data in the topic is not of a compatible format, implementing a custom Converter may be necessary.
    yuristpsa
    @yuristpsa

    Hi All,

    I'm trying to create a routine to delete archive logs that have already been read by Debezium.

    I wonder if this implementation makes sense for Debezium:

    1. Every time, write the last commit_scn to a database table.
    (screenshot: debezium_connect_offsets.png)
    2. A procedure finds the sequence of the archive log that contains the last commit_scn; all archive logs with a smaller sequence are deleted.
    (screenshot: pr_create_archivelog_deletion_job.png)
    (screenshot: deletion_archivelog_job.png)
    Chris Cranford
    @Naros
    Hi @yuristpsa I saw this in my inbox just a few moments ago :).
    So in general I think this isn't a bad strategy, at least from the PL/SQL perspective.
    It's kinda cool seeing how you're able to take the offset metadata and use it as a pruning mechanism for the Oracle archive logs.
    But I do question the use of commit_scn here, let me explain.
    Whenever a transaction commit is detected, that's when the connector updates the commit_scn; however, that does not necessarily mean that anything prior to it is safe to discard.
    There could be a long-running transaction, for example, with an SCN that comes before commit_scn, so using it as a basis is likely going to create issues.
    I think however, you could use scn instead and that should work nicely.
    In your data it doesn't seem you've got any long running transactions happening, so you're seeing the ideal scenario where scn is 1 behind commit_scn.
    But in practice that scn value could easily be farther in the past since we can only advance scn once we know we don't need anything before it.
    Chris Cranford
    @Naros
    So if your algorithm used scn instead, I think you'd have a really nice setup for purging archive logs when Debezium is done with them.
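    A minimal SQL sketch of that check, using the offset's scn (not commit_scn) as the low-water mark and assuming the standard V$ARCHIVED_LOG columns:

        -- Archived logs whose entire SCN range lies before the connector's restart scn
        -- are candidates for deletion (e.g. via an RMAN DELETE ARCHIVELOG job).
        SELECT name, sequence#, first_change#, next_change#
        FROM   v$archived_log
        WHERE  next_change# <= :connector_scn
        ORDER  BY sequence#;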
    yuristpsa
    @yuristpsa

    Great, understood, Chris.

    In addition to my archive log cleanup strategy, what do you usually do to clean up the archive log area?

    Do you know if there is any way to do this natively, maybe after a period of time?

    Chris Cranford
    @Naros
    @yuristpsa So archive log cleanup is really something that's often driven by personal preference and the organization's needs for instance/disaster recovery.
    For example, at my last job we kept all archive logs on the production database server for 72 hours; they were zip-archived after 24 hours to reduce space and shipped to a standby upon creation for failover purposes.
    It was all automated using fancy shell scripts :)
    But they weren't using Debezium so, shame on them :P
    So at least for us when testing, I typically do a manual RMAN cleanup of archive logs before I run the test suite, just to guarantee the container doesn't run out of space.
    So there isn't anything automated or fancy; it's just a "make sure we do the right thing" kind of approach.
    Most production systems follow my last job's approach and either use the database scheduler to handle it, as you did, or use cron jobs to accomplish the same task.
    Chris Cranford
    @Naros
    IIRC we had disk-space alert notifications that would go out if, say, the archive log destination reached 75%, 85%, and 95% capacity, so that anyone with DBA rights could log in and do manual cleanup of the destination should some rogue process or bad user have generated too many logs, i.e. unexpected data loads were almost always the culprit since devs would do whatever without telling the admin team lol.
    yuristpsa
    @yuristpsa

    I need to give my customers' DBAs the permissions required to create the Oracle Debezium user, and I would like to specify only the minimum permissions necessary to avoid questions.

    I realized that with my connector's current settings I can omit the permissions below:

    GRANT CREATE TABLE TO c##debezium CONTAINER=ALL;
    GRANT ALTER ANY TABLE TO c##debezium CONTAINER=ALL;
    GRANT CREATE SEQUENCE TO c##debezium CONTAINER=ALL;
    GRANT FLASHBACK ANY TABLE TO c##debezium CONTAINER=ALL;
    GRANT SELECT ANY TABLE TO c##debezium CONTAINER=ALL;
    GRANT EXECUTE_CATALOG_ROLE TO c##debezium CONTAINER=ALL;
    GRANT SELECT ANY TRANSACTION TO c##debezium CONTAINER=ALL;

    These permissions are mandatory; without them the connector crashes or generates some kind of error:

    GRANT CREATE SESSION TO c##debezium CONTAINER=ALL;
    GRANT SET CONTAINER TO c##debezium CONTAINER=ALL;
    GRANT SELECT ON V_$DATABASE to c##debezium CONTAINER=ALL;
    GRANT SELECT_CATALOG_ROLE TO c##debezium CONTAINER=ALL;
    GRANT LOGMINING TO c##debezium CONTAINER=ALL;
    GRANT EXECUTE ON DBMS_LOGMNR TO c##debezium CONTAINER=ALL;
    GRANT EXECUTE ON DBMS_LOGMNR_D TO c##debezium CONTAINER=ALL;
    GRANT LOCK ANY TABLE TO c##debezium CONTAINER=ALL;

    Is there any way to omit the "LOCK ANY TABLE" permission?

    I thought that using the settings below on the connector would make this possible, but without "LOCK ANY TABLE" no history topic is created.

    In this scenario I get the error "DML for table 'TABLE_NAME' that is not known to this connector, skipping".

    snapshot.locking.mode: none
    log.mining.strategy: online_catalog

    Any advice is welcome.

    1 reply
    Drugoy
    @Drugoy

    What is GROUP_ID / group.id?

    A unique string that identifies the Connect cluster group this Worker belongs to.
    New workers will either start a new group or join an existing one with a matching group.id. Workers then coordinate with the consumer groups to distribute the work to be done.

    Where is that group used in Kafka? Is that a consumer group name?
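    As far as I know, group.id here identifies the Connect worker cluster itself, not a consumer group you create; sink connectors additionally get their own consumer groups (typically named connect-<connector-name>). A sketch of where it lives:

        # connect-distributed.properties (sketch; topic names are examples)
        group.id=connect-cluster-a
        offset.storage.topic=connect-offsets-a
        config.storage.topic=connect-configs-a
        status.storage.topic=connect-status-a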

    1 reply
    flyingwww1128
    @flyingwww1128
    I have configured multiple tables ("table.include.list": "JYDB.SECUMAIN,JYDB.BOND_CONBDISSUE,JYDB.CT_INDUSTRY,JYDB.BOND_CODE,JYDB.BOND_CODE_SE"), but my Kafka only receives data from one table. The connector did not report an error. Where should I look for the cause?
    3 replies