Welcome to the public discussion channel for the Debezium change data capture open source project (http://debezium.io)
Say I have a connector MySQLConnectorA and it is using MySQLHistoryTopic, and the connector fails in such a way that I just want to delete it and create a new one... delete MySQLConnectorA and create MySQLConnectorB. Can I just reuse the same history topic? Should I always make a new one? What is the best way to handle this scenario?
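To make the scenario concrete, here is a hedged sketch of the delete-and-recreate flow via the Kafka Connect REST API (the localhost:8083 endpoint and all connection values are assumptions, not taken from the question):

$ # drop the failed connector, then register a new one with its own history topic
$ curl -X DELETE localhost:8083/connectors/MySQLConnectorA
$ curl -X POST -H "Content-Type: application/json" localhost:8083/connectors -d '{
    "name": "MySQLConnectorB",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.hostname": "mysql",
      "database.port": "3306",
      "database.user": "debezium",
      "database.password": "dbz",
      "database.server.id": "184055",
      "database.server.name": "dbserver1",
      "database.history.kafka.bootstrap.servers": "kafka:9092",
      "database.history.kafka.topic": "MySQLHistoryTopicB"
    }
  }'

Giving the new connector its own history topic, as sketched here, sidesteps the question of whether stale schema history from the failed connector could interfere.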
Hi Team,
We currently have 2 connectors without "decimal.handling.mode" set, so it defaults to "precise". This is fine when we write a streaming application to join/rekey a KStream/KTable. However, now I want to consume the data using ksqlDB, as it's more convenient and needs less code. But I cannot use the aggregate function latest_by_offset, as it doesn't support the integer struct (the numeric value is mapped into ksqlDB as an integer struct).
So I need to change decimal.handling.mode to "string" for easy consumption with ksqlDB.
So, a couple of questions:
Thanks
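For reference, a hedged sketch of applying the change through the Connect REST API (connector name and endpoint are assumptions; note that PUT .../config expects the complete configuration, so the existing settings must be repeated where the ellipsis is):

$ curl -X PUT -H "Content-Type: application/json" \
    localhost:8083/connectors/my-connector/config -d '{
    ... the existing connector settings ...,
    "decimal.handling.mode": "string"
  }'

One caveat: the new mode only affects change events produced after the update; records already written to the topic keep the precise (binary) encoding.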
$ mvn clean install
...
[INFO] Debezium UI Build Aggregator ...................... SUCCESS [ 0.181 s]
[INFO] Debezium UI Frontend .............................. SUCCESS [ 47.648 s]
[INFO] Debezium UI Backend ............................... FAILURE [ 1.257 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 49.275 s
[INFO] Finished at: 2021-04-09T09:00:20+00:00
[INFO] Final Memory: 49M/367M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.quarkus:quarkus-maven-plugin:1.12.1.Final:prepare (default) on project debezium-ui-backend: Execution default of goal io.quarkus:quarkus-maven-plugin:1.12.1.Final:prepare failed: Unable to load the mojo 'prepare' (or one of its required components) from the plugin 'io.quarkus:quarkus-maven-plugin:1.12.1.Final': com.google.inject.ProvisionException: Guice provision errors:
[ERROR]
[ERROR] 1) Error injecting: private org.eclipse.aether.spi.log.Logger org.eclipse.aether.internal.impl.DefaultLocalRepositoryProvider.logger
[ERROR] while locating org.eclipse.aether.internal.impl.DefaultLocalRepositoryProvider
[ERROR] while locating java.lang.Object annotated with *
[ERROR] at org.eclipse.sisu.wire.LocatorWiring
[ERROR] while locating org.eclipse.aether.impl.LocalRepositoryProvider
[ERROR] for parameter 8 at org.eclipse.aether.internal.impl.DefaultRepositorySystem.<init>(Unknown Source)
[ERROR] while locating org.eclipse.aether.internal.impl.DefaultRepositorySystem
[ERROR] while locating java.lang.Object annotated with *
[ERROR] at ClassRealm[plugin>io.quarkus:quarkus-maven-plugin:1.12.1.Final, parent: sun.misc.Launcher$AppClassLoader@70dea4e]
[ERROR] while locating io.quarkus.maven.QuarkusBootstrapProvider
[ERROR] while locating io.quarkus.maven.PrepareMojo
[ERROR] at ClassRealm[plugin>io.quarkus:quarkus-maven-plugin:1.12.1.Final, parent: sun.misc.Launcher$AppClassLoader@70dea4e]
[ERROR] while locating org.apache.maven.plugin.Mojo annotated with @com.google.inject.name.Named(value=io.quarkus:quarkus-maven-plugin:1.12.1.Final:prepare)
[ERROR] Caused by: java.lang.IllegalArgumentException: Can not set org.eclipse.aether.spi.log.Logger field org.eclipse.aether.internal.impl.DefaultLocalRepositoryProvider.logger to org.eclipse.aether.internal.impl.PlexusLoggerFactory
[ERROR] at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:167)
[ERROR] at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:171)
[ERROR] at sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81)
[ERROR] at java.lang.reflect.Field.set(Field.java:764)
...
2021-04-09 13:22:01,878 INFO || 3699 records sent during previous 00:00:39.5, last recorded offset: {transaction_id=null, lsn_proc=374194546795464, lsn_commit=374194546795464, lsn=374194546513840, txId=2675225950, ts_usec=1617974521360503} [io.debezium.connector.common.BaseSourceTask]
2021-04-09 13:22:16,507 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:22:16,507 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:22:16,512 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Finished commitOffsets successfully in 5 ms [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:22:46,513 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:22:46,513 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:23:16,548 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:23:16,548 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:23:21,456 INFO || 785807 records sent during previous 00:01:19.578, last recorded offset: {transaction_id=null, lsn_proc=374194549319488, lsn_commit=374194549319488, lsn=374193616936160, txId=2675220979, ts_usec=1617974521525094} [io.debezium.connector.common.BaseSourceTask]
2021-04-09 13:23:46,548 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:23:46,548 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:24:16,549 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:24:16,549 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:24:46,549 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:24:46,549 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:25:16,550 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:25:16,550 INFO || WorkerSourceTask{id=shipment-order-connector-v1-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2021-04-09 13:26:01,454 INFO || 1607505 records sent during previous 00:02:39.998, last recorded offset: {transaction_id=null, lsn_proc=374194549319488, lsn_commit=374194549319488, lsn=374194350949688, txId=2675220979, ts_usec=1617974521525094} [io.debezium.connector.common.BaseSourceTask]
2021-04-09 13:26:16,551 INFO
Hello, everyone!
Does anyone know how to consume messages from Kafka through a REST API? I have deployed the Debezium Postgres tutorial to AWS ECS, and I can connect to Postgres and Kafka Connect and use their REST APIs to operate them. However, when I connect to the Kafka endpoint I get this:
curl.exe -H "Accept:application/json" <load balancer end point>:9092/
curl: (52) Empty reply from server
Any pointers or reference will be greatly appreciated.
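(Side note on the empty reply: port 9092 speaks the binary Kafka protocol, not HTTP, so plain Kafka has no REST endpoint for curl to hit. A hedged alternative sketch using the console consumer; the topic name assumes the tutorial's default dbserver1.inventory.customers:)

$ bin/kafka-console-consumer.sh \
    --bootstrap-server <load balancer end point>:9092 \
    --topic dbserver1.inventory.customers \
    --from-beginning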
Dear Debezium community, I would like to (re)open a discussion that you have probably already had many times. In my company I use a PostgreSQL cluster containing two instances.
Currently there are some Debezium connectors active against this cluster, and in a perfect world they are doing great. However, when the cluster experiences a switchover to the standby instance, the whole challenge starts: the replication slots need to be recreated, and we need to be sure whether all data was transferred to Kafka.
This situation has bothered me a lot since we took Debezium to production, and I haven't yet found a good solution for it.
The best thing I have come up with so far is a job that restarts failed connectors every 30 seconds (see the sketch below), but this is still not quite satisfying.
I also stumbled across the blog post about "pglogical" you referred to in the documentation. At this point I'm not yet skilled enough in Postgres to judge the possibilities. I also thought about creating the replication slots at the time of the failover, so the connectors already have the slots available when they get restarted.
I would greatly appreciate any kind of hint or suggestion you have, since this topic drains my confidence in using Debezium with Postgres going forward.
thx a lot in advance
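For reference, a hedged sketch of the restart job mentioned above (the Connect endpoint and the use of jq are assumptions):

# restart-failed.sh: every 30 seconds, restart task 0 of any connector whose first task is FAILED
while true; do
  for c in $(curl -s localhost:8083/connectors | jq -r '.[]'); do
    state=$(curl -s "localhost:8083/connectors/$c/status" | jq -r '.tasks[0].state')
    if [ "$state" = "FAILED" ]; then
      curl -s -X POST "localhost:8083/connectors/$c/tasks/0/restart"
    fi
  done
  sleep 30
done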
I'm using the publication.autocreate.mode: filtered option. Now I want to extend the list of tables. I added a new one, and as I can see in the connector config via the API, the table was added and the table.include.list field was successfully updated. But when I checked the tables included in the publication, I saw only the two old tables, without the new one. Should I re-create the connector to apply these settings, or is it possible to add something to the configuration so that the publication is altered/re-created when the table list changes?
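(In case it helps, a hedged workaround is to adjust the publication by hand; dbz_publication is Debezium's default publication name, but the schema and table names here are assumptions:)

$ psql -c 'ALTER PUBLICATION dbz_publication ADD TABLE public.my_new_table;'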
[Producer clientId=xxx] Bootstrap broker xxx:9093 (id: -1 rack: null) disconnected [org.apache.kafka.clients.NetworkClient]
[Consumer clientId=xxx] Bootstrap broker xxx:9093 (id: -1 rack: null) disconnected [org.apache.kafka.clients.NetworkClient]
Hi All.
Any specifics for connecting to Kafka behind a load balancer?
I have deployed the Debezium Postgres tutorial docker-compose to AWS Fargate. It put Kafka behind a load balancer with a single DNS name and port. That host:port itself is accessible, but when I try to list topics I get this:
I have no name!@320aed83a282:/opt/bitnami/kafka$ bin/kafka-topics.sh --bootstrap-server audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com:9092 --list
[2021-04-12 16:35:06,512] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/172.31.30.198:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
And 172.31.30.198 is not an IP the broker's DNS name resolves to.
nslookup audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com
Server: Fios_Quantum_Gateway.fios-router.home
Address: 192.168.1.1
Non-authoritative answer:
Name: audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com
Addresses: 18.210.98.125
3.225.138.174
52.22.188.95
23.21.51.61
What should I do?
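(A hedged pointer: the broker answers clients with its advertised listener address, which here is the private IP 172.31.30.198, unreachable from outside the VPC. One sketch of a fix, assuming the Bitnami image visible in the shell prompt above and its KAFKA_CFG_* env-var convention, is to advertise the load balancer name in the compose file:)

environment:
  - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://audit-LoadB-PH107PYVPE77-869abac555f00a22.elb.us-east-1.amazonaws.com:9092

(With several brokers behind one LB address this still breaks partition-leader routing, so it really only suits a single-broker setup.)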
What are best practices for running Debezium on AWS? It seems that the docker-compose-to-ECS translation works poorly. Any articles or hints will be greatly appreciated.
Specific questions:
Hello Team!
We have 2 SQL Servers running as an Always On cluster, with Debezium 1.2.5 reading data from the replica server.
Sometimes when the connector starts, we see only the following entries in the log:
2021-04-12T20:57:13.788297973Z 2021-04-12T20:57:13.788+00:00 INFO || Creating connector CONNECTOR-NAME of type io.debezium.connector.sqlserver.SqlServerConnector [org.apache.kafka.connect.runtime.Worker]
2021-04-12T20:57:13.790568560Z 2021-04-12T20:57:13.790+00:00 INFO || Instantiated connector CONNECTOR-NAME with version 1.2.5.Final of type class io.debezium.connector.sqlserver.SqlServerConnector [org.apache.kafka.connect.runtime.Worker]
2021-04-12T20:57:13.848842136Z 2021-04-12T20:57:13.848+00:00 INFO || Finished creating connector CONNECTOR-NAME [org.apache.kafka.connect.runtime.Worker]
and Debezium doesn't try to fetch data from the change table, even though the latter is being updated with new records and, more confusingly, the connector status is RUNNING.
When we restart the pod with Debezium, it starts working fine and performs the expected select queries as usual.
The other Debezium instance, which runs against a single SQL Server (not an Always On cluster), never suffers from this issue.
*Connector config has "database.applicationIntent": "ReadOnly" option set.
Could you please advise whether there are any integration issues with MSSQL Always On clusters?
I'm receiving an error from AWS MSK when starting the debezium/connect container (no tasks running):
connect_1 | 2021-04-13 17:40:39,825 INFO || [Worker clientId=connect-1, groupId=1] SyncGroup failed: The coordinator is not available. Marking coordinator unknown. Sent generation was Generation{generationId=11, memberId='connect-1-bf4841a9-4201-4148-8cb4-d30e36f43483', protocol='sessioned'} [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
connect_1 | 2021-04-13 17:40:39,825 INFO || [Worker clientId=connect-1, groupId=1] Group coordinator XXXX (id: 2147483645 rack: null) is unavailable or invalid due to cause: error response COORDINATOR_NOT_AVAILABLE.isDisconnected: false. Rediscovery will be attempted. [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
connect_1 | 2021-04-13 17:40:39,825 INFO || [Worker clientId=connect-1, groupId=1] Rebalance failed. [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
connect_1 | org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
connect_1 | 2021-04-13 17:40:39,925 INFO || [Worker clientId=connect-1, groupId=1] Discovered group XXXX (id: 2147483645 rack: null) [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
Has anyone else encountered this issue when starting a connector against AWS MSK?
Hi @Naros. I am using Debezium for Oracle (v11.2.0.4) and tried out the new v1.5. All our table names are case sensitive and are in CAPS, e.g. "SCHOOLSCHEMA.STUDENTS".
It was fine in v1.4 with "database.tablename.case.insensitive": false (the default).
However, in v1.5 I see no such option, and the connector throws an error like: 'SCHOOLSCHEMA.students' has no supplemental logging configured. Execute <supplemental_sql_statement> on the table.
even though logging is enabled on (ALL) columns.
Notice how the 'students' table name is lowercase in the error line of the log.
Any recommendation on how to make sure the table name stays in CAPS (I need it to be case sensitive) in Debezium v1.5?
I know you guys don't mention support for Oracle 11g explicitly, but I really want to use 1.5 for the new "snapshot.select.statement.overrides" property.
Btw, a big thanks to all the developers of this amazing tool.
Setting database.tablename.case.insensitive=false in the connector configuration manually should resolve the problem. In 1.6 I'm planning to remove this entirely, and then it should no longer be a problem.
CREATE TABLE "MYTABLE" ...
where the table name is double-quoted?
Yes, I have set database.tablename.case.insensitive=false while using the v1.5 connector. However, the table name is being converted to lowercase and the supplemental-logging-not-enabled error is thrown.
This error doesn't happen in v1.4 though.
As for table names being case sensitive in Oracle 11: my bad, they might not be case sensitive as I said. However, I thought they might be, since the issue comes back in v1.4 when I set database.tablename.case.insensitive=true. As far as I can tell, this option is not being respected in v1.5?
Apologies. I may have made a mistake while testing it out in v1.5.
I have removed the containers and tried again with v1.5, setting database.tablename.case.insensitive=false, and it works as expected with my table names staying in CAPS.
Sorry for raising a false case. Really appreciate your effort in replying quickly. Thank you!
Note: I don't know if this is relevant, but between the previous testing and the current successful testing I removed the "database.oracle.version": "11" property. Would that property have made a difference?