Chris Cranford
@Naros
This didn't come up until we added default value support in DBZ-1491
This is what triggered this problem with the Oracle driver, and only under LogMiner, which uses a thin rather than an OCI connection.
Jiri Pechanec
@jpechane
We are in real need of an Oracle CI matrix :-(
Chris Cranford
@Naros
Indeed.
Chris Cranford
@Naros
So @jpechane, I spotted one issue with system properties. Inside Configuration#withSystemProperties, we apply toLowerCase() to the keys, which will cause the driver not to recognize the setting, since it explicitly expects camel case. Is there a reason why we explicitly convert them to lower case?
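For illustration, a minimal standalone sketch of the effect (the lower-casing mimics what Configuration#withSystemProperties is described as doing here; oracle.jdbc.useFetchSizeWithLongColumn is a real camel-case thin-driver key, used only as an example):

import java.util.Properties;

public class LowerCaseKeys {
    public static void main(String[] args) {
        Properties source = new Properties();
        source.setProperty("oracle.jdbc.useFetchSizeWithLongColumn", "true");

        // What a toLowerCase() on the keys produces:
        Properties lowered = new Properties();
        for (String key : source.stringPropertyNames()) {
            lowered.setProperty(key.toLowerCase(), source.getProperty(key));
        }

        // The driver's case-sensitive lookup by the camel-case key now misses the setting:
        System.out.println(lowered.getProperty("oracle.jdbc.useFetchSizeWithLongColumn")); // null
    }
}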
Jiri Pechanec
@jpechane
@Naros I don't think we should be doing anything with system properties when JDBC driver supports the configuration.
Hossein Torabi
@blcksrx

Hi guys. Recently I found an issue related to Debezium: Debezium sets the exact type of a database field on connect.name. For example, this schema:

{\"name\":\"created_at\",\"type\":[\"null\",{\"type\":\"string\",\"connect.version\":1,\"connect.name\":\"io.debezium.time.ZonedTimestamp\"}]

And when the user wants to use a sink connector like the JDBC sink connector, the field's type is set wrong (in this example it will be set to String). So it would be better to convert the types in the io.debezium.transforms.ExtractNewRecordState class. Do you agree?
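To illustrate the proposal, a rough sketch (class and method names are mine, not Debezium's) of what such a conversion inside an SMT could look like: detect the io.debezium.time.ZonedTimestamp schema name and rewrite both the field schema and the value to Connect's Timestamp type:

import java.time.OffsetDateTime;
import java.util.Date;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Timestamp;

public class ZonedTimestampMapper {
    static final String ZONED_TS = "io.debezium.time.ZonedTimestamp";

    // Schema side: replace the semantically-named string with a Connect Timestamp.
    static Schema convertSchema(Schema fieldSchema) {
        return ZONED_TS.equals(fieldSchema.name())
                ? Timestamp.builder().optional().build()
                : fieldSchema;
    }

    // Value side: parse the ISO-8601 string into the java.util.Date Connect expects.
    static Object convertValue(Schema fieldSchema, Object value) {
        if (value != null && ZONED_TS.equals(fieldSchema.name())) {
            return Date.from(OffsetDateTime.parse(value.toString()).toInstant());
        }
        return value;
    }
}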

5 replies
Hussain
@pshussain
State: FAILED
Trace: java.lang.RuntimeException: Couldn't obtain database name
at io.debezium.connector.sqlserver.SqlServerConnection.retrieveRealDatabaseName(SqlServerConnection.java:423)
at io.debezium.connector.sqlserver.SqlServerConnection.<init>(SqlServerConnection.java:91)
at io.debezium.connector.sqlserver.SqlServerConnectorTask.start(SqlServerConnectorTask.java:72)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'sa'. ClientConnectionId:6985933a-b520-4407-bb90-e4bb4fd7c73d
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:258)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:104)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:5036)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3668)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:94)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3627)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7194)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2935)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2456)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:2103)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1950)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1162)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:735)
at io.debezium.jdbc.JdbcConnection.lambda$patternBasedFactory$1(JdbcConnection.java:191)
at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:789)
at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:784)
at io.debezium.jdbc.JdbcConnection.queryAndMap(JdbcConnection.java:555)
at io.debezium.jdbc.JdbcConnection.queryAndMap(JdbcConnection.java:429)
at io.debezium.connector.sqlserver.SqlServerConnection.retrieveRealDatabaseName(SqlServerConnection.java:418)
... 11 more
@Naros @jpechane is there any workaround?
Jiri Pechanec
@jpechane
@pshussain Hi, it looks like wrong credentials: Login failed for user 'sa'
Hussain
@pshussain
ok let me check
Chris Cranford
@Naros
@jpechane Regarding the COLUMN_DEF (default-value) issue with Oracle, supposedly there is a different workaround: you should select the LONG / LONG RAW columns first from the result set, since the driver streams those at the start of the result set, followed by the other columns. I'm going to see if just moving the order of ResultSet fetches makes a difference.
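A hedged sketch of that read order (schema/table names are placeholders): per the JDBC spec, COLUMN_DEF is column 13 of DatabaseMetaData.getColumns(), and in Oracle it is LONG-typed, so it would be read before the other columns of each row:

import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ColumnDefFirst {
    static void readColumns(DatabaseMetaData metadata) throws SQLException {
        try (ResultSet rs = metadata.getColumns(null, "MYSCHEMA", "MYTABLE", null)) {
            while (rs.next()) {
                // Read the streamed LONG column before anything else in the row...
                String defaultValue = rs.getString(13); // COLUMN_DEF
                // ...then the remaining columns in any order.
                String columnName = rs.getString("COLUMN_NAME");
                int jdbcType = rs.getInt("DATA_TYPE");
            }
        }
    }
}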
Chrisss93
@Chrisss93
Just wanted to pop in and say, the Debezium dev team is doing a fantastic job! One of the most transparent, documented and responsive OSS projects I've come across. Here's to 1.4 :)
2 replies
Kaiux
@Kaiux
Hi, I want to control the order of snapshots and follow the order specified in the configuration "table.whiteList". Can the code of this feature be merged into the code base? If possible, I can submit a PR.
Thank you.
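Illustration only (this is the behavior being requested, not something the connector guarantees today; the table names are made up): ordering captured tables by their position in the configured list could be as simple as:

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class SnapshotOrder {
    public static void main(String[] args) {
        List<String> whitelist = Arrays.asList("db.orders", "db.customers", "db.items");
        List<String> captured  = Arrays.asList("db.items", "db.orders", "db.customers");

        // Snapshot the tables in the order they appear in the configured list.
        List<String> snapshotOrder = captured.stream()
                .sorted(Comparator.comparingInt(whitelist::indexOf))
                .collect(Collectors.toList());

        System.out.println(snapshotOrder); // [db.orders, db.customers, db.items]
    }
}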
4 replies
Igor Orlenko
@iorlenko
Hey guys, need help! I configured the debezium/connect:1.2 image to use SASL authentication (by using e.g. the CONNECT_SECURITY_PROTOCOL env variable). Kafka Connect started fine and created the specified offset topics. The ProducerConfig values from the log look fine. But once I POST my Postgres config, it writes new ProducerConfig values to the log with the wrong e.g. security.protocol, so the producer fails to connect to Kafka. Where are these parameters taken from? How do I specify the producer/consumer config? Why are they overridden by a connector config?
Also, I tried to specify the database.history.producer.* parameters, without any success.
hussain-k1
@hussain-k1
    ... 33 more

Caused by: java.lang.NoSuchFieldError: CONFIG_DEFINITION
at io.debezium.connector.sqlserver.SqlServerConnectorConfig.<clinit>(SqlServerConnectorConfig.java:328)
at io.debezium.connector.sqlserver.SqlServerConnector.config(SqlServerConnector.java:57)
at org.apache.kafka.connect.connector.Connector.validate(Connector.java:129)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:313)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:745)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:742)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:342)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more

@Naros @jpechane Any idea on the above error?

3 replies
It started coming suddenly.
Ekansh Bansal
@Ace-Bansal
Hello folks, has someone here worked with Debezium and the BigQuery connector built by WePay? Can someone tell me whether dropping and modifying columns is allowed using that sink connector (since BigQuery doesn't allow dropping or modifying columns in an existing table)? I've tested the edge case of adding columns in my database, and that gets reflected in my BigQuery warehouse; however, I'm not able to delete or modify columns, since the BigQuery connector throws an error saying that the provided schema isn't correct (after I drop/modify the columns).
1 reply
Alon Reznik
@Alonreznik

Hi All.
When there's a connection timeout from the DB (PG in this case) and we're trying to create a new connector, we're encountering a 504 GATEWAY TIMEOUT error from Kafka Connect, with HTML coming back. Can the error be passed to the Kafka Connect REST API response in this case?

We're using Debezium 1.3.
This is the error in the logs:

{"level":"ERROR","timestamp":"2020-10-15 09:35:47,365","thread":"pool-2-thread-12","line":"178","message":"Error in JSch connection","category":"io.debezium.jdbc.JdbcConnection","component":"NON","throwable":"com.jcraft.jsch.JSchException: java.net.ConnectException: Connection timed out (Connection timed out)
    at com.jcraft.jsch.Util.createSocket(Util.java:349)
    at com.jcraft.jsch.Session.connect(Session.java:215)
    at com.jcraft.jsch.Session.connect(Session.java:183)
    at io.debezium.jdbc.JdbcConnection.configureSsh(JdbcConnection.java:166)
    at io.debezium.jdbc.JdbcConnection.<init>(JdbcConnection.java:441)
    at io.debezium.jdbc.JdbcConnection.<init>(JdbcConnection.java:420)
    at io.debezium.connector.postgresql.connection.PostgresConnection.<init>(PostgresConnection.java:75)
    at io.debezium.connector.postgresql.connection.PostgresConnection.<init>(PostgresConnection.java:86)
    at io.debezium.connector.postgresql.PostgresConnector.validate(PostgresConnector.java:102)
    at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:375)
    at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.net.ConnectException: Connection timed out (Connection timed out)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at java.base/java.net.Socket.connect(Socket.java:558)
    at java.base/java.net.Socket.<init>(Socket.java:454)
    at java.base/java.net.Socket.<init>(Socket.java:231)
    at com.jcraft.jsch.Util.createSocket(Util.java:343)
    ... 15 more

And this is the response from Kafka Connect API:

<html>
    <head>
        <title>504 Gateway Time-out</title>
    </head>
    <body>
        <center>
            <h1>504 Gateway Time-out</h1>
        </center>
    </body>
</html>

Thanks!

lanapetrova
@lanapetrova
Hi guys! I've found an issue in the io.debezium.jdbc.JdbcConnection class (debezium-core project) when running the Oracle connector. It seems the problem was introduced with this commit: https://github.com/debezium/debezium/commit/a8ea5e2256c77cfa438c1e9ae92d46f7bb79140f#diff-f769ed08a5bdabef79248e0da6391cad3b1dd60e80848d22f342779f4999833e. Before this commit, a column default value was not extracted from metadata. Now, if an Oracle table column has a default value set, this line of code (current master branch, line 1179) - final String defaultValue = columnMetadata.getString(13); - throws an exception:

Caused by: java.sql.SQLException: Stream has already been closed
at oracle.jdbc.driver.LongAccessor.getBytesInternal(LongAccessor.java:127)
at oracle.jdbc.driver.T2CLongAccessor.getBytesInternal(T2CLongAccessor.java:69)
at oracle.jdbc.driver.Accessor.getBytes(Accessor.java:897)
at oracle.jdbc.driver.LongAccessor.getString(LongAccessor.java:154)
at oracle.jdbc.driver.GeneratedStatement.getString(GeneratedStatement.java:287)
at oracle.jdbc.driver.GeneratedScrollableResultSet.getString(GeneratedScrollableResultSet.java:374)
at io.debezium.jdbc.JdbcConnection.readTableColumn(JdbcConnection.java:1179)
at io.debezium.connector.oracle.OracleConnection.readTableColumn(OracleConnection.java:171)
at io.debezium.jdbc.JdbcConnection.readSchema(JdbcConnection.java:1126)
at io.debezium.connector.oracle.OracleConnection.readSchema(OracleConnection.java:233)
at io.debezium.connector.oracle.OracleSnapshotChangeEventSource.readTableStructure(OracleSnapshotChangeEventSource.java:234)

The problem seems to be related to the ojdbc8 driver. If that line 1179 were surrounded with a try/catch, it would let the snapshot process continue instead of being interrupted.
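A minimal sketch of the suggested guard (not the actual Debezium fix; the class and method names here are illustrative):

import java.sql.ResultSet;
import java.sql.SQLException;

public class DefaultValueGuard {
    // If ojdbc8 has already discarded the LONG-typed COLUMN_DEF stream, fall back
    // to "no default value" so the snapshot can continue instead of aborting.
    static String readDefaultValue(ResultSet columnMetadata) {
        try {
            return columnMetadata.getString(13); // COLUMN_DEF
        } catch (SQLException e) {
            return null; // e.g. "Stream has already been closed"
        }
    }
}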
4 replies
Idrees Mohammed
@imohammed-pt_gitlab
Hello all, can anyone help me with this issue I am facing with the SQL Server connector using Debezium?
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:290)
connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:316)
connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:240)
connect_1 | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
connect_1 | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
connect_1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
connect_1 | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
connect_1 | at java.base/java.lang.Thread.run(Thread.java:834)
connect_1 | Caused by: org.apache.kafka.connect.errors.DataException: BigDecimal has mismatching scale value for given Decimal schema
connect_1 | at org.apache.kafka.connect.data.Decimal.fromLogical(Decimal.java:68)
connect_1 | at org.apache.kafka.connect.json.JsonConverter$13.toJson(JsonConverter.java:206)
connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:606)
connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:693)
connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:693)
connect_1 | at org.apache.kafka.connect.json.JsonConverter.convertToJsonWithEnvelope(JsonConverter.java:581)
connect_1 | at org.apache.kafka.connect.json.JsonConverter.fromConnectData(JsonConverter.java:335)
connect_1 | at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:62)
connect_1 | at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:290)
connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
connect_1 | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
connect_1 | ... 11 more
If anyone can help in this case, please respond. This did not happen in UAT/testing, but once the services launched in production I am seeing this issue.
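For context, a small sketch of why the converter throws: Kafka Connect's Decimal logical type fixes the scale in the schema, and Decimal.fromLogical() rejects any BigDecimal whose scale differs, which is how production data carrying a different scale can trip a connector that ran fine in testing:

import java.math.BigDecimal;

import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;

public class DecimalScaleDemo {
    public static void main(String[] args) {
        Schema schema = Decimal.schema(2); // scale is fixed to 2 in the schema

        Decimal.fromLogical(schema, new BigDecimal("10.50")); // scale 2: ok
        Decimal.fromLogical(schema, new BigDecimal("10.5"));  // scale 1: DataException,
        // "BigDecimal has mismatching scale value for given Decimal schema"
    }
}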
Idrees Mohammed
@imohammed-pt_gitlab
The strange thing is that this connector works correctly for a while and suddenly fails with this error. If I delete the connector and start again from scratch, then it works again until it crashes.
Ashwin
@ashwin027

Hey Guys! I'm looking at the issue DBZ-2328 (https://issues.redhat.com/browse/DBZ-2328), which deals with using an instance name for named SQL Server instances; this seems to be broken at the moment with the Debezium SQL Server connector. I was wondering if there is a temporary workaround for this? I've tried the following scenarios so far with no luck,

With the instance name at the end of the hostname,

"database.server.name": "MyDbServer",
"database.port": "1433",
"database.dbname": "MyDB",
"database.hostname": "MyDbServer\CDC",

With the instance name property, which isn't documented but I did see it in code,

"database.server.name": "MyDbServer",
"database.port": "1433",
"database.dbname": "MyDB",
"database.hostname": "MyDbServer",
"database.instancename": "CDC",

With the instance name before the DB name,

"database.server.name": "MyDbServer",
"database.port": "1433",
"database.dbname": "CDC\MyDB",
"database.hostname": "MyDbServer",

Anything else I could try?

Btw, I did escape the slashes. Gitter seems to be detecting it :)
Teo Stocco
@zifeo
Is anyone aware of any good comparison/wrap-ups between Debezium (https://www.confluent.io/hub/debezium/debezium-connector-mongodb) and the official Mongo Connector (https://www.confluent.io/hub/mongodb/kafka-connect-mongodb)?
locnguyen14061996
@locnguyen14061996
Hi, is there any way to set up the PostgreSQL sink to another schema instead of public?
3 replies
Because we have multiple Postgres DBs and we need to keep them in different schemas in our data warehouse.
Ashwin
@ashwin027

Hey Guys. I decided to try to fix DBZ-2328 (https://issues.redhat.com/browse/DBZ-2328); I have a fix for it, and here is the commit: ashwin027/debezium@efa6cab. I'm from the C# world and this is my first foray into the Java world, so it would be great if anyone from this community could review my code and provide some feedback.

Here is what I have so far,

  • The solution with the named SQL Server instance works. The config should contain either the port or the instance name, but not both, since having both the instance and the port creates an invalid SQL connection string (the original defect; see the URL sketch after this message).
  • All integration tests pass. The approach I've taken is to use a different connection-string pattern only if an instance is specified.

My next steps, for which I will need guidance/help from the experts in this community,

  1. All the integration tests for the SQL Server connector run against a default SQL Server instance on Docker. This will not suffice for this fix, because we will need a named instance. I tried to create a new container using the docker-maven-plugin with an aliased instance, and this didn't work well. Although the containers started up, I couldn't access either of them. I don't have much experience with this plugin, so it would be great to get some pointers on how to run multiple SQL Server containers (if this is a viable approach).

  2. It might be a good idea to run the connector tests in two passes, once on the port-based instance and once on a named instance, although this means we will have two Docker containers, one running on a static port (default) and the other running an aliased (named) instance.

Any thoughts/suggestions on my code and a viable integration-test approach?
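For reference, a hedged illustration of the URL shapes involved (host, instance, and database values are placeholders, following the Microsoft JDBC driver's documented URL syntax):

public class ConnectionStrings {
    public static void main(String[] args) {
        // Default instance, addressed by port:
        String byPort = "jdbc:sqlserver://MyDbServer:1433;databaseName=MyDB";
        // Named instance, resolved via the SQL Browser service:
        String byInstance = "jdbc:sqlserver://MyDbServer\\CDC;databaseName=MyDB";
        // Equivalent property form of the named instance:
        String byProperty = "jdbc:sqlserver://MyDbServer;instanceName=CDC;databaseName=MyDB";
        System.out.println(byPort + "\n" + byInstance + "\n" + byProperty);
    }
}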

Jiri Pechanec
@jpechane
@ashwin027 Hi, thanks for the contribution; please follow https://github.com/debezium/debezium/blob/master/CONTRIBUTE.md#creating-a-pull-request to create a PR for regular review. Related to your second question, we have multiple tests that use Testcontainers to programmatically start and stop containers in Java code; maybe the test for this case could be written in this way. An example is io.debezium.testing.testcontainers.ApicurioRegistryTest.
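A rough sketch in the spirit of that example (the image tag and class name are assumptions): Testcontainers' MSSQLServerContainer can start SQL Server programmatically from within the test, which is how a named-instance variant could be scripted:

import org.testcontainers.containers.MSSQLServerContainer;

public class SqlServerContainerSketch {
    public static void main(String[] args) {
        try (MSSQLServerContainer<?> sqlServer =
                new MSSQLServerContainer<>("mcr.microsoft.com/mssql/server:2017-latest")
                        .acceptLicense()) {
            sqlServer.start();
            // e.g. jdbc:sqlserver://localhost:<mapped-port>
            System.out.println(sqlServer.getJdbcUrl());
        }
    }
}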
Gunnar Morling
@gunnarmorling
hey all, please join me in welcoming @ani-sha who started today as an intern in Red Hat's Debezium team and who will be with us for the next 6 months
Anisha Mohanty
@ani-sha
thanks! @gunnarmorling
Chris Cranford
@Naros
Hi @ani-sha, welcome aboard!
Ashwin
@ashwin027
@jpechane Thanks for the response! I'll follow the example you provided to create some tests. I should have a PR ready for review very soon. Thanks!
Gunnar Morling
@gunnarmorling
@ani-sha, is there a Jira issue for moving jobs from Travis to GH Actions already? If you don't find any, can you log it and let me know the number, and I (or @Naros) can put in some details to get you started.
Chris Cranford
@Naros
@ani-sha @gunnarmorling I believe https://issues.redhat.com/browse/DBZ-1720 is the one you're looking for.
Gunnar Morling
@gunnarmorling
@Naros ah, sweet thx
Could you rewrite that a little bit? It should mention that this is about all repos and their jobs, and provide some context and starting points, so that @ani-sha has a handle.
I'd recommend to move over the website job first, because that's the biggest pain point right now
Chris Cranford
@Naros
Sure.
Gunnar Morling
@gunnarmorling
thx!
Chris Cranford
@Naros
@ani-sha I've added some context to the description in https://issues.redhat.com/browse/DBZ-1720.
If you have any questions or if the information isn't informative enough, please don't hesitate to ping us.
I jotted down a few things that came to mind, so it's possible I might have over-simplified things a bit.
Gunnar Morling
@gunnarmorling
thx, @Naros !
@rk3rn3r hey; so @jpechane will run a bit late, he'll ping us here for getting started with the grooming session
René Kerner
@rk3rn3r
:ok:
Jiri Pechanec
@jpechane
@rk3rn3r @gunnarmorling ping
I am in
René Kerner
@rk3rn3r
there in a second
Gunnar Morling
@gunnarmorling