avatar13
@avatar13
[screenshot: Снимок экрана 2021-04-16 в 12.12.57.png]
@jpechane I'm trying to use a compiled version of debezium-connector-mysql with some changes
[screenshot: Снимок экрана 2021-04-16 в 12.14.37.png]
For testing I use the Debezium Docker container and replace the debezium-connector-mysql jar file
In version 0.9 this method worked =)
CREATE TABLE `keyword_dimension` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `hash` char(32) NOT NULL DEFAULT '' COMMENT 'md5(raw=true) of search_engine, keyword',
  `search_engine` enum('','yandex','google','rambler','alltheweb','aol','ask','bing','yahoo','baidu','mail.ru') NOT NULL DEFAULT '' COMMENT 'Search engine',
  `keyword` char(128) NOT NULL DEFAULT '' COMMENT 'Keyword',
  PRIMARY KEY (`id`),
  KEY `idx_search_engine` (`search_engine`) `CLUSTERING`=YES,
  KEY `idx_keyword` (`keyword`) `CLUSTERING`=YES,
  KEY `idx_hash` (`hash`)
) ENGINE=TokuDB AUTO_INCREMENT=346233 DEFAULT CHARSET=utf8 `compression`='tokudb_zlib'
Jiri Pechanec
@jpechane
@avatar13 This exception is thrown by the code that processes default values in DDL. 0.9 did not have that functionality.
avatar13
@avatar13
@jpechane Do I need to copy the other .jars too?
Jiri Pechanec
@jpechane
@avatar13 You must take the full content of the tar.gz file built by Maven
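For anyone following along, a minimal sketch of what that could look like; the version number, archive path, and plugin directory are assumptions for illustration, not a confirmed recipe:

# Build the connector; Maven produces a plugin archive under target/
mvn clean package -DskipTests -pl debezium-connector-mysql -am

# Copy the full archive into the container and unpack it over the plugin
# directory (debezium/connect images keep connector plugins under /kafka/connect)
docker cp debezium-connector-mysql/target/debezium-connector-mysql-1.5.0-SNAPSHOT-plugin.tar.gz connect:/tmp/
docker exec connect tar -xzf /tmp/debezium-connector-mysql-1.5.0-SNAPSHOT-plugin.tar.gz -C /kafka/connect

# Restart the worker so it picks up the replaced jars
docker restart connect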
avatar13
@avatar13
@jpechane Thank you! I added more jar files from the build into the container, and that helped.
René Kerner
@rk3rn3r
@gunnarmorling I think you were writing some comments while we were talking about debezium/debezium#2273, but I can't see any. Can you check whether you perhaps still have the tab open with the comments not yet submitted?
Gunnar Morling
@gunnarmorling
oh, you're right
just done
René Kerner
@rk3rn3r
:pray:
Thimxn
@thimxns_twitter

Hey guys, I'm currently setting up Debezium (in Kafka) for an MSSQL server (on another machine in the network). So far everything seemed to work fine; however, when checking the status of the connector I get the following response:

{"name":"test-connector","connector":{"state":"RUNNING","worker_id":"localhost:8083"},"tasks":[{"id":0,"state":"FAILED","worker_id":"localhost:8083","trace":"org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata\n"}],"type":"source"}

In the first minute or so after starting it, it used to be this:

{"name":"test-connector","connector":{"state":"RUNNING","worker_id":"localhost:8083"},"tasks":[{"id":0,"state":"RUNNING","worker_id":"localhost:8083"}],"type":"source"}

The initial connector setup looks like the following:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{
  "name": "test-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "192.168.230.30",
    "database.port": "1433",
    "database.user": "***",
    "database.password": "***",
    "database.dbname": "aqotec_Daten",
    "database.server.name": "AQNEU",
    "table.whitelist": "dbo.RM360_L2R12",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "dbhistory.fulfillment"
  }
}'

Furthermore when starting the consumer using this:

sudo docker run -it --rm --name consumer --link zookeeper:zookeeper --link kafka2:kafka2 debezium/kafka:1.1 watch-topic -a AQNEU.aqotec_Daten.dbo.RM360_L2R12 --max-messages 1

I get no errors, but it also doesn't track any of the changes:

WARNING: Using default BROKER_ID=1, which is valid only for non-clustered installations.
Using ZOOKEEPER_CONNECT=172.17.0.2:2181
Using KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.17.0.5:9092
Using KAFKA_BROKER=172.17.0.4:9092
Contents of topic AQOTECNEU.aqotec_Daten.dbo.RM360_L2R12:

If I used the wrong watch-topic -a argument it would throw an error, so I guess it's detecting the connector, but the connector is not working.

Since I'm trying to give you as much information as I can, here is my Docker setup:

CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                                                                          NAMES
17ee16d7d123        debezium/kafka:1.1       "/docker-entrypoint.…"   15 minutes ago      Up 15 minutes       8778/tcp, 9092/tcp, 9779/tcp                                                                   consumer
f41d9023971b        debezium/connect         "/docker-entrypoint.…"   8 days ago          Up 15 minutes       8778/tcp, 9092/tcp, 0.0.0.0:8083->8083/tcp, 9779/tcp                                           connect
cb6da30727b3        debezium/kafka:1.1       "/docker-entrypoint.…"   8 days ago          Up 8 days           8778/tcp, 9779/tcp, 0.0.0.0:9092->9092/tcp                                                     kafka2
d027f3e800a7        debezium/zookeeper:1.4   "/docker-entrypoint.…"   8 days ago          Up 8 days           0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 8778/tcp, 0.0.0.0:3888->3888/tcp, 9779/tcp    zookeeper

I'm rather new to Debezium and Docker, so it could be just some simple setup step that I messed up.

I already tried to find solutions for this but couldn't find any that worked, so I decided to ask here. Thanks already for your help.
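A plausible first check (an assumption based on the TimeoutException, not a confirmed diagnosis): inside the connect container, localhost:9092 refers to the container itself, so the database.history.kafka.bootstrap.servers=localhost:9092 setting in the config above may never reach the broker. A sketch of how one might verify this, assuming the connect container was started with --link kafka2:kafka2 and using the /kafka/bin tool path of the debezium images:

# Can the broker be reached from inside the connect container?
docker exec -it connect /kafka/bin/kafka-topics.sh --bootstrap-server kafka2:9092 --list

# If that works while localhost:9092 does not, pointing the connector at the
# linked broker instead may help:
#   "database.history.kafka.bootstrap.servers": "kafka2:9092"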

Gunnar Morling
@gunnarmorling
@Naros do you think you have an answer for this user: https://twitter.com/karim_elna/status/1383081141465403407
perhaps an enhancement request falls out of it? I'm not sure
Chris Cranford
@Naros
@gunnarmorling It sounds to me like the SMT complains that it cannot find the field aggregatetype, since Hibernate creates it as "aggregate_type". Is that how you read it?
Chris Cranford
@Naros
We added a way to customize the naming at build time; I think it's something like quarkus.debezium-outbox.aggregate-type.name=<value>
That way the generated XML could match whatever naming strategy they needed in the db.
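If that recollection is right, the setting would go into the Quarkus application's application.properties; a sketch, with the property name as recalled above and a hypothetical value:

# application.properties of the app using the Debezium outbox extension
# (property name as recalled above; value hypothetical)
quarkus.debezium-outbox.aggregate-type.name=aggregate_type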
Gunnar Morling
@gunnarmorling
@Naros yes, exactly, that's how i read it
but this guy is using spring boot
so not our extension for quarkus
so probably just need to adjust the column names?
Chris Cranford
@Naros
Ah, well in that case wouldn't they just change the route.by.field in the connector's SMT configuration to use aggregate_type rather than the default aggregatetype?
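For reference, a sketch of the relevant connector-configuration fragment; the transform alias "outbox" is an arbitrary choice, while EventRouter and route.by.field are the documented SMT class and option mentioned above:

"transforms": "outbox",
"transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
"transforms.outbox.route.by.field": "aggregate_type"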
Gunnar Morling
@gunnarmorling
ah, yes, right
that'd work too
Chris Cranford
@Naros
Pretty sure we made sure that all this stuff was super flexible re naming.
Gunnar Morling
@gunnarmorling
do you want to reply?
or should i
Chris Cranford
@Naros
Could you? I don't use my Twitter too often and don't recall the pwd.
Gunnar Morling
@gunnarmorling
LOL, ok :)
Chris Cranford
@Naros
tbh, looking at the quarkus extension I was struggling to remember what I wrote lol
it feels like ages since I last looked at it lol
Gunnar Morling
@gunnarmorling
hehe, yeah, has been a while
Sanjeev Singh
@sanjeevhbti45_twitter

Can someone help here? I am facing the below error in the Debezium MySQL connector while deleting rows.
Error: eventType=EXT_DELETE_ROWS

{"name":"connect-01","connector":{"state":"RUNNING","worker_id":"100.0.0.111:8083"},"tasks":[{"id":0,"state":"FAILED","worker_id":"00.0.0.111:8083","trace":"org.apache.kafka.connect.errors.ConnectException: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1618682018000, eventType=EXT_DELETE_ROWS, serverId=520243567, headerLength=19, dataLength=8137, nextPosition=3791091837, flags=0}\n\tat io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)\n\tat io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:207)\n\tat io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:600)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1130)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:978)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:581)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:860)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: java.lang.RuntimeException: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1618682018000, eventType=EXT_DELETE_ROWS, serverId=520243567, headerLength=19, dataLength=8137, nextPosition=3791091837, flags=0}\n\tat io.debezium.connector.mysql.BinlogReader.handleServerIncident(BinlogReader.java:668)\n\tat io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:583)\n\t... 5 more\nCaused by: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1618682018000, eventType=EXT_DELETE_ROWS, serverId=520243567, headerLength=19, dataLength=8137, nextPosition=3791091837, flags=0}\n\tat com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:300)\n\tat com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.nextEvent(EventDeserializer.java:223)\n\tat io.debezium.connector.mysql.BinlogReader$1.nextEvent(BinlogReader.java:249)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:957)\n\t... 3 more\nCaused by: java.io.EOFException\n\tat com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.read(ByteArrayInputStream.java:190)\n\tat java.base/java.io.InputStream.read(InputStream.java:271)\n\tat java.base/java.io.InputStream.skip(InputStream.java:531)\n\tat com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.skipToTheEndOfTheBlock(ByteArrayInputStream.java:216)\n\tat com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:296)\n\t... 6 more\n"}],"type":"source"}

Gunnar Morling
@gunnarmorling
hey @jpechane, good morning
got something for you (maybe) :)
the other day, i wanted to understand how offsets are handled for source connectors with more than one task
so to see what will change for the sql server work
in order to do so, i created a super-simple source connector for etcd: https://github.com/gunnarmorling/kcetcd
(etcd has an easy-to-use "watch" feature which makes it possible to implement CDC-style functionality)
i consider this a learning vehicle for me, us, and others; e.g. we can use this for exploring that new exactly-once support in KC
but, i also thought you might be using it for a first PoC implementation of the new snapshotting
it's completely independent of debezium, a plain multi-task source
with proper resumeability
curious what you think :)
René Kerner
@rk3rn3r
Does that mean changes are coming to the MongoDB connector soon?
Jiri Pechanec
@jpechane
@rk3rn3r Hi, no, it does not
René Kerner
@rk3rn3r
thx for the detailed explanation :laughing: