Chris Cranford
@Naros
It would appear both 12 and 19 have this limitation.
Gunnar Morling
@gunnarmorling
i see
Jiri Pechanec
@jpechane
@Naros I am in for the warning. Just the wording should be careful to imply there might be a problem, not that there is one, and ask the user to verify.
hkokay
@h_kokay_twitter
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:284)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:338)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:256)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:238)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1266480 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.

Can someone please tell me how to resolve this? This is coming from a Postgres source connector... Is there a configuration I can set?
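(Editor's note: the `RecordTooLargeException` above is thrown by the Connect worker's internal producer, so it is the producer's `max.request.size` that needs raising, not just broker or topic limits. A minimal sketch, assuming Kafka Connect 2.3+ with per-connector client overrides enabled via `connector.client.config.override.policy=All` on the worker:)

```properties
# Per-connector producer override (requires Kafka Connect 2.3+ and
# connector.client.config.override.policy=All in the worker config).
# 2 MB comfortably covers the 1266480-byte record from the log above:
producer.override.max.request.size=2097152
```

(The topic's `max.message.bytes` may also need to be raised so the broker accepts the larger record.)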
Gunnar Morling
@gunnarmorling
hey @jpechane good morning!
I'll be 5 min late, will ping you
Jiri Pechanec
@jpechane
@gunnarmorling Good morning, ok
Gunnar Morling
@gunnarmorling
@jpechane ready :)
Gunnar Morling
@gunnarmorling
@jpechane any comments on debezium/debezium#2822
(apart from the merge conflicts)
objections to merging it?
Hossein Torabi
@blcksrx
Guys, can you help me on this? debezium/debezium#2823
I have no idea why the test failed
Gunnar Morling
@gunnarmorling
@Naros hey there
not much more missing for debezium/debezium#2817 right?
Chris Cranford
@Naros
@gunnarmorling Nope not that I am aware, I'll remove the logging of the values tomorrow when I'm back and then we can merge it.
Chris Cranford
@Naros
@gunnarmorling Went ahead and sent the commit for the fix since it was real quick and I had to deal with another change as well.
I'll follow-up on any other PR comments tomorrow morning when I'm back.
Gunnar Morling
@gunnarmorling
@Naros sounds good!
and sorry, didn't know you were out
hey @jpechane
can you remind me: when did we want to do the master -> main change?
after Alpha1 is out?
Jiri Pechanec
@jpechane
@gunnarmorling Hi, yes exactly
Gunnar Morling
@gunnarmorling
and Alpha1 is planned for Monday next week, correct, @jpechane?
Jiri Pechanec
@jpechane
@gunnarmorling Yes
Gunnar Morling
@gunnarmorling
ok, cool; I'll send out an email then to announce this change
the only real impact for folks would be adjusting their own local names to point to main rather than master
Payam
@pysf
Hey everybody, I am receiving a NullPointerException on the MySQL connector. The issue blocks the whole process on my side. I am not sure whether it is a misconfiguration or a bug. I would appreciate any comments on it.
https://issues.redhat.com/browse/DBZ-4166
Chris Cranford
@Naros
Hi @pysf you mention that another table works, does the other table have a _ in its name?
Payam
@pysf
@Naros Yes the other table name is cached_products.
Gunnar Morling
@gunnarmorling
@Naros hey; can you send a reminder to the ML about the chat room change scheduled for tomorrow (as a reply to your original one)
Chris Cranford
@Naros
@gunnarmorling Yep, was going to do that around mid-day but I can do it now if you'd like.
Hi @pysf my apologies I didn't see you added the full log to the issue; okay I see the problem; looks like the parsing error allows the connector to proceed due to database.history.skip.unparseable.ddl=true; so we need to fix the grammar bug you've illustrated, thanks!
That's why you get the NPE
I don't believe that's normally a problem during streaming, but I don't think we intended to tolerate CREATE TABLE syntax problems during snapshot the way we do during streaming.
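(Editor's note: the setting discussed here is `database.history.skip.unparseable.ddl`; a sketch of the fail-fast alternative, using the property name from the Debezium MySQL connector configuration:)

```properties
# Fail immediately on DDL the parser cannot handle (this is the default),
# instead of skipping the statement and hitting a missing-schema NPE later:
database.history.skip.unparseable.ddl=false
```

(Skipping unparseable DDL trades an immediate, diagnosable failure for a schema-history gap that can surface much later as an NPE, as in this report.)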
Gunnar Morling
@gunnarmorling
@Naros any time which suits you
thx!
Payam
@pysf
@Naros Thank you so much! Please let me know if you need logs on different levels(Trace, Debug) or other details.
I also attached the docker-compose.yml file to the issue.
Chris Cranford
@Naros
Thanks @pysf, I believe we have all we need from what you've provided but if that changes we'll let you know on the jira. Thanks for the report!
Gunnar Morling
@gunnarmorling
all, updated the roadmap a bit for Debezium 1.8: https://twitter.com/gunnarmorling/status/1450734810402537475
Anisha Mohanty
@ani-sha
Hey all, is it just me, or is JIRA out of service?
Naresh Kumar
@naresh-chaudhary

Hi team, I am getting the below error after whitelisting a new table in the MySQL connector (version 1.1.Final)

org.apache.kafka.connect.errors.ConnectException: Encountered change event for table dominus.payment whose schema isn't known to this connector
    at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
    at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:207)
    at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:536)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1095)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:943)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:580)
    at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:825)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: Encountered change event for table dominus.payment whose schema isn't known to this connector
    at io.debezium.connector.mysql.BinlogReader.informAboutUnknownTableIfRequired(BinlogReader.java:794)
    at io.debezium.connector.mysql.BinlogReader.handleUpdateTableMetadata(BinlogReader.java:768)
    at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:519)
    ... 5 more

@MrTrustworthy you also faced a similar issue earlier, what was the resolution here?
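(Editor's note: the legacy MySQL connector only knows schemas recorded in its database history, so a newly whitelisted table can be missing from it. Two workarounds often suggested for that connector generation are sketched below; verify both against the documentation for your exact version before using them:)

```properties
# Option A: rebuild the schema history from the current table definitions
# (only safe if offsets are intact and no schema changes occurred since
# the last processed event):
snapshot.mode=schema_only_recovery

# Option B (legacy MySQL connector only): snapshot newly whitelisted
# tables in parallel while streaming continues for the existing ones:
snapshot.new.tables=parallel
```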

Jiri Pechanec
@jpechane
@gunnarmorling MongoDB is green - the PR is ready for review debezium/debezium#2818
Gunnar Morling
@gunnarmorling
@jpechane woohoo
will take a look tomorrow
isn't this supported for MySQL and Oracle now?
I see changes to the Oracle parser, but doc updates only for MySQL
@jpechane speaking of docs, something like ColumnImpl should show up neither in the docs of the actual option nor in the actual user docs; it's an implementation detail and no one will know what it is about
Jiri Pechanec
@jpechane
@gunnarmorling debezium/debezium#2837