Chris Cranford
@Naros
Just doesn't seem all that helpful to me other than indicating we saw the event but we couldn't do anything about it.
Chris Cranford
@Naros

if we can issue it without too many false positives, that is

So the docs say that when compatibility is set to 12.2 or higher, table/column names can be up to 128 bytes in length rather than 30 bytes.
We can check this compatibility setting via SQL, but that doesn't seem to matter when looking at an Oracle 12c R2 database (my local instance is set to 12.2.0 compatibility by default).
People have reported that Oracle Streams, XStream, and LogMiner were not adjusted; only that users can create tables with the longer names.
I'll check Oracle 19 to be sure whether it exhibits the same limitation.
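For reference, the compatibility check mentioned above can be done with a query along these lines (a minimal sketch; it assumes access to the V$PARAMETER view, and may not be the exact query used here):

    -- Returns e.g. 12.2.0; a value of 12.2 or higher is what enables
    -- 128-byte identifiers for newly created objects
    SELECT name, value
      FROM v$parameter
     WHERE name = 'compatible';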

Chris Cranford
@Naros
It would appear both 12 and 19 have this limitation.
Gunnar Morling
@gunnarmorling
i see
Jiri Pechanec
@jpechane
@Naros I'm in for the warning. The wording should just be careful to imply there might be a problem, not that there is a problem, and should ask the user to verify.
hkokay
@h_kokay_twitter
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:284)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:338)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:256)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:238)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1266480 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.

Can someone please tell me how to resolve this? This is coming from a Postgres source connector... Is there a configuration I can set?
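(One commonly suggested fix, sketched here under the assumption that the defaults are in play rather than as a confirmed resolution: raise the producer's max.request.size for this connector. On Kafka Connect 2.3+ the worker must first be configured to allow per-connector client overrides.)

    # Worker configuration: permit per-connector client overrides
    connector.client.config.override.policy=All

    # Connector configuration: allow requests up to ~2 MB
    # (choose a value larger than the biggest serialized record)
    producer.override.max.request.size=2097152

Note that the broker/topic limit (message.max.bytes on the broker, max.message.bytes on the topic) may need a matching increase, otherwise the broker will reject the larger requests.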
Gunnar Morling
@gunnarmorling
hey @jpechane good morning!
I'll be 5 min late, will ping you
Jiri Pechanec
@jpechane
@gunnarmorling Good morning, ok
Gunnar Morling
@gunnarmorling
@jpechane ready :)
Gunnar Morling
@gunnarmorling
@jpechane any comments on debezium/debezium#2822
(apart from the merge conflicts)
objections to merging it?
Hossein Torabi
@blcksrx
Guys, can you help me on this? debezium/debezium#2823
I have no idea why the test failed
Gunnar Morling
@gunnarmorling
@Naros hey there
not much more is missing for debezium/debezium#2817, right?
Chris Cranford
@Naros
@gunnarmorling Nope, not that I'm aware of. I'll remove the logging of the values tomorrow when I'm back, and then we can merge it.
Chris Cranford
@Naros
@gunnarmorling Went ahead and sent the commit for the fix since it was real quick and I had to deal with another change as well.
I'll follow-up on any other PR comments tomorrow morning when I'm back.
Gunnar Morling
@gunnarmorling
@Naros sounds good!
and sorry, didn't know you were out
hey @jpechane
can you remind me: when did we want to do the master -> main change?
after Alpha1 is out?
Jiri Pechanec
@jpechane
@gunnarmorling Hi, yes exactly
Gunnar Morling
@gunnarmorling
and Alpha1 is planned for Monday next week, correct, @jpechane?
Jiri Pechanec
@jpechane
@gunnarmorling Yes
Gunnar Morling
@gunnarmorling
ok, cool; I'll send out an email then to announce this change
the only real impact for folks would be adjusting their own local clones to point to main rather than master
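For a typical local clone, the switch would look roughly like this (a sketch; it assumes origin points at the upstream repo and that the rename has already happened there):

    git fetch origin
    git branch -m master main          # rename the local branch
    git branch -u origin/main main     # track the new upstream branch
    git remote set-head origin -a      # refresh origin/HEAD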
Payam
@pysf
Hey everybody, I'm getting a NullPointerException from the MySQL connector, and the issue has blocked the whole process on my side. I'm not sure if it's a misconfiguration or a bug; I'd appreciate any comments on it.
https://issues.redhat.com/browse/DBZ-4166
Chris Cranford
@Naros
Hi @pysf, you mention that another table works; does the other table have a _ in its name?
Payam
@pysf
@Naros Yes the other table name is cached_products.
Gunnar Morling
@gunnarmorling
@Naros hey; can you send a reminder to the ML about the chat room change scheduled for tomorrow (as a reply to your original one)
Chris Cranford
@Naros
@gunnarmorling Yep, was going to do that around mid-day but I can do it now if you'd like.
Hi @pysf, my apologies, I didn't see that you'd added the full log to the issue. Okay, I see the problem: the parsing error allows the connector to proceed because database.history.skip.unparseable.ddl=true, so we need to fix the grammar bug you've illustrated. Thanks!
That's why you get the NPE.
I don't believe that's normally a problem during streaming, but I don't think we anticipated CREATE TABLE syntax problems during the snapshot the way we do during streaming.
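(For context, the setting in question, shown as a minimal sketch rather than the reporter's actual configuration:)

    # When true, the connector logs and skips DDL statements it cannot parse;
    # a skipped CREATE TABLE leaves that table's schema unknown, which can
    # surface later as an NPE when change events for the table arrive.
    database.history.skip.unparseable.ddl=true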
Gunnar Morling
@gunnarmorling
@Naros any time which suits you
thx!
Payam
@pysf
@Naros Thank you so much! Please let me know if you need logs at different levels (TRACE, DEBUG) or other details.
I also attached the docker-compose.yml file to the issue.
Chris Cranford
@Naros
Thanks @pysf, I believe we have all we need from what you've provided but if that changes we'll let you know on the jira. Thanks for the report!
Gunnar Morling
@gunnarmorling
all, updated the roadmap a bit for Debezium 1.8: https://twitter.com/gunnarmorling/status/1450734810402537475
Anisha Mohanty
@ani-sha
Hey all, is it just me, or is JIRA out of service?
Naresh Kumar
@naresh-chaudhary

Hi team, I am getting the below error after whitelisting a new table in the MySQL connector (version 1.1.Final)

org.apache.kafka.connect.errors.ConnectException: Encountered change event for table dominus.payment whose schema isn't known to this connector
    at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
    at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:207)
    at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:536)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1095)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:943)
    at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:580)
    at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:825)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: Encountered change event for table dominus.payment whose schema isn't known to this connector
    at io.debezium.connector.mysql.BinlogReader.informAboutUnknownTableIfRequired(BinlogReader.java:794)
    at io.debezium.connector.mysql.BinlogReader.handleUpdateTableMetadata(BinlogReader.java:768)
    at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:519)
    ... 5 more

@MrTrustworthy you also faced a similar issue earlier; what was the resolution here?
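(Two settings that often come up for this error, listed here as assumptions about the legacy MySQL connector implementation used in 1.1, not as the resolution actually given in this thread:)

    # Snapshot newly whitelisted tables in parallel with streaming,
    # so their schema becomes known to the connector
    snapshot.new.tables=parallel

    # Or relax how schema inconsistencies are handled (default is fail)
    inconsistent.schema.handling.mode=warn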

Jiri Pechanec
@jpechane
@gunnarmorling MongoDB is green, the PR is ready for review: debezium/debezium#2818
Gunnar Morling
@gunnarmorling
@jpechane woohoo
will take a look tomorrow
isn't this supported for MySQL and Oracle now?
I see changes to the Oracle parser, but doc updates only for MySQL