Chris Cranford
@Naros
Thoughts?
Now supposedly there is a way you can configure Oracle to support longer names, but I'll need to check whether we can determine via SQL if that's toggled on, and skip the log step if so.
The reason this is important is because during the mining session we get a row returned as operation type 255 (UNSUPPORTED).
We get no detail as to why, or what the event is about, other than the SCN, the transaction id, and the affected table name.
Gunnar Morling
@gunnarmorling
+1 for that warning
if we can issue it without too many false positives, that is
Chris Cranford
@Naros
I thought about issuing a warning during streaming when we encounter these UNSUPPORTED use cases, but since we have so little to go on as to why, I'm not sure how helpful that would be other than "hey, table xyz had an unsupported event detected"
Just doesn't seem all that helpful to me other than indicating we saw the event but we couldn't do anything about it.
Chris Cranford
@Naros

if we can issue it without too many false positives, that is

So the docs say when compatibility is set to 12.2 or higher, table/column names can be up to 128 bytes in length rather than 30 bytes.
We can check this compatibility setting via SQL but that doesn't seem to matter when looking at an Oracle 12 R2 db (my local instance is set to 12.2.0 compatibility by default).
People have reported that Streams, Xstreams, and LogMiner were not adjusted; only that users can create tables with names using the longer lengths.
I'll check Oracle 19 to be sure whether it exhibits the same limitation.
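A minimal sketch of that compatibility check (v$parameter is the standard Oracle view for this; the exact query is illustrative):

    -- Returns the database compatibility setting, e.g. "12.2.0"
    SELECT name, value
      FROM v$parameter
     WHERE name = 'compatible';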

Chris Cranford
@Naros
It would appear both 12 and 19 have this limitation.
Gunnar Morling
@gunnarmorling
i see
Jiri Pechanec
@jpechane
@Naros I am in for the warning. Just the wording should be careful to imply there might be a problem, not that there is one, and ask the user to verify.
hkokay
@h_kokay_twitter
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
    at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:284)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:338)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:256)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:238)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1266480 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.

Can someone please tell me how to resolve this? It is coming from a Postgres source connector... Is there a configuration I can set?
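For reference, the limit named in that exception is the Kafka producer's max.request.size. One hedged sketch of a fix, assuming the Connect worker allows per-connector client overrides (the override policy and the 2 MB value here are illustrative, not a confirmed setup):

    # Worker config (e.g. connect-distributed.properties): permit connectors
    # to override producer/consumer client settings
    connector.client.config.override.policy=All

    # Connector config: raise the producer limit above the ~1.27 MB record
    producer.override.max.request.size=2097152

Note that the broker's message.max.bytes and a topic's max.message.bytes impose their own caps, so a record larger than those would still be rejected on the broker side.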
Gunnar Morling
@gunnarmorling
hey @jpechane good morning!
I'll be 5 min late, will ping you
Jiri Pechanec
@jpechane
@gunnarmorling Good morning, ok
Gunnar Morling
@gunnarmorling
@jpechane ready :)
Gunnar Morling
@gunnarmorling
@jpechane any comments on debezium/debezium#2822?
(apart from the merge conflicts)
objections to merging it?
Hossein Torabi
@blcksrx
Guys, can you help me on this? debezium/debezium#2823
I have no idea why the test failed
Gunnar Morling
@gunnarmorling
@Naros hey there
not much more missing for debezium/debezium#2817, right?
Chris Cranford
@Naros
@gunnarmorling Nope, not that I'm aware of. I'll remove the logging of the values tomorrow when I'm back and then we can merge it.
Chris Cranford
@Naros
@gunnarmorling Went ahead and sent the commit for the fix since it was real quick and I had to deal with another change as well.
I'll follow up on any other PR comments tomorrow morning when I'm back.
Gunnar Morling
@gunnarmorling
@Naros sounds good!
and sorry, didn't know you were out
hey @jpechane
can you remind me: when did we want to do the master -> main change?
after Alpha1 is out?
Jiri Pechanec
@jpechane
@gunnarmorling Hi, yes exactly
Gunnar Morling
@gunnarmorling
and alpha1 is planned for next week monday, correct, @jpechane?
Jiri Pechanec
@jpechane
@gunnarmorling Yes
Gunnar Morling
@gunnarmorling
ok, cool; I'll send out an email then to announce this change
the only real impact for folks would be adjusting their own local branches to point to main rather than master
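For anyone updating a local clone after the rename, the usual steps look roughly like this (standard git commands; the remote name origin is assumed):

    git branch -m master main          # rename the local branch
    git fetch origin                   # pick up the renamed remote branch
    git branch -u origin/main main     # re-point the upstream tracking branch
    git remote set-head origin -a      # refresh what origin/HEAD points to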
Payam
@pysf
Hey everybody, I am receiving a NullPointerException on the MySQL connector. The issue has blocked the whole process on my side. I am not sure if it is a misconfiguration or a bug; I would appreciate any comments on that.
https://issues.redhat.com/browse/DBZ-4166
Chris Cranford
@Naros
Hi @pysf you mention that another table works, does the other table have a _ in its name?
Payam
@pysf
@Naros Yes the other table name is cached_products.
Gunnar Morling
@gunnarmorling
@Naros hey; can you send a reminder to the ML about the chat room change scheduled for tomorrow (as a reply to your original one)
Chris Cranford
@Naros
@gunnarmorling Yep, was going to do that around mid-day but I can do it now if you'd like.
Hi @pysf, my apologies, I didn't see that you added the full log to the issue. Okay, I see the problem: the parsing error allows the connector to proceed due to database.history.skip.unparseable.ddl=true, so we need to fix the grammar bug you've illustrated, thanks!
That's why you get the NPE
I don't believe that's normally a problem during streaming, but I don't think we anticipated CREATE TABLE syntax problems during snapshot like we do during streaming.
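For context, the option mentioned above is a regular Debezium MySQL connector setting; a hedged fragment showing the trade-off between the two values:

    # Debezium MySQL connector config fragment (illustrative):
    # true  - skip DDL the parser cannot handle (as in this report; the table
    #         then stays unknown to the schema history and can NPE later)
    # false - fail fast on the parsing error instead of proceeding
    database.history.skip.unparseable.ddl=false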
Gunnar Morling
@gunnarmorling
@Naros any time which suits you
thx!
Payam
@pysf
@Naros Thank you so much! Please let me know if you need logs on different levels (Trace, Debug) or other details.
I also attached the docker-compose.yml file to the issue.
Chris Cranford
@Naros
Thanks @pysf, I believe we have all we need from what you've provided but if that changes we'll let you know on the jira. Thanks for the report!
Gunnar Morling
@gunnarmorling
all, updated the roadmap a bit for Debezium 1.8: https://twitter.com/gunnarmorling/status/1450734810402537475
Anisha Mohanty
@ani-sha
Hey all, is it just me, or is JIRA out of service?