Chris Cranford
@Naros
There is a situation where event 7 is a LOB operation, such as the LOB selector event that tells me which column we're about to write LOB data to.
If the result set doesn't have at least event 5 from the previous mining session in it, then LogMiner doesn't materialize the synthetic event 6, which is a dummy insert.
I say "dummy" because a single insert of a LOB field combined with non-LOB fields can show up as an INSERT, followed by the LOB selector/write events, followed by a final INSERT.
The first insert gives me all the non-LOB fields, the LOB events are for the LOBs, and the final insert gives me the state for LOB fields that are short enough that Oracle treats them as VARCHAR2 values rather than using the LOB function calls to write the data.
And the connector reads all these rows and emits a single insert event by combining all the data available across these multiple rows, which have differing SCNs, etc.
So if I tried to start from 6, it would never show up; I would only get 7, since it's not a synthetic event.
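A rough Java sketch of the row-merging idea described above, purely illustrative and not Debezium's actual code: the mined rows belonging to one logical insert are folded into a single value map, with later rows (LOB writes, the trailing INSERT) overriding earlier values. All names here are invented for the example.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of combining the INSERT + LOB selector/write + final INSERT
// rows that LogMiner returns for one logical insert into a single event payload.
public class LobRowMerge {

    // Each entry in 'rows' is the column/value map of one mined row, in SCN order.
    static Map<String, Object> mergeRows(List<Map<String, Object>> rows) {
        Map<String, Object> merged = new LinkedHashMap<>();
        for (Map<String, Object> row : rows) {
            // Later rows override earlier placeholders, e.g. EMPTY_CLOB() markers.
            merged.putAll(row);
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> initialInsert = Map.of("ID", 1, "NAME", "foo", "DOC", "EMPTY_CLOB()");
        Map<String, Object> lobWrite = Map.of("DOC", "the actual CLOB content");
        Map<String, Object> finalInsert = Map.of("ID", 1, "NAME", "foo");
        System.out.println(mergeRows(List.of(initialInsert, lobWrite, finalInsert)));
    }
}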
Chris Cranford
@Naros
Keep in mind, this only ever applies when LOB support is enabled. When it isn't, we never re-mine.
Gunnar Morling
@gunnarmorling
ah, i see
ok, but in that light, we could limit the parsing to 6 and 7 during the re-mine step, right?
Chris Cranford
@Naros
Yes.
Chris Cranford
@Naros
@gunnarmorling @jpechane So I have a validation question I'd like to get your take on.
In testing a table setup under Oracle 12 R2, if a column name is more than 30 characters long, LogMiner isn't capable of providing us an event for the operation.
I'm wondering if we should consider adding a small validation/log step like we do for table replica identity in PG where we inspect the column name lengths and if any exceed 30 characters, we warn about it in the logs.
Thoughts?
Now supposedly there is a way you can configure Oracle to support longer names, but I'll need to check to see if we can determine via SQL if that's toggled on and skip the log step.
The reason this is important is because during the mining session we get a row returned as operation type 255 (UNSUPPORTED).
We get no detail as to why, or what the event is about, other than the SCN, the transaction id, and the affected table name.
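As a point of reference, here is a minimal JDBC sketch of the kind of pre-flight check being discussed, assuming the ALL_TAB_COLUMNS dictionary view is readable by the connector user; the connection details, schema, and table names are hypothetical, and this is not Debezium's actual validation code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative check: warn about column names longer than 30 characters,
// since LogMiner may report operations on such tables as UNSUPPORTED (255).
public class ColumnNameLengthCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; requires the Oracle JDBC driver on the classpath.
        String url = "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1";
        try (Connection conn = DriverManager.getConnection(url, "c##dbzuser", "dbz");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT column_name FROM all_tab_columns "
                 + "WHERE owner = ? AND table_name = ? AND LENGTH(column_name) > 30")) {
            ps.setString(1, "DEBEZIUM");   // hypothetical schema
            ps.setString(2, "CUSTOMERS");  // hypothetical table
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println("WARN: column '" + rs.getString(1)
                        + "' exceeds 30 characters; LogMiner may emit UNSUPPORTED events for it");
                }
            }
        }
    }
}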
Gunnar Morling
@gunnarmorling
+1 for that warning
if we can issue it without too many false positives, that is
Chris Cranford
@Naros
I thought about issuing a warning during streaming when we encounter these UNSUPPORTED use cases, but since we have so little to go on as to why, I'm not sure how helpful that would be other than "hey, table xyz had an unsupported event detected".
Just doesn't seem all that helpful to me other than indicating we saw the event but we couldn't do anything about it.
Chris Cranford
@Naros

if we can issue it without too many false positives, that is

So the docs say when compatibility is set to 12.2 or higher, table/column names can be up to 128 bytes in length rather than 30 bytes.
We can check this compatibility setting via SQL but that doesn't seem to matter when looking at an Oracle 12 R2 db (my local instance is set to 12.2.0 compatibility by default).
People have reported that Streams, Xstreams, and LogMiner were not adjusted; only that users can create tables with names using the longer lengths.
I'll check Oracle 19 to be sure whether it exhibits the same limitation.
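Incidentally, the compatibility setting mentioned above can be read with a simple query, assuming the connector user has access to the dynamic performance views:

SELECT value FROM v$parameter WHERE name = 'compatible';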

Chris Cranford
@Naros
It would appear both 12 and 19 have this limitation.
Gunnar Morling
@gunnarmorling
i see
Jiri Pechanec
@jpechane
@Naros I am in for the warning. The wording should just be careful to imply there might be a problem, not that there definitely is one, and to ask the user to verify.
hkokay
@h_kokay_twitter
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:284)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:338)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:256)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:238)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1266480 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.

Can someone please tell me how to resolve this? This is coming from a Postgres source connector... Is there a configuration I can set?
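One common way to handle a RecordTooLargeException like this, assuming Kafka 2.3+ and that the sizes below suit your setup, is to raise the producer's max.request.size either on the Connect worker or per connector; the topic's max.message.bytes (and the broker's message.max.bytes) may also need to be large enough to accept the bigger records.

# Connect worker config (applies to all connectors on the worker)
producer.max.request.size=2097152

# or per connector, if the worker has connector.client.config.override.policy=All
producer.override.max.request.size=2097152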
Gunnar Morling
@gunnarmorling
hey @jpechane good morning!
I'll be 5 min late, will ping you
Jiri Pechanec
@jpechane
@gunnarmorling Good morning, ok
Gunnar Morling
@gunnarmorling
@jpechane ready :)
Gunnar Morling
@gunnarmorling
@jpechane any comments on debezium/debezium#2822
(apart from the merge conflicts)
objections to merging it?
Hossein Torabi
@blcksrx
Guys, can you help me on this? debezium/debezium#2823
I have no idea why the test failed
Gunnar Morling
@gunnarmorling
@Naros hey there
not much more missing for debezium/debezium#2817 right?
Chris Cranford
@Naros
@gunnarmorling Nope, not that I am aware of. I'll remove the logging of the values tomorrow when I'm back, and then we can merge it.
Chris Cranford
@Naros
@gunnarmorling Went ahead and sent the commit for the fix since it was real quick and I had to deal with another change as well.
I'll follow up on any other PR comments tomorrow morning when I'm back.
Gunnar Morling
@gunnarmorling
@Naros sounds good!
and sorry, didn't know you were out
hey @jpechane
can you remind me: when did we want to do the master -> main change?
after Alpha1 is out?
Jiri Pechanec
@jpechane
@gunnarmorling Hi, yes exactly
Gunnar Morling
@gunnarmorling
and alpha1 is planned for next week monday, correct, @jpechane?
Jiri Pechanec
@jpechane
@gunnarmorling Yes
Gunnar Morling
@gunnarmorling
ok, cool; I'll send out an email then to announce this change
the only real impact for folks would be to adjust their own local names to point to main rather than master
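For reference, a typical sequence for updating an existing local clone after such a rename, assuming the remote is called origin:

git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a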
Payam
@pysf
Hey everybody, I am receiving a NullPointerException on the MySQL connector, and the issue has blocked the whole process on my side. I am not sure if it is a misconfiguration or a bug. I would appreciate any comments on it.
https://issues.redhat.com/browse/DBZ-4166