    Marwan Ghalib
    @marwanghalib
    Also, @Naros, while I have you here: I wanted to confirm the format of the schema.whitelist identifier. I know for table.whitelist it is 'schemaName.tableName'.
    Natacha Crooks
    @nacrooks

    @Chris: is there any chance, while you're here, that you could help me with my own transaction metadata question? Specifically, am I doing something wrong? When I specify the Avro converter for KEY_CONVERTER/VALUE_CONVERTER, I get this Kafka Connect error:

    connect_1 | Caused by: org.apache.avro.AvroRuntimeException: Not a record: ["null",{"type":"record","name":"ConnectDefault","namespace":"io.confluent.connect.avro","fields":[{"name":"id","type":"string"},{"name":"total_order","type":"long"},{"name":"data_collection_order","type":"long"}]}]
    connect_1 | at org.apache.avro.Schema.getFields(Schema.java:220)
    connect_1 | at org.apache.avro.data.RecordBuilderBase.<init>(RecordBuilderBase.java:54)
    connect_1 | at org.apache.avro.generic.GenericRecordBuilder.<init>(GenericRecordBuilder.java:38)
    connect_1 | at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:600)
    connect_1 | at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:606)
    connect_1 | at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:365)

    Chris Cranford
    @Naros
    @marwanghalib For schema.whitelist it's simply the schema name, e.g. schemaA,schemaB (see the example config below).
    @nacrooks I'm not aware of any limitations with Avro + schema registry in conjunction with the transaction metadata topic stream, but that's not to say there couldn't be a bug somewhere.
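    For reference, a minimal sketch of how those two options might look side by side in a connector config; the schema and table names here are made-up placeholders, not values from the chat:
    "schema.whitelist": "schemaA,schemaB",
    "table.whitelist": "schemaA.tableOne,schemaB.tableTwo"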
    Natacha Crooks
    @nacrooks
    (I updated my message with the exact error)
    Chris Cranford
    @Naros
    @nacrooks Which connector are you using?
    Karthik
    @unarmedcivilian
    Hello, I was wondering if it's possible to verify that no messages were missed while reading from the WAL (PostgreSQL).
    Natacha Crooks
    @nacrooks

    @Naros, these are the options that make it go wrong:
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
    It works without these options.
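    As a point of reference, a minimal sketch of a Postgres connector registration that would combine the transaction metadata topic with these converters might look like the following; the connector name, the database settings, and the use of provide.transaction.metadata to enable the metadata topic are assumptions, not details given in the chat:
    {
      "name": "inventory-connector",
      "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "postgres",
        "database.password": "postgres",
        "database.dbname": "postgres",
        "database.server.name": "dbserver1",
        "provide.transaction.metadata": "true",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url": "http://schema-registry:8081",
        "value.converter.schema.registry.url": "http://schema-registry:8081"
      }
    }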
    Karthik
    @unarmedcivilian
    @unarmedcivilian What I mean is whether it's possible to verify, using the LSN or any other metadata, that all changes written to the WAL have been processed.
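    For what it's worth, each Debezium Postgres change event already carries the WAL position it came from in its source block; a trimmed excerpt (with invented placeholder values) looks roughly like the fragment below, and the lsn/txId fields are the metadata the question refers to:
    "source": {
      "version": "1.1.0.Final",
      "connector": "postgresql",
      "name": "dbserver1",
      "db": "postgres",
      "schema": "public",
      "table": "my_table",
      "txId": 555,
      "lsn": 24023128
    }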
    Chris Cranford
    @Naros
    @nacrooks What version of Debezium are you presently using, 1.1.0.Final?
    Natacha Crooks
    @nacrooks
    the docker image debezium/1.1
    Chris Cranford
    @Naros
    @nacrooks Give me a few minutes to set up a test and see. I did verify that running the SourceRecord through the AvroConverter in our transaction metadata tests passes just fine, so I'm not entirely sure why there would be this incompatibility.
    Natacha Crooks
    @nacrooks
    @naros: Thank you so much! I am using FROM confluentinc/cp-schema-registry for my schema registry (this is the only non-Debezium image I'm using).
    Chris Cranford
    @Naros
    @nacrooks I can reproduce it, I'll log an issue for it and attach my docker files.
    Natacha Crooks
    @nacrooks
    Thank you @naros!
    Chris Cranford
    @Naros
    @nacrooks Opened https://issues.redhat.com/browse/DBZ-1915, we'll take a look at this asap.
    @nacrooks Feel free to update that issue with any other pertinent details you believe are valuable that I might have missed.
    Jeremy Finzel
    @jfinzel
    anyone able to get this datatype.propagate.source.type to work? I tried even with a brand new table - still no good
    oh........
    I think I see the issue
    Chris Cranford
    @Naros
    @jfinzel I believe the length/scale you're after should be in the parameters section of the struct
    Jeremy Finzel
    @jfinzel
    @Naros it is not showing up anywhere in the struct.
    I also thought maybe I had misconfigured it by only putting numeric, but I have now also tried dbname.pg_catalog.numeric and that still did not work.
    Chris Cranford
    @Naros
    @jfinzel Honestly we probably should have a test for this feature in every connector, it seems DBZ-1830 only contributed a test in the MySQL connector.
    That said, the regex used in that test appears to be .+\\.FLOAT, for example; perhaps you could try .+\\.NUMERIC?
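    As a concrete sketch of that suggestion (the option name and the regex come from this thread; placing it alongside the other connector options is the only assumption):
    "datatype.propagate.source.type": ".+\\.NUMERIC"
    With a match, the original type, length, and scale should then show up in the parameters section of the field's schema, as mentioned above.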
    Jeremy Finzel
    @jfinzel
    oh sheesh - I requested it originally for the postgres one :)
    I will try
    is it case-sensitive??
    Chris Cranford
    @Naros
    No, it's a case-insensitive predicate, so it shouldn't matter.
    Jeremy Finzel
    @jfinzel
    I'm trying that now
    Chris Cranford
    @Naros
    And regarding Postgres, the code was implemented fine in core; it's just that we should have test coverage across all connectors.
    That helps make sure we don't break anything, and that things "just work" when we introduce a new DB version.
    It also helps identify any per-connector nuances that should be documented, should they exist.
    And lastly, the tests give a simple example of how to use the feature, both for users and for devs who may not otherwise have worked on it.
    Jeremy Finzel
    @jfinzel
    @Naros it worked!
    using .+\\.NUMERIC
    Chris Cranford
    @Naros
    Awesome. If you believe the docs need some adjustments on this, feel free to update https://issues.redhat.com/browse/DBZ-1916 so we can get it fully clarified and avoid confusion.
    Jeremy Finzel
    @jfinzel
    @Naros thanks I will consider it
    this feature is a big help for us
    Chris Cranford
    @Naros
    @jfinzel I'm just glad we found how to make it work for you.
    Jeremy Finzel
    @jfinzel
    Is it typical to get 500 timeouts when issuing REST calls to Kafka Connect? I am routinely having to retry.
    It could be our load balancer
    Chris Cranford
    @Naros
    Not that I am aware of, but it most likely sounds like some infrastructure issue.
    Jeremy Finzel
    @jfinzel
    @Naros I am still not sure why my regex did not work. I just tried lowercase .+\\.numeric. I can't understand, though, why load_test_dev.pg_catalog.numeric does not work... maybe another test is needed?
    sorry - to clarify - lowercase DID work
    but my original fully qualified did not work
    FWIW, I don't care much actually, because .+\\.numeric is DRYer.
    Chris Cranford
    @Naros
    You probably need to use load_test_dev\\.pg_catalog\\.numeric since . is a regex reserved symbol?
    That's a hunch but I haven't looked at the code impl to know for sure.
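    Assuming the connector config is JSON as earlier in this thread, the fully-qualified pattern with the dots escaped would look something like the line below (the database and schema names come from the chat; wiring them up this way is just a sketch of Chris's hunch):
    "datatype.propagate.source.type": "load_test_dev\\.pg_catalog\\.numeric"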
    Yosafat Vincent Saragih
    @Yosafat1997
    I have this error while running my workers:
    [2020-03-31 21:56:19,107] INFO For table 'public.xxx' using select statement: 'SELECT * FROM "public"."xxx"' (io.debezium.relational.RelationalSnapshotChangeEventSource:287)
    [2020-03-31 21:56:19,290] INFO WorkerSourceTask{id=pg_dev_xxx-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:416)
    [2020-03-31 21:56:19,290] ERROR Invalid call to OffsetStorageWriter flush() while already flushing, the framework should not allow this (org.apache.kafka.connect.storage.OffsetStorageWriter:109)
    [2020-03-31 21:56:19,291] ERROR WorkerSourceTask{id=pg_dev_xxx-0} Unhandled exception when committing: (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:119)
    org.apache.kafka.connect.errors.ConnectException: OffsetStorageWriter is already flushing
        at org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:111)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:428)
        at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter.commit(SourceTaskOffsetCommitter.java:111)
        at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter.access$000(SourceTaskOffsetCommitter.java:46)
        at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter$1.run(SourceTaskOffsetCommitter.java:84)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    Does anyone know this problem? After it occurs, I get errors like this:
    [2020-03-31 21:58:45,982] ERROR WorkerSourceTask{id=pg_dev_xxx-0} Failed to flush, timed out while waiting for producer to flush outstanding 2 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:438)
    [2020-03-31 21:58:45,982] INFO [Producer clientId=connector-producer-pg_dev_xxx-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1183)
    Yosafat Vincent Saragih
    @Yosafat1997
    And after that I got multiple errors like this:
     [2020-04-01 10:23:14,054] ERROR WorkerSourceTask{id=pg_dev_xxx-0} failed to send record to pj_partner_upload_log: (org.apache.kafka.connect.runtime.WorkerSourceTask:347)
    org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept
    but there's no producer max-bytes error any more. Does the error come from the Kafka consumer, or from max_message_byte?
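    The thread doesn't show a resolution here, but as a hedged sketch: RecordTooLargeException on the produce side is usually governed by the topic's max.message.bytes and the producer's max.request.size, and if the Connect worker permits connector-level client overrides, the producer side could be raised per connector with something like the fragment below (the value is a placeholder, not a recommendation from the chat):
    "producer.override.max.request.size": "10485760"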