    Magnus Edenhill
    @edenhill
    what exact broker version are you using?
    what kafkacat version?
    drocsid
    @drocsid
    Version 1.3.1 (JSON) (librdkafka 0.11.1 builtin.features=gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins)
    Hold on and I will try to find out the broker version....
    drocsid
    @drocsid
    Version 0.11.0.0 / Confluent 3.3.0
    Magnus Edenhill
    @edenhill
    okay, should be fine. Add -X api.version.request=true to the kafkacat cmdline just to be sure
    What's your producer?
    drocsid
    @drocsid
    kafkacat -X api.version.request=true -C -b some.kafka.cluster.net:9092 -t my-topic-name -p 0 -o -3 -e -f '%t[%p]@%o: %T\n'
     my-topic-name[0]@1141861: -1
     my-topic-name[0]@1141862: -1
     my-topic-name[0]@1141863: -1
     % Reached end of topic my-topic-name [0] at offset 1141864: exiting
    Magnus Edenhill
    @edenhill
    If the broker's log.message.timestamp.type is CreateTime but your producer does not support the new 0.10 message format, the timestamp will be -1
    So what are you using to produce the messages?
    drocsid
    @drocsid
    Hold on a second and I will get it. I didn't configure the producer...
    Magnus Edenhill
    @edenhill
    You can verify this by using kafkacat to produce messages with -X api.version.request=true (the default) and with =false, which will give you -1 timestamps
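    A quick way to reproduce this (broker, topic and partition here are placeholders): produce one message with each setting, then read the last two back with the %T timestamp token:
     $ echo "with-ts" | kafkacat -P -b mybroker:9092 -t mytopic -p 0 -X api.version.request=true
     $ echo "no-ts" | kafkacat -P -b mybroker:9092 -t mytopic -p 0 -X api.version.request=false
     $ kafkacat -C -b mybroker:9092 -t mytopic -p 0 -o -2 -e -f '%o: %T %s\n'
    The message produced with =false should show a -1 timestamp, since the old message format carries no timestamp.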
    drocsid
    @drocsid
    Thanks. I think you verified what I wanted to know. I was told that we had the timestamps, but it doesn't look like they configured their producer. I'm trying to reconfigure our Flink job, but it can't work without the timestamps.
    drocsid
    @drocsid
    Can the brokers append the timestamp to the message via configuration on their end, or does this only happen via producer configuration?
    Magnus Edenhill
    @edenhill
    Look at log.message.timestamp.type in the broker docs
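    For reference, the two possible values (shown here as a broker server.properties sketch; pick whichever fits your setup):
     log.message.timestamp.type=CreateTime      # default: timestamp comes from the producer, -1 if it sends none
     log.message.timestamp.type=LogAppendTime   # broker stamps each message when it is appended to the log
    The same setting also exists per topic as message.timestamp.type.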
    drocsid
    @drocsid
    Thanks Magnus!
    drocsid
    @drocsid
    This isn't a Kafka discussion channel, but I thought I might ask here:
    I have a Kafka topic that gives the following output from kafka-run-class.sh kafka.tools.GetOffsetShell:
     my-influx-metrics:23:416690964
     my-influx-metrics:17:416664074
     my-influx-metrics:8:3589414675
     my-influx-metrics:26:416653834
     my-influx-metrics:11:3589607915
     my-influx-metrics:29:416646566
     my-influx-metrics:2:3589397358
     my-influx-metrics:20:416652119
     my-influx-metrics:5:3589357857
     my-influx-metrics:14:3589508738
     my-influx-metrics:4:3589409201
     my-influx-metrics:13:3589413578
     my-influx-metrics:22:416674758
     my-influx-metrics:31:416672287
     my-influx-metrics:7:3589303633
     my-influx-metrics:16:416682687
     my-influx-metrics:25:416701011
     my-influx-metrics:10:3589440227
     my-influx-metrics:1:3589436352
     my-influx-metrics:19:416647920
     my-influx-metrics:28:416691744
     my-influx-metrics:9:3589381056
     my-influx-metrics:18:416692614
     my-influx-metrics:27:416685074
     my-influx-metrics:3:3589258842
     my-influx-metrics:21:416666994
     my-influx-metrics:12:3589366011
     my-influx-metrics:30:416675839
     my-influx-metrics:15:3588767963
     my-influx-metrics:6:3589419648
     my-influx-metrics:24:416605993
     my-influx-metrics:0:3589417452
    Not sure if I can check the partition offsets like that using kafkacat. My first impression is that the topic partitions aren't balanced.
    When running kafkacat -o beginning -c 3 -f '%o\n' I get something like
    1306469514
    1306469515
    1306469516
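    A rough kafkacat-only way to get per-partition tail offsets similar to the GetOffsetShell output (broker address assumed) is to read the last message of every partition and print its partition and offset:
     $ kafkacat -C -b mybroker:9092 -t my-influx-metrics -o -1 -e -f '%t:%p:%o\n'
    Each printed offset should be one less than the corresponding high watermark, assuming every partition has at least one readable message.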
    drocsid
    @drocsid
    When running kafkacat -o -3 -c 3 -f '%o\n' I get something like
    1352672313
    1352672314
    1352672315
    drocsid
    @drocsid
    Can I assume that I have about 1352672315-1306469514=46202801 messages?
    Also can I assume that a log compaction hasn't occurred?
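    One way to sanity-check that per partition (broker address is a placeholder) is to print the first and the last offset of a single partition and subtract:
     $ kafkacat -C -b mybroker:9092 -t my-influx-metrics -p 0 -o beginning -c 1 -f '%o\n'
     $ kafkacat -C -b mybroker:9092 -t my-influx-metrics -p 0 -o -1 -c 1 -f '%o\n'
    The difference (plus one) equals the number of readable messages only if no compaction or retention-based deletion has removed records in between.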
    drocsid
    @drocsid
    Finally, my highest offsets (from GetOffsetShell) seem to be much larger than the offsets above. Is it normal that reading from the end gives a lower offset than the partition offsets shown there?
    drocsid
    @drocsid
    Forgive me for my off-topic banter. It turned out that this had to do with re-partitioning and possibly the log retention policy.
    Anyhow, regarding kafkacat: the README shows an example of querying offsets by timestamp for multiple topics:
    Query offset(s) by timestamp(s): $ kafkacat -b mybroker -Q -t mytopic:3:2389238523 mytopic2:0:18921841
    I will open a pull request to fix the README
    drocsid
    @drocsid
    #127
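    Presumably the fix is that each topic needs its own -t flag, matching -Q's <topic>:<partition>:<timestamp> syntax, e.g. (names taken from the README example):
     $ kafkacat -b mybroker -Q -t mytopic:3:2389238523 -t mytopic2:0:18921841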
    ajayakumar-jayaraj
    @ajayakumar-jayaraj
    Team, I am configuring a Kafka cluster on GKE. How do I define bootstrap-servers for the consumer?
    I am able to test it fine with kafkacat.
    ajayakumar-jayaraj
    @ajayakumar-jayaraj
    I am able to consume messages internally using the --zookeeper flag but not the --bootstrap-server flag. I got stuck on this, can somebody help?
    @edenhill any thoughts
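    For comparison, a minimal sketch of the plain console consumer against a bootstrap address (host, port and topic are placeholders); both this address and the broker's advertised.listeners must be reachable from wherever the consumer runs:
     $ kafka-console-consumer.sh --bootstrap-server <broker-host>:9092 --topic mytopic --from-beginning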
    Oron Sharabi
    @oronsh
    Hello all! I have a question: if I want to read a topic from the end offset, how do I do that? Thanks a lot :)
    Magnus Edenhill
    @edenhill
    @oronsh -o latest
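    For example (broker and topic assumed), to start at the end of every partition and wait for new messages, or to read just the last 5 messages of partition 0 and exit:
     $ kafkacat -C -b mybroker:9092 -t mytopic -o latest
     $ kafkacat -C -b mybroker:9092 -t mytopic -p 0 -o -5 -e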
    Oron Sharabi
    @oronsh
    Thanks @edenhill
    Haruhiko Nishi
    @hanishi
    Hi, I am new to this room. I joined because I am facing an issue where kafkacat, accessing a topic through a NodePort from outside the Kubernetes cluster (actually it's minikube), doesn't seem to consume any records. Is setting up a NodePort for Kafka not sufficient?
    Magnus Edenhill
    @edenhill
    Haruhiko Nishi
    @hanishi
    @edenhill Thank you for the pointer. Kafkacat has become an indispensable tool when working with Kafka topics, btw. Thank you!
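    For anyone hitting the same issue: the broker has to advertise an address the external client can actually reach, otherwise the initial connection succeeds but fetching stalls. A sketch of the usual listener setup (all values are illustrative only):
     listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
     advertised.listeners=INTERNAL://kafka.default.svc.cluster.local:9092,EXTERNAL://<minikube-ip>:<nodeport>
     listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
     inter.broker.listener.name=INTERNAL
    kafkacat is then pointed at the external address: kafkacat -C -b <minikube-ip>:<nodeport> -t mytopic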
    Magnus Edenhill
    @edenhill
    Glad to hear it!
    sharonsyra
    @Sharonsyra
    Hi all,
    I am trying to create a command that will enable me to see messages from offset m to n. I am able to get the first x messages.
    First - kafkacat -C -b -t topic -o earliest -c X
    Last - kafkacat -C -b -t topic -p 0 -o X
    Next - kafkacat -C -b -t topic -p 0 -o offset -c X
    Any ideas? I would appreciate the help.
    Magnus Edenhill
    @edenhill
    how about -o START_OFFSET -c END_OFFSET-START_OFFSET
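    A worked example of that with placeholder names: to read 100 messages starting at offset 100 (i.e. offsets 100 through 199) of partition 0 and then exit:
     $ kafkacat -C -b mybroker:9092 -t mytopic -p 0 -o 100 -c 100 -e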
    sharonsyra
    @Sharonsyra
    I will try that, thank you. Yeah, I was thinking something along those lines: ensuring m is always present when n is given (m -> n), then getting the difference between the two and using that as the count value. My bad. Thank you for your time. :slight_smile:
    Yannick Koechlin
    @yannick
    There is no support for the topic admin API (deleting topics) yet, right?
    Magnus Edenhill
    @edenhill
    @yannick Unfortunately no
    matrixbot
    @matrixbot
    @julius:mtx.liftm.de edenhill: I'm curious what you think of https://github.com/jcaesar/kafkacat-static — should I try and get that merged into the main repo?
    Magnus Edenhill
    @edenhill
    I'm not too fond of introducing and maintaining a new build system (meson). There's also some recent work in mklove and librdkafka (not yet merged to master) that will make static building easier
    matrixbot
    @matrixbot
    @julius:mtx.liftm.de kk. Not terribly difficult to maintain it outside until that makes it, so I'll just keep doing that.
    sharonsyra
    @Sharonsyra

    Hi folks,
    I want to implement the high-level consumer group feature in kafkacat.
    This is my command: kafkacat -b ${KAFKA_HOST}:${KAFKA_PORT} ${KAFKA_CAT_OPTS} -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN -X sasl.username=${KAFKA_API_KEY} -X sasl.password=${KAFKA_API_SECRET} -X api.version.request=true -b $Instance -t $TOPIC
    I, however, keep getting this error: % ERROR: Failed to subscribe to 0 topics: Local: Invalid argument or configuration
    At first I thought the cause was that the topic I was testing with had only one partition, and it was failing because I was trying to add another consumer to the consumer group. I then tested with a topic with more partitions; the same thing happened. What configuration am I getting wrong? Where am I going wrong?
    Thank you.

    sharonsyra
    @Sharonsyra
    I see where I was going wrong :smile:. It expects a list of topics. No need for the -t argument.
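    For reference, a sketch of the balanced (high-level) consumer group invocation with placeholder names: the group id follows -G and the topics are listed at the end, without -t:
     $ kafkacat -b mybroker:9092 -G mygroup mytopic1 mytopic2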