kafkacat -C -b <broker> -t <topic> -o beginning -c X
kafkacat -C -b <broker> -t <topic> -p 0 -o X
kafkacat -C -b <broker> -t <topic> -p 0 -o <offset> -c X
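For example, a minimal sketch assuming a broker at localhost:9092 and a topic named mytopic (both placeholders): the first command prints the first 5 messages of partition 0 and exits, the second uses a negative offset to print the last 5 messages and exit at end of partition.

kafkacat -C -b localhost:9092 -t mytopic -p 0 -o beginning -c 5
kafkacat -C -b localhost:9092 -t mytopic -p 0 -o -5 -e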
@julius:mtx.liftm.de
edenhill: I'm curious what you think of https://github.com/jcaesar/kafkacat-static — should I try and get that merged into the main repo?
Hi folks,
I want to use the high-level consumer group feature in kafkacat.
This is my command:
kafkacat -b ${KAFKA_HOST}:${KAFKA_PORT} ${KAFKA_CAT_OPTS} -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN -X sasl.username=${KAFKA_API_KEY} -X sasl.password=${KAFKA_API_SECRET} -X api.version.request=true -b $Instance -t $TOPIC
However, I keep getting this error:
% ERROR: Failed to subscribe to 0 topics: Local: Invalid argument or configuration
At first I thought the cause was that the topic I was testing with had only one partition, and it was failing because I was trying to add another consumer to the consumer group. I then tested with a topic that has more partitions, and the same thing happened. What configuration am I getting wrong? Where am I going wrong?
Thank you.
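(For reference, a minimal sketch of kafkacat's balanced consumer mode, in case it helps: -G takes the group id, and the topics are given last as positional arguments rather than via -t. Broker, group, and topic names here are placeholders; the SASL -X settings are kept from the command above.)

kafkacat -b ${KAFKA_HOST}:${KAFKA_PORT} -G mygroup \
  -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN \
  -X sasl.username=${KAFKA_API_KEY} -X sasl.password=${KAFKA_API_SECRET} \
  mytopic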
Configuration property auto.commit.enable is deprecated: [**LEGACY PROPERTY:** This property is used by the simple legacy consumer only. When using the high-level KafkaConsumer, the global `enable.auto.commit` property must be used instead]. If true, periodically commit offset of the last message handed to the application. This committed offset will be used when the process restarts to pick up where it left off. If false, the application will have to call `rd_kafka_offset_store()` to store an offset (optional). **NOTE:** There is currently no zookeeper integration, offsets will be written to broker or local file according to offset.store.method.
% Group mygroup2 rebalanced (memberid rdkafka-e0855e1f-26cf-406a-8f9f-ad4df088fba8): assigned: my-topic [0]
`enable.auto.commit`. I am setting that one!
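For instance, a hedged sketch of passing that global property in balanced-consumer mode (broker, group, and topic are placeholders):

kafkacat -b localhost:9092 -G mygroup -X enable.auto.commit=false mytopic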
Try `-d security,broker` for more info.
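e.g., a sketch with a placeholder broker, using -L so a metadata request exercises the broker connection while the debug output is printed:

kafkacat -b localhost:9092 -L -d security,broker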
Hi All,
I was digging through the documentation to find a way to dump per-topic CPU and RAM utilization stats, but I am only finding the "Per-Topic metric" described here:
https://docs.confluent.io/current/kafka/monitoring.html
Any suggestions on how to extract this information, or ideas on how to calculate it yourself if you know the formula behind the listed metrics?
Regards ;)
`get_watermark_offsets`, and commit the partitions synchronously. That's working fine, but I'm finding that tests are unfortunately still consuming messages from previous tests. If anyone has any guidance on writing this kind of consumer-driven test while avoiding repeated messages and other race conditions, please do let me know :(
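One sketch that may help, illustrated here with kafkacat rather than the client API, under the assumption that each test only cares about messages produced during that test: give every run a fresh group id and start from the latest offsets, so messages from previous tests are never delivered (broker and topic are placeholders):

kafkacat -b localhost:9092 -G "test-$(date +%s)" -X auto.offset.reset=latest mytopic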