ramsanka
@ramsanka
Is there any utility to flush messages in the Kafka broker, particularly if we are hitting Kafka lag? The system is stuck and unable to move as there are far too many messages in the broker.
juan
@_danielejuan_twitter
Hi edenhill, we are trying to implement an at-most-once consumer by doing a commitSync for every message consumed. We would like to do this rather than using the auto-commit interval, because we wouldn't want to re-consume messages that were already consumed if a crash occurs. In our testing with 3 brokers, RF 2, and 6 partitions, we are getting ~100 ms per commitSync. Do you have recommendations on how to improve this latency?
manoj2git
@manoj2git

Hi, what configuration settings are needed at the producer end for high throughput? I have written a producer using the C API but am not getting the targeted throughput.

Producer cycle: construct record, produce record to queue, poll after queuing batch.num.messages records for delivery status, and finally flush the queue with a 10-second wait.
Producer settings: batch.num.messages=40, idempotence, with delivery and event callbacks; message size 4 KB.

Achieved throughput: 200 msg/sec (0.8 MB/sec). Is this OK or can it be improved?

Observations: the queueing rate is 2x higher than delivery to the broker (1000 msgs queued, only < 500 delivered), and some messages (200) are being lost, not delivered.
Also, when increasing the batch size above 40, the client reports a timeout error.

Kindly advise the correct settings for high throughput, and also enlighten me about poll:
is a non-blocking poll necessary after each send, or at an interval, or after enqueueing all records? How will it behave at each point?
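By way of a sanity check on the numbers in the question above, and a sketch of the batching knobs that usually govern librdkafka producer throughput (the property names are real librdkafka configuration properties; the values are illustrative assumptions, not tested recommendations):

```python
# Illustrative high-throughput producer settings (librdkafka property names;
# the values here are assumptions to tune for your workload, not a recipe).
high_throughput_conf = {
    "linger.ms": 100,                    # wait up to 100 ms to fill larger batches
    "batch.num.messages": 10000,         # librdkafka's default; 40 is very small
    "queue.buffering.max.messages": 100000,
    "compression.type": "lz4",
    "enable.idempotence": True,
}

# Sanity-check the reported throughput: 200 msg/s at 4 KiB per message.
msgs_per_sec = 200
msg_size_bytes = 4 * 1024
mb_per_sec = msgs_per_sec * msg_size_bytes / (1024 * 1024)
print(f"{mb_per_sec:.2f} MB/s")  # ~0.78 MB/s, matching the reported ~0.8 MB/s
```

The dict is in the shape accepted by the confluent-kafka Python binding, but the same keys apply to `rd_kafka_conf_set()` in the C API.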

Martin Jenkins
@_mlvj__twitter

@edenhill Hi! I have a working (on Ubuntu 20.04) librdkafka installation, which also works on one RedHat box, but not another one, where so far I have failed to spot the difference.
The problem on the non-working one is what seems to be the classic

RDKAFKA -181: sasl_ssl://b-2.kafka-<sensitivestuff>.amazonaws.com:9096/bootstrap: Failed to verify broker certificate: certificate signature failure

...but it works just fine on another machine - and I have not installed certificates.

I tried to turn off certificate validation, but on both boxes I get

Error configuring 'enable.ssl.certificate.verification' to 'false': No such configuration property: "enable.ssl.certificate.verification"

Note that on the RedHat box, I think 6.10, the version of librdkafka is 0.11.5, and the version of OpenSSL is "librdkafka built with OpenSSL version 0x100020bf "

SO

  1. firstly, do you know why I can't configure certificate verification to disabled? Is it the version of librdkafka, or the version of openssl? If librdkafka, what version is required?
  2. any ideas about what to look for, to see how this is working on one machine, and not another? I have set log level to maximum (all), but can't see anything particularly obvious
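As an aside on the version question: `enable.ssl.certificate.verification` was introduced in librdkafka v1.0.0, so a 0.11.5 build will reject it regardless of the OpenSSL version. A minimal config fragment, assuming an upgraded librdkafka (the CA path is an illustrative RHEL-style default, not a verified location):

```
# Disabling verification requires librdkafka >= 1.0.0 (not available in 0.11.5)
enable.ssl.certificate.verification=false

# Preferred alternative: point the client at the CA bundle that signed the broker cert
ssl.ca.location=/etc/pki/tls/certs/ca-bundle.crt
```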
Martin Jenkins
@_mlvj__twitter

@edenhill Here are the logs... still pointing to one having a cert issue, with the other not, and both deployed automatically

Working

RDKAFKA-7-CONNECT: rdkafka#producer-1: [thrd:sasl_ssl://kafka-server.]: sasl_ssl://kafka-server.amazonaws.com:9096/bootstrap: Connected to ipv4#x.x.x.x:9096
RDKAFKA-7-SSLVERIFY: rdkafka#producer-1: [thrd:sasl_ssl://kafka-server.]: sasl_ssl://kafka-server.amazonaws.com:9096/bootstrap: Broker SSL certificate verified
RDKAFKA-7-CONNECTED: rdkafka#producer-1: [thrd:sasl_ssl://kafka-server.]: sasl_ssl://kafka-server.amazonaws.com:9096/bootstrap: Connected (#1)

Not working

RDKAFKA-7-CONNECT: rdkafka#producer-1: [thrd:sasl_ssl://kafka-server.]: sasl_ssl://kafka-server.amazonaws.com:9096/bootstrap: Connected to ipv4#x.x.x.x:9096
RDKAFKA-7-BROKERFAIL: rdkafka#producer-1: [thrd:sasl_ssl://kafka-server.]: sasl_ssl://kafka-server.amazonaws.com:9096/bootstrap: failed: err: Local: SSL error: (errno: Success)
RDKAFKA-7-STATE: rdkafka#producer-1: [thrd:sasl_ssl://kafka-server.]: sasl_ssl://kafka-server.amazonaws.com:9096/bootstrap: Broker changed state CONNECT -> DOWN

Shiva Shankar D
@dshivashankar_twitter

Hi, I have a producer that is continuously producing messages onto a single-partition topic. When I start a consumer with the latest offset, how can I determine from which point the consumer will start getting messages?

I can't be sure that messages produced right after the subscribe call will be included, because the actual assignment of offsets only takes place after the first poll, which internally triggers the rebalance callback. But if I override the rebalance callback and add an event after consumer->assign(), can I be sure that records created after this point will be read by my consumer?

pthalasta
@pthalasta
How do we set sasl.login.callback.handler.class with librdkafka? I do not see it in the CONFIGURATION.md file. I have a custom authentication module and need to set this property in order to connect to my broker.
Yuval Lifshitz
@yuvalif
Hi,
Is there a way to redirect the internal logs from stdout to my own logging mechanism?
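For reference, librdkafka supports installing a log callback (`rd_kafka_conf_set_log_cb()` in the C API), and most bindings expose this; the confluent-kafka Python client, for instance, accepts a standard `logging.Logger`. A minimal sketch of the Python side (the client hookup is commented out and assumed; the handler part is plain stdlib):

```python
import logging

class ListHandler(logging.Handler):
    """Collects formatted log records so they can be routed anywhere we like."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))

logger = logging.getLogger("rdkafka")
logger.setLevel(logging.DEBUG)
handler = ListHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)

# Assumed hookup (confluent-kafka Python client):
# from confluent_kafka import Producer
# p = Producer({"bootstrap.servers": "..."}, logger=logger)

logger.debug("broker state change")  # stands in for a librdkafka log line
print(handler.records[0])
```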
Tim Fox
@purplefox
Is this the right channel for questions about the Confluent Go Kafka client (which uses librdkafka) ?
I am having an issue with Producer.flush() hanging and reporting unflushed messages. But the messages were actually sent, as they were consumed by a consumer.
sarkanyi
@sarkanyi
Hi, @edenhill when do you plan to merge edenhill/librdkafka#2409 ? Please let me know if you want me to do a rebase before that.
Sanjay Patel
@San_j_ay_twitter
Why am I getting this error? kafka: error setting librdkafka config property; name='topic.partitioner', value='murmur2_random', error='No such configuration property: "partitioner"'
I have not set that anywhere in my config file; is it hard-coded somewhere? I am trying to use kafka-c with syslog-ng.
Sean Hanson
@shanson7
Are we allowed to call KafkaConsumer::committed during a rebalance callback? We have done this for a long time with non-incremental rebalance and it seems to work, but while trying to add support for incremental rebalance strategies it seems that the call consistently times out when getting an assign with 0 partitions.
ramsanka
@ramsanka
Does rd_kafka_consumer_poll drain all the messages in the queue in one go?
Shiva Shankar D
@dshivashankar_twitter
Hi, is it always safe to ignore retryable exceptions during consumer poll?
JD
@write2jaydeep
hi guys, I am having a problem with the producer: ProduceRequest failed: Local: Timed out in queue: explicit actions Retry,MsgNotPersisted. Reported here: edenhill/librdkafka#3564. I guess it may be a configuration problem.
Sabyasachi Sengupta
@sabyasg
Hi.. we're writing an rdkafka application that needs to process batches asynchronously. Wondering if there's a way to use the highest-performing consume callback with batching enabled?
Gagandeep kalra
@gagandeepkalra

Hi @edenhill , need your help answering this-

For our scenario, we want to consume from topic using assign instead of subscribe. As per your comment here- https://github.com/confluentinc/confluent-kafka-go/issues/534#issuecomment-698290136

looks like we'd still need FindCoordinator permission. Is there a way we can go about without this access?

For reference, the Java client apparently doesn't require this.

Similar question: would the FindCoordinator call still go out if we don't specify a group.id, or if we set auto.commit to false?

Please let me know if I should open an issue instead. Thanks.

ˈt͡sɛːzaɐ̯
@julius:mtx.liftm.de
[m]

I recently noticed that I can repeatedly query the high watermark of all partitions of all topics with rd_kafka_get_watermark_offsets to answer the most basic question I usually have about a Kafka cluster: "How many messages are arriving where per second". I've thrown together a little tool that does this.

I'd like to ask: Is this a dumb idea? Will it possibly put too much strain on the broker's metadata interface? And also, has this been done before? Maybe there's some nice TUI tool like k9s for Kubernetes, but for Kafka?

Roy Prager
@roy-prager
hi guys. trying to build my project with librdkafka 1.4.2 on my new Apple M1 machine using node-rdkafka, but getting Error: dlopen(/node_modules/node-rdkafka/build/Release/node-librdkafka.node, 0x0001): tried: '/node_modules/node-rdkafka/build/Release/node-librdkafka.node' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/node-librdkafka.node' (no such file), '/usr/lib/node-librdkafka.node' (no such file). Are you familiar with this kind of error?
Krishnan Mani
@krishnan-mani_gitlab
Is there a list of available high-level language bindings for librdkafka?
arvind Garg
@er.arvindgarg_gitlab
Hi there.. I am trying to compile librdkafka version 1.4.4 on Windows Cygwin and I am getting an error related to the use of TIME_UTC. Is there any flag I need to use while compiling/configuring?
pthalasta
@pthalasta
Are there any instructions on how to install confluent-kafka-python on a Mac M1? I see the following error when I install with pip:
from confluent_kafka.cimpl import Consumer as _ConsumerImpl ImportError: dlopen(/opt/homebrew/lib/python3.9/site-packages/confluent_kafka/cimpl.cpython-39-darwin.so, 0x0002): tried: '/opt/homebrew/lib/python3.9/site-packages/confluent_kafka/cimpl.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/cimpl.cpython-39-darwin.so' (no such file), '/usr/lib/cimpl.cpython-39-darwin.so' (no such file). I installed librdkafka using this command: arch -arm64 /opt/homebrew/bin/brew install librdkafka
Abhijeet Singh Bhadouria
@abbey007

Hi @edenhill, I am trying to build an image FROM node:14.17.3 but rdkafka is giving a segmentation fault error.
This is my Dockerfile:
FROM node:14.17.3

RUN npm cache clean --force
RUN apt-get update
RUN apt install -y liblz4-dev
RUN apt install -y libsasl2-dev
RUN apt install -y libssl-dev
RUN apt-get install -y ca-certificates curl gnupg \
g++ make musl-dev \
python3 unzip wget \
git

RUN git clone https://github.com/edenhill/librdkafka.git && \
cd librdkafka && \
./configure --install-deps && \
make && \
make install

ENV BUILD_LIBRDKAFKA=0
ENV LD_LIBRARY_PATH=/usr/local/lib

RUN mkdir -p /home/node && chown -R node:node /home/node
ENV NODE_ENV=local
WORKDIR /home/node
COPY ["package.json", "./"]
RUN npm install husky -g
RUN npm install -g typescript
RUN npm install -g ts-node
RUN npm install
COPY . .
EXPOSE 4003
RUN chown -R node /home/node
USER node
RUN rm -rf dist
RUN tsc
CMD node dist/bin/server.js

Subashini C V
@SubashiniCV1_twitter

@edenhill,
we are using the librdkafka client 1.5.0 with our producer program.
We are trying to use Apache Kafka server 1.1 on the consuming side,
but we are not able to receive any messages on the Kafka server.

Are these versions compatible?
Where can I check the compatibility of the client and server versions?

Can you please suggest?

Thanks in advance,
Suba

arvind Garg
@er.arvindgarg_gitlab

Hi All, I am trying to build librdkafka release version 1.4.4 on Windows and I have VS2017 installed. I am getting some errors related to NuGet packages. Any idea how to fix this?
The project "interceptor_test" is not selected for building in solution configuration "Debug|x64".
Project "C:\VTE\agent_bld_kafka\librdkafka-1.4.4\win32\librdkafka.sln" (1) is building "C:_bld_kafka\librdkafka-1.4.4\win32\librdkafka.vcxproj" (2) on node 1 (default targets).
C:_bld_kafka\librdkafka-1.4.4\win32\librdkafka.vcxproj(257,5): error : This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is packages\zlib.v141.windesktop.msvcstl.dyn.rt-dyn.1.2.8.8\build\native\zlib.v141.windesktop.msvcstl.dyn.rt-dyn.targets.
Done Building Project "C:_bld_kafka\librdkafka-1.4.4\win32\librdkafka.vcxproj" (default targets) -- FAILED.
Done Building Project "C:_bld_kafka\librdkafka-1.4.4\win32\librdkafka.sln" (default targets) -- FAILED.

Build FAILED.

Ed Sinek
@esinek
I'm using the golang Confluent client as a consumer, and in my read-message loop (with a 1-second timeout) I'm trying to catch the following Disconnected state, but it is not recognized as a kafka.Error. I want to use kafka.Error to check whether this is retryable so I don't have to close/reopen the connection and subscribe to the topic again.
Disconnected (after 16603ms in state UP)
github.com/confluentinc/confluent-kafka-go v1.7.0
Zhicheng Ryan Liang
@ryanliang
Hi @edenhill , do you know when version 1.9.0 will be released?
anant158@webkul.com
@anantmaks

I don't know if this is the right section to comment on, but I am trying to use php-rdkafka library (https://arnaud.le-blanc.net/php-rdkafka-doc/phpdoc/index.html) to connect to the Azure event hub as per this solution (https://github.com/Azure/azure-event-hubs-for-kafka/issues/51).
My code for the producer is:

try {
    $conf = new RdKafka\Conf();
    $conf->set('group.id', '$Default');
    $conf->set('bootstrap.servers', 'NAMESPACE.servicebus.windows.net:9093');
    $conf->set('security.protocol', 'SASL_SSL');
    $conf->set('sasl.mechanisms', 'PLAIN');
    $conf->set('sasl.username', '$ConnectionString'); 
    $conf->set('sasl.password', 'Endpoint=sb://NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=syn-wk;SharedAccessKey=7XXX9/8XXXE=;EntityPath=EVENTHUBNAME');
    $conf->set('api.version.request', 'false');
    $conf->set('ssl.ca.location', '/home/users/user/www/html/azurekafka/php-rdkafka-6.x/cacert.pem');
    $conf->set('log_level', (string) LOG_DEBUG);
    $conf->set('debug', 'all');
    $conf->set('auto.offset.reset', 'earliest');
    $conf->set('enable.partition.eof', 'true');

    $producer = new RdKafka\Producer($conf);
    $producer->addBrokers("syndigo-wk.servicebus.windows.net:9093");
    $topic = $producer->newTopic("test");
    for ($i = 0; $i < 5; $i++) {
        $topic->produce(RD_KAFKA_PARTITION_UA, 0, "Message $i");
        $producer->poll(0);
    }

    for ($flushRetries = 0; $flushRetries < 5; $flushRetries++) {
        $result = $producer->flush(10000);
        if (RD_KAFKA_RESP_ERR_NO_ERROR === $result) {
            break;
        }
    }
    if (RD_KAFKA_RESP_ERR_NO_ERROR !== $result) {
        throw new \RuntimeException('Was unable to flush, messages might be lost!');
    }
} catch (Exception $e) {
    var_dump($e->getMessage());
}

and the consumer is:

<?php
try {
    $conf = new RdKafka\Conf();
    $conf->set('group.id', '$Default');
    $conf->set('bootstrap.servers', 'NAMESPACE.servicebus.windows.net:9093');
    $conf->set('security.protocol', 'SASL_SSL');
    $conf->set('sasl.mechanisms', 'PLAIN');
    $conf->set('sasl.username', '$ConnectionString'); 
    $conf->set('sasl.password', 'Endpoint=sb://NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=syn-wk;SharedAccessKey=7XXX9/8XXXE=;EntityPath=EVENTHUBNAME');
    $conf->set('api.version.request', 'false');
    $conf->set('ssl.ca.location', '/home/users/user/www/html/azurekafka/php-rdkafka-6.x/cacert.pem');
    $conf->set('log_level', (string) LOG_DEBUG);
    $conf->set('debug', 'all');
    $conf->set('auto.offset.reset', 'earliest');
    $conf->set('enable.partition.eof', 'true');

    $rk = new RdKafka\Consumer($conf);
    $rk->addBrokers("syndigo-wk.servicebus.windows.net:9093");
    $topicConf = new RdKafka\TopicConf();
    $topicConf->set('auto.commit.interval.ms', 100);
    $topicConf->set('offset.store.method', 'broker');
    $topicConf->set('auto.offset.reset', 'earliest');
    $topic = $rk->newTopic("test", $topicConf);

    $topic->consumeStart(0, RD_KAFKA_OFFSET_STORED);
} catch (Exception $e) {
    var_dump($e);die;
}
try{
    while (true) {
        $message = $topic->consume(0, 10000);
        switch ($message->err) {
            case RD_KAFKA_RESP_ERR_NO_ERROR:
                break;
            case RD_KAFKA_RESP_ERR__PARTITION_EOF:
                echo "No more messages; will wait for more\n";
                $LoggerOb->putLog("AUFW: No more messages; will wait for more");
                break;
            case RD_KAFKA_RESP_ERR__TIMED_OUT:
                echo "Timed out\n";
                $LoggerOb->putLog("AUFW: Timed Out");
                break;
            default:
                throw new \Exception($message->errstr(), $message->err);
                break;
        }

    }
} catch (Exception $e) {
       var_dump($e->getMessage());
}

I am unable to read the produced messages in the consumer; it outputs null in the browser. I am sharing the log generated while running this file on the terminal in the reply to this comment.

Nikos Kostoulas
@nkostoulas
hey! Why do RD_KAFKA_EVENT_OAUTHBEARER_TOKEN_REFRESH events contain sasl.oauthbearer.config?
Kanthi
@subkanthi
Is there a way to get the broker state (whether it's up, running, or reachable)? rd_kafka_subscription doesn't throw an error if the broker is not reachable.
Kanthi
@subkanthi
I understand most of the operations are asynchronous, but is there a blocking call to read the broker state? We have a client/server implementation and are trying to show useful errors to the user when the broker is not reachable.
Sebastien Armand
@khepin

:wave: I'm using php-rd-kafka which is a librdkafka wrapper for PHP. The wrapper does not change the semantics of librdkafka from what I understand.
Now, what I need to do is upon receiving an HTTP request, perform some work (talk to other services), gather some data and push that to a kafka topic.

I noticed the latency on producing and flushing a message was quite high. Kafka is hosted far away from my app, so a round-trip latency of around 45 ms is to be expected,
but on the first call to produce + flush I'm getting 300 ms.

If I add a subsequent call to produce + flush, now we're around 50ms.

$conf = new RdKafka\Conf();
$rk = new RdKafka\Producer($conf);
$topic = $rk->newTopic("test");

dump_time(function () use ($topic, $rk) {
    $topic->produce(RD_KAFKA_PARTITION_UA, 0, "message");
    $rk->flush(1000);
}); // ~300ms
dump_time(function () use ($topic, $rk) {
    $topic->produce(RD_KAFKA_PARTITION_UA, 0, "message");
    $rk->flush(1000);
}); // ~50ms

I also noted that if I first called ->produce() (without calling flush), and sleep(1), then the second call to
produce + flush also benefits from very low latency.

And since that first call to produce is non-blocking, I started thinking I could make it happen very early on, before my program starts talking to other services, so that when I'm ready to produce my actual message the latency would be very low: the additional ~200 ms of waiting would have happened in a separate thread, making my response time ~200 ms faster.

The problem, though, is that now I'm sending useless messages to my topic just for the sake of ensuring that all the topic metadata is readily available for publishing the next one.

So I was wondering if anything is available in librdkafka that would have the same effects without requiring me to produce a message but that would still help me gain the 200 or so ms.

I've tried $rk->poll(0) but no effect there.

ˈt͡sɛːzaɐ̯
@julius:mtx.liftm.de
[m]
I wonder if you could get the same effect by requesting the metadata. And also what your latency would be if you used fixed partitioning.
(But I so far only fought with consumer startup / rebalancing. So no idea.)
Sebastien Armand
@khepin
metadata request in the PHP extension is a synchronous operation, so even if that helped, the same overall time would be spent.
I'm not sure what you mean by fixed partitioning though
ˈt͡sɛːzaɐ̯
@julius:mtx.liftm.de
[m]
Iirc, it needs to request the metadata to be able to decide on a partition for the message - it's hash(key) % topic.n_partitions after all. I'm not quite sure, but if you just specify "This message belongs to partition 42", it might skip that. (Heh, the term "fixed partitioning" even appears in the docs: https://arnaud.le-blanc.net/php-rdkafka-doc/phpdoc/rdkafka-producertopic.produce.html I thought I had just made that up.)
(In general though, I wonder. Doesn't PHP (or resp whatever is invoking PHP - is fastcgi still a thing?) have some way of pooling the kafka connection over multiple requests?)
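The key-hashing point can be sketched directly. If I read the source right, librdkafka's default partitioner (consistent_random) hashes the key with CRC32 (murmur2_random matches the Java client), so with a keyed message the partition is fully determined once the partition count is known from metadata. A stdlib sketch of the idea, not the library's exact code path:

```python
from zlib import crc32

def pick_partition(key: bytes, n_partitions: int) -> int:
    # Sketch of librdkafka's "consistent" partitioner idea:
    # hash the key and take it modulo the partition count.
    return crc32(key) % n_partitions

# With a fixed partition ("This message belongs to partition 42"),
# no metadata-dependent hashing is needed at all.
print(pick_partition(b"order-1234", 6))
```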
Sebastien Armand
@khepin
no, it does not.
Some extensions make something like this possible (pdo/mysql, memcached, redis), but the kafka one currently does not. There's an open issue on the repo with folks chatting about that, which would definitely solve this issue if it were possible.
issue: arnaud-lb/php-rdkafka#42
Abuntxa
@Abuntxa

I am using the Kafka .NET client (https://github.com/confluentinc/confluent-kafka-dotnet), which under the hood uses librdkafka. We are performing some load tests and we are seeing an overuse of threads by the librdkafka library. Our process contains 2 consumer instances and a single producer pointing to the same broker.
When we analyzed a memory dump we found that the librdkafka library was using 36 threads (which seems like too many for the number of high-level objects).

23 threads [stats]: 24 28 30 31 32 33 34 35 36 38 ...
00007fff9eee0544 ntdll!NtWaitForMultipleObjects+0x14
00007fff61b4d59e KERNELBASE!WaitForMultipleObjectsEx+0xfe
00007fff61b4d48e KERNELBASE!WaitForMultipleObjects+0xe
00007fff528b0b64 librdkafka!rd_kafka_error_txn_requires_abort+0x6694
00007fff528b104b librdkafka!rd_kafka_error_txn_requires_abort+0x6b7b
00007fff52827f9e librdkafka!rd_kafka_topic_partition_list_sort+0x6eae
00007fff52827cf9 librdkafka!rd_kafka_topic_partition_list_sort+0x6c09
00007fff527e1d45 librdkafka!rd_kafka_wait_destroyed+0xa7b5
00007fff527dc3f7 librdkafka!rd_kafka_wait_destroyed+0x4e67
00007fff527e3bba librdkafka!rd_kafka_wait_destroyed+0xc62a
00007fff528b0c5b librdkafka!rd_kafka_error_txn_requires_abort+0x678b
00007fff61e57974 kernel32!BaseThreadInitThunk+0x14
00007fff9ee9a2f1 ntdll!RtlUserThreadStart+0x21

3 threads [stats]: 22 26 63
00007fff9eee0544 ntdll!NtWaitForMultipleObjects+0x14
00007fff61b4d59e KERNELBASE!WaitForMultipleObjectsEx+0xfe
00007fff61b4d48e KERNELBASE!WaitForMultipleObjects+0xe
00007fff528b0b64 librdkafka!rd_kafka_error_txn_requires_abort+0x6694
00007fff528b104b librdkafka!rd_kafka_error_txn_requires_abort+0x6b7b
00007fff52828605 librdkafka!rd_kafka_topic_partition_list_sort+0x7515
00007fff527d724e librdkafka!rd_kafka_thread_cnt+0x25e
00007fff528b0c5b librdkafka!rd_kafka_error_txn_requires_abort+0x678b
00007fff61e57974 kernel32!BaseThreadInitThunk+0x14
00007fff9ee9a2f1 ntdll!RtlUserThreadStart+0x21

4 threads [stats]: 21 25 41 46
00007fff9eedfa74 ntdll!NtWaitForSingleObject+0x14
00007fff5e4ea81d mswsock!SockWaitForSingleObject+0x1bd
00007fff5e500705 mswsock!MSAFD_WSPPoll+0x369
00007fff5e4f7f71 mswsock!WSPIoctl+0xf4e1
00007fff5fdb5cca ws2_32!WSAIoctl+0x19a
00007fff5fde0c7c ws2_32!WSAPoll+0x1ec
00007fff52857101 librdkafka!rd_kafka_topic_partition_available+0x2d81
00007fff52856e73 librdkafka!rd_kafka_topic_partition_available+0x2af3
00007fff527e1d12 librdkafka!rd_kafka_wait_destroyed+0xa782
00007fff527dc3f7 librdkafka!rd_kafka_wait_destroyed+0x4e67
00007fff527e3e7e librdkafka!rd_kafka_wait_destroyed+0xaee
00007fff528b0c5b librdkafka!rd_kafka_error_txn_requires_abort+0x678b
00007fff61e57974 kernel32!BaseThreadInitThunk+0x14
00007fff9ee9a2f1 ntdll!RtlUserThreadStart+0x21

2 threads [stats]: 23 27
00007fff9eee0544 ntdll!NtWaitForMultipleObjects+0x14
00007fff61b4d59e KERNELBASE!WaitForMultipleObjectsEx+0xfe
00007fff61b4d48e KERNELBASE!WaitForMultipleObjects+0xe
00007fff528b0b64 librdkafka!rd_kafka_error_txn_requires_abort+0x6694
00007fff528b104b librdkafka!rd_kafka_error_txn_requires_abort+0x6b7b
00007fff52827f9e librdkafka!rd_kafka_topic_partition_list_sort+0x6eae
00007fff52827cf9 librdkafka!rd_kafka_topic_partition_list_sort+0x6c09
00007fff527e1d45 librdkafka!rd_kafka_wait_destroyed+0xa7b5
00007fff527dfeb6 librdkafka!rd_kafka_wait_destroyed+0x8926
00007fff527e3b8d librdkafka!rd_kafka_wait_destroyed+0xc5fd
00007fff528b0c5b librdkafka!rd_kafka_error_txn_requires_abort+0x678b
00007fff61e57974 kernel32!BaseThreadInitThunk+0x14
00007fff9ee9a2f1 ntdll!RtlUserThreadStart+0x21

There were a couple more threads, one of which had a managed producer stack.

Does anyone know what might be causing this issue? We do quite intensive high-level polling (once every 100 ms or so) until reaching a maximum number of working messages per partition, at which point we pause the partitions (but we continue the high-level polling, as we want to minimize latency once a message is delivered to the client).

Phil Nelson
@phil-nelson
g'day, I am seeing that kinit is being run continuously in one of my environments where I'm running consumer processes, using librdkafka 1.7.0. What might be causing this? I haven't configured sasl.kerberos.min.time.before.relogin, so I thought the default 60s would be used. Any pointers? The consumers otherwise seem to be communicating with Kafka fine and consuming messages, albeit a bit slower than usual because of the high load, which I think is caused by all of the kinit commands.
Simon Shanks
@simonshanks_twitter
Is it possible to add a new topic subscription at runtime? This suggests it's not possible: segmentio/kafka-go#613. Is it only possible to call rd_kafka_subscribe once with multiple topics (i.e. you shouldn't call rd_kafka_subscribe again with a new topic)?
Simon Shanks
@simonshanks_twitter
Subscribing to 1 topic (reset set to 'earliest', commits off), then later adding/subscribing to another topic causes the first topic to replay all messages again (presumably because the rebalance happened again)... which I assume means you shouldn't do additional subscriptions at runtime (?)
Adrian Costin
@adriancostin6
hello, I keep getting these weird linker errors on Linux related to curl; it compiles just fine on Windows though:
/home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:163: undefined reference to `curl_easy_perform'
/usr/bin/ld: /home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:167: undefined reference to `curl_easy_getinfo'
/usr/bin/ld: ../src/librdkafka.a(rdhttp.c.o): in function `rd_http_req_get_content_type':
/home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:184: undefined reference to `curl_easy_getinfo'
/usr/bin/ld: ../src/librdkafka.a(rdhttp.c.o): in function `rd_http_post_expect_json':
/home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:316: undefined reference to `curl_easy_setopt'
/usr/bin/ld: /home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:317: undefined reference to `curl_easy_setopt'
/usr/bin/ld: /home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:319: undefined reference to `curl_easy_setopt'
/usr/bin/ld: /home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:321: undefined reference to `curl_easy_setopt'
/usr/bin/ld: ../src/librdkafka.a(rdhttp.c.o): in function `rd_http_global_init':
/home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdhttp.c:443: undefined reference to `curl_global_init'
/usr/bin/ld: ../src/librdkafka.a(rdkafka_sasl_oauthbearer_oidc.c.o): in function `rd_kafka_oidc_build_headers':
/home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdkafka_sasl_oauthbearer_oidc.c:116: undefined reference to `curl_slist_append'
/usr/bin/ld: /home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdkafka_sasl_oauthbearer_oidc.c:117: undefined reference to `curl_slist_append'
/usr/bin/ld: /home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdkafka_sasl_oauthbearer_oidc.c:119: undefined reference to `curl_slist_append'
/usr/bin/ld: ../src/librdkafka.a(rdkafka_sasl_oauthbearer_oidc.c.o): in function `rd_kafka_oidc_token_refresh_cb':
/home/adrian/repos/spoofy/build/_deps/librdkafka-src/src/rdkafka_sasl_oauthbearer_oidc.c:382: undefined reference to `curl_slist_free_all'
collect2: error: ld returned 1 exit status
make[4]: *** [_deps/librdkafka-build/examples/CMakeFiles/producer.dir/build.make:103: _deps/librdkafka-build/examples/producer] Error 1
make[3]: *** [CMakeFiles/Makefile2:928: _deps/librdkafka-build/examples/CMakeFiles/producer.dir/all] Error 2
do you guys have any idea if this is recent? I remember being able to compile a month ago
Adrian Costin
@adriancostin6
nevermind, fixed it myself. opened a pull request here: edenhill/librdkafka#3909
all the best