    flyleegit
    @flyleegit
    Yes, I have already installed it.
    flyleegit
    @flyleegit
    I have solved the issue by exporting LD_LIBRARY_PATH. Thanks!
    COLE Edouard
    @sandvige
    Hello, we are handling tons of messages with php-rdkafka and librdkafka, and this is a very good job, guys. We're wondering if any of you have already encountered this behaviour: when a consumer has to be stopped properly, we send a SIGINT to the PHP process, and we call pcntl_signal_dispatch() after each consumed message. We registered a callback on SIGINT that simply tells the main loop to quit as soon as possible. This works like a charm when the consumer has been running for a few minutes, but when the PHP process has been running for... let's say 10 days, it never finishes. It looks like it is waiting for things to finish, but that never happens, and we have to kill it.
    We're using librdkafka (2213fb29f98a7a73f22da21ef85e0783f6fd67c4) and php-rdkafka (86feceba2469dd3442d96d0f73ea65c916b8f17f) and PHP 5.6.30
    (and kafka version: kafka_2.11-0.10.0.1)
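    A minimal sketch of the shutdown pattern described above, assuming php-rdkafka's high-level consumer; the broker address, group id, and topic name are illustrative:

        <?php
        // Flag flipped by the SIGINT handler; the main loop checks it after each message.
        $running = true;
        pcntl_signal(SIGINT, function () use (&$running) {
            $running = false;
        });

        $conf = new RdKafka\Conf();
        $conf->set('metadata.broker.list', 'kafka:9092');
        $conf->set('group.id', 'my-consumer-group');

        $consumer = new RdKafka\KafkaConsumer($conf);
        $consumer->subscribe(['my-topic']);

        while ($running) {
            $message = $consumer->consume(1000); // 1s timeout keeps the loop responsive
            if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
                // ... process $message ...
            }
            // Deliver any pending SIGINT to the handler registered above.
            pcntl_signal_dispatch();
        }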
    Magnus Edenhill
    @edenhill
    @sandvige 0.9.1 is really old and the consumer has seen a lot of fixes since then. I suggest you try librdkafka v0.9.5 (or 0.11.0, which will be released soon)
    COLE Edouard
    @sandvige
    @edenhill Thanks!
    Do you maintain an upgrade guide anywhere that covers what has changed since then? We noticed the TopicConf class has been moved
    (we're investigating, as it looks like this class is not found, but I'm wondering why... it looks more like a configuration issue)
    COLE Edouard
    @sandvige
    I was talking about the PHP classes, sorry :). Thanks!
    Richard
    @computerichy

    Hello,

    Since restarting our web servers, librdkafka is failing to initialise. It's possible that PHP has been updated, but it has been running PHP 7 throughout and was working at one point. This is the error being logged in the Apache error logs:

    PHP Warning: PHP Startup: rdkafka: Unable to initialize module\nModule compiled with build ID=API20151012,NTS\nPHP compiled with build ID=API20151012,TS\nThese options need to match\n in Unknown on line 0

    Here's the current 'php -v' output:

    PHP 7.0.18 (cli) (built: Apr 11 2017 14:25:57) ( NTS )
    Copyright (c) 1997-2017 The PHP Group
    Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
    with Zend OPcache v7.0.18, Copyright (c) 1999-2017, by Zend Technologies

    Magnus Edenhill
    @edenhill
    @computerichy looks like the module was compiled for another PHP version or with other flags. Try rebuilding it
    Richard
    @computerichy
    @edenhill Thanks Magnus, the module was installed via PECL. Running 'pecl list' prints 'rdkafka 3.0.3 stable'.
    Is it supposed to be marked as non-thread-safe?
    Magnus Edenhill
    @edenhill
    @computerichy librdkafka is thread-safe; I have no idea if the PHP module is, I've never used it :|
    Richard
    @computerichy
    @edenhill Ahhh, turns out the PHP module isn't. The solution was to switch back from the Worker MPM to Prefork. Thanks!
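    A small check that can help confirm the mismatch above: PHP exposes the PHP_ZTS constant, which is true on thread-safe (TS) builds and false on non-thread-safe (NTS) builds. Running this once from the CLI and once through Apache shows whether the two builds differ:

        <?php
        // true on a thread-safe (ZTS/TS) build, false on a non-thread-safe (NTS) build.
        var_dump(PHP_ZTS);
        echo PHP_VERSION, PHP_EOL;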
    remizyaka
    @remizyaka
    Hi. Is it possible to commit an offset manually when using the low-level consumer? Consumer::offsetStore for some reason isn't updating the stored offset after the offset is automatically reset.
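    For context, a rough sketch of how offsetStore is typically used with the low-level consumer; the broker, topic, and config values are illustrative, and the sketch assumes stored offsets are auto-committed on auto.commit.interval.ms:

        <?php
        $conf = new RdKafka\Conf();
        $conf->set('group.id', 'my-consumer-group');

        $consumer = new RdKafka\Consumer($conf);
        $consumer->addBrokers('kafka:9092');

        $topicConf = new RdKafka\TopicConf();
        $topicConf->set('auto.commit.enable', 'true');    // commit whatever offsetStore() stored
        $topicConf->set('auto.offset.reset', 'smallest'); // where to start when no stored offset exists

        $topic = $consumer->newTopic('my-topic', $topicConf);
        $topic->consumeStart(0, RD_KAFKA_OFFSET_STORED);

        while (true) {
            $message = $topic->consume(0, 1000);
            if ($message !== null && $message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
                // ... process $message ...
                // Store the offset only after the message has been handled.
                $topic->offsetStore($message->partition, $message->offset);
            }
        }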
    Thomas Ploch
    @tPl0ch
    Hi, since Kafka 0.10.1 the heartbeat was moved out of poll into a separate heartbeat thread, which turns the formerly single-threaded consumers into multi-threaded ones. How does php-rdkafka handle that now?
    Magnus Edenhill
    @edenhill
    @tPl0ch that's a Java client implementation change. librdkafka has always used background threads for all broker communications, including heartbeats
    Thomas Ploch
    @tPl0ch
    @edenhill thanks, we are just sitting in a Confluent training and were a bit puzzled, but that makes sense :)
    Magnus Edenhill
    @edenhill
    have fun :)
    Thomas Ploch
    @tPl0ch
    @edenhill https://github.com/confluentinc/libserdes - we have built a PHP implementation of Avro serialization and schema registry, but I think for high throughput, low-latency requirements it might be better to actually write a PHP extension. Are you aware of any people/projects already working on that?
    Magnus Edenhill
    @edenhill
    @tPl0ch I don't believe libserdes will necessarily make your PHP app faster since you can only use the schema-registry integration, not the actual Avro Serdes. You'll want to use a native Avro serdes for PHP to make it meaningful (deserializing an avro-formatted message value to a C or C++ object won't do you any good)
    Thomas Ploch
    @tPl0ch
    OK, the schema-registry integration is limited by the network latency anyway.
    So it does not really make sense.
    Thanks.
    Magnus Edenhill
    @edenhill
    And schemas are cached after the initial registration and lookup, so performance-wise it won't make a difference
    Thomas Ploch
    @tPl0ch
    Yeah in our PHP implementation we have various cache adapters so teams can actually choose what fits them best.
    Magnus Edenhill
    @edenhill
    :+1:
    Thomas Ploch
    @tPl0ch
    So if anybody is interested in using the Schema Registry within their PHP projects:
    https://github.com/flix-tech/schema-registry-php-client
    Magnus Edenhill
    @edenhill
    Great stuff :)
    Thomas Ploch
    @tPl0ch
    Thanks, we will also release the higher-level Avro serialization lib covering the whole libserdes flow (serialize -> schema registry -> wire protocol, and wire protocol -> schema registry -> deserialize) in a few weeks, when the lib passes our open-source quality criteria (mainly docs).
    Magnus Edenhill
    @edenhill
    :100:
    Joe Green
    @joegreen88
    Is it possible with php-rdkafka to commit an offset manually with the high-level consumer?
    Also, is it possible to seek to a particular offset in the high-level consumer?
    thx
    COLE Edouard
    @sandvige
    @joegreen88 I think you can't commit a specific offset, nor seek to a particular offset
    By design, Kafka is not a database, you cannot use it as a key/value store :P
    Joe Green
    @joegreen88
    @sandvige when I say commit an offset manually, I don't mean a custom offset, I just mean choosing when to send the commit
    when the consumer receives a message, I probably want to do something with it and then commit the offset
    COLE Edouard
    @sandvige
    Sure, there's a commit() function
    Joe Green
    @joegreen88
    and I should turn off the autocommit setting, I suppose
    Joe Green
    @joegreen88
    thanks :D
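    A short sketch of the pattern discussed above, assuming php-rdkafka's high-level KafkaConsumer; the broker, group id, and topic name are illustrative:

        <?php
        $conf = new RdKafka\Conf();
        $conf->set('metadata.broker.list', 'kafka:9092');
        $conf->set('group.id', 'my-consumer-group');
        // Disable autocommit so offsets are only committed after processing.
        $conf->set('enable.auto.commit', 'false');

        $consumer = new RdKafka\KafkaConsumer($conf);
        $consumer->subscribe(['my-topic']);

        while (true) {
            $message = $consumer->consume(1000);
            if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
                // ... process $message ...
                // Synchronously commit this message's offset once it is handled.
                $consumer->commit($message);
            }
        }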
    Seth Albanese
    @salbanese
    Anyone know of an example of handling connection failures? It seems that if rdkafka can't connect to the brokers it just blocks indefinitely, which is less than ideal
    COLE Edouard
    @sandvige
    Do you expect messages to be dropped when kafka is not reachable?
    (I assume you're asking for the publishing side, not the consuming side, right?)
    Seth Albanese
    @salbanese
    yes, the publish side. I want to take some compensating action and complete the request, but it just hangs there
    OK, seems like message.timeout.ms is what I was looking for
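    A rough sketch of the producing side with a delivery timeout and a delivery-report callback for compensating action; the broker, topic, and the 5000 ms value are illustrative:

        <?php
        $conf = new RdKafka\Conf();
        $conf->set('metadata.broker.list', 'kafka:9092');
        $conf->setDrMsgCb(function ($producer, $message) {
            if ($message->err !== RD_KAFKA_RESP_ERR_NO_ERROR) {
                // Compensating action: log, retry elsewhere, mark the request as failed, etc.
                error_log('Delivery failed: ' . rd_kafka_err2str($message->err));
            }
        });

        $topicConf = new RdKafka\TopicConf();
        // Give up on a message (and fire the callback with an error) after 5 seconds.
        $topicConf->set('message.timeout.ms', '5000');

        $producer = new RdKafka\Producer($conf);
        $topic = $producer->newTopic('my-topic', $topicConf);
        $topic->produce(RD_KAFKA_PARTITION_UA, 0, 'payload');

        // Serve delivery reports; without polling, the callback never runs.
        while ($producer->getOutQLen() > 0) {
            $producer->poll(100);
        }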