    Magnus Edenhill
    @edenhill
    try rdkafka_performance -h for help
    Craig Patrick
    @cpats007
    perfect, thank you
    ahhh I was checking the source - I’ll go for that, superb - thanks
    Magnus Edenhill
    @edenhill
    Make sure to buckle up before you run it!
    Craig Patrick
    @cpats007
    :D
    Magnus Edenhill
    @edenhill
    (that will inevitably come back and bite me)
    Craig Patrick
    @cpats007
    lol
    Craig Patrick
    @cpats007
    hmmm, slow, but that could be our Kafka set up :/
    % Sending 500000 messages of size 100 bytes
    % 500000 messages produced (50000000 bytes), 0 delivered (offset 0, 0 failed) in 1000ms: 0 msgs/s and 0.00 Mb/s, 0 produce failures, 500000 in queue, no compression
    % 500000 messages produced (50000000 bytes), 500000 delivered (offset 0, 0 failed) in 1809ms: 276372 msgs/s and 27.64 Mb/s, 0 produce failures, 0 in queue, no compression
    Magnus Edenhill
    @edenhill
    try more messages to let it ramp up
    5 M messages
    Craig Patrick
    @cpats007
    yeah I’ll give it a go - I’m trying to replicate how we’re using it and we’re sending like 1 or 2 messages from each app per instantiation
    Craig Patrick
    @cpats007
    any thoughts why this is happening:
    ./rdkafka_performance -P -t test2 -s 1500 -c 500000 -v -l -u -m "_____________Test1:TwoBrokers:500kmsgs:1500bytes" -S 1 -a 1 -b 192.168.50.194,192.168.50.195
    % Sending 500000 messages of size 30 bytes
    ignoring the size options
    Paul Dragoonis
    @dragoonis
    not seen this tool before
    looks interesting
    Craig Patrick
    @cpats007
    also, am I right in thinking -p should set the number of partitions for the topic? or is it a specific partition?
    Magnus Edenhill
    @edenhill
    @cpats007 -p sets the partition to produce to, if you leave it out it will use the default partitioner to select partition
    I'm not sure why it misses -s, weird
    oh, right, -m sets a pattern, ignoring -s.
    so the length will be the length of the pattern
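    To illustrate the distinction (a sketch only; flags as used in the commands above, with a placeholder broker address and topic):

    ```shell
    # -m supplies a message pattern, so the payload length comes from the
    # pattern itself and any -s value is ignored:
    ./rdkafka_performance -P -t test2 -c 500000 -m "pattern-payload" -b 192.168.50.194

    # Without -m, -s sets the payload size (here 1500 bytes), and -p pins
    # production to one partition instead of using the default partitioner:
    ./rdkafka_performance -P -t test2 -c 500000 -s 1500 -p 0 -b 192.168.50.194
    ```
    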
    Craig Patrick
    @cpats007
    no, it must have been some other reason, the -s is working now, so perhaps I had the configs set up weirdly or something
    Neil Young
    @nyoung
    heya folks - trying to use the php-rdkafka tool and it seems to be working well for our use case, except I can't figure out how to kill a publish or setup a publish timeout if there is some kind of connection error on all brokers
    it just seems to sit and spin for a long time for me - is there some way to force it to fail from PHP?
    Craig Patrick
    @cpats007
    what is the issue you are having, so if the producer can’t produce, you want it to fail in PHP?
    Neil Young
    @nyoung
    yeah, i don't want the php request to sit and wait forever - i need it to drop the message on the floor and complete
    Craig Patrick
    @cpats007
    does this not work:
    $topicConfig = new TopicConf();
    $topicConfig->set('message.timeout.ms', '1000');
    Neil Young
    @nyoung
    trying it
    thought i had been through all the settings - that one seems obvious
    (derp)
    Craig Patrick
    @cpats007
    obviously you’ll need to change it for your code, but then this:
    /** @var ProducerTopic $kafkaTopic */
    $kafkaTopic = $this->getProducer()->newTopic($topicName, $topicConfig);
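    Putting the two fragments together, a minimal end-to-end sketch (broker address, topic name and payload are placeholders; assumes the php-rdkafka extension is installed):

    ```php
    <?php
    // Producer sketch with a 1-second delivery timeout, so a dead broker
    // fails the message instead of blocking the PHP request forever.
    $conf = new RdKafka\Conf();
    $producer = new RdKafka\Producer($conf);
    $producer->addBrokers('192.168.50.194:9092');

    $topicConfig = new RdKafka\TopicConf();
    $topicConfig->set('message.timeout.ms', '1000');

    /** @var RdKafka\ProducerTopic $kafkaTopic */
    $kafkaTopic = $producer->newTopic('test2', $topicConfig);
    $kafkaTopic->produce(RD_KAFKA_PARTITION_UA, 0, 'payload');

    // Serve delivery callbacks until the queue drains; messages still
    // queued past message.timeout.ms are reported as failed, not waited on.
    while ($producer->getOutQLen() > 0) {
        $producer->poll(100);
    }
    ```
    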
    Neil Young
    @nyoung
    yeah
    Craig Patrick
    @cpats007
    :)
    Neil Young
    @nyoung
    i'm loading a topic config
    Craig Patrick
    @cpats007
    :thumbsup:
    Neil Young
    @nyoung
    Aw yeah, that worked - thanks @cpats007
    Craig Patrick
    @cpats007
    no problem
    Neil Young
    @nyoung
    @cpats007 did you have any luck with improving the performance on 1 or 2 kafka messages per instantiation?
    Craig Patrick
    @cpats007
    @nyoung I’ve got it to around somewhere between 10ms and 28ms in different apps writing data to Kafka with a single message - these are messages of around 1.5k (1500 bytes) so I guess they could be considered large?
    Mangoer
    @mangoer-ys
    when 'offset.store.method' is set to 'broker', where on the broker are the offsets actually stored?
    does anyone know?
    Magnus Edenhill
    @edenhill
    @mangoer-ys Offsets are written to the __consumer_offsets topic (by the broker)
    Mangoer
    @mangoer-ys
    @edenhill But I'm puzzled: the last modification time of the __consumer_offsets log file doesn't change when a consumer commits an offset
    and the size of the __consumer_offsets file is 0.....
    Magnus Edenhill
    @edenhill
    @mangoer-ys that's a replicated topic with 50 (by default) partitions
    Mangoer
    @mangoer-ys
    I'm sorry, I mean the size of the log files in __consumer_offsets-0 through -49 is 0
    Magnus Edenhill
    @edenhill
    okay. are you using manual or auto commits?
    and have you configured a group.id? (required for broker based commits)
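    A hedged sketch of the setup Magnus describes, using the php-rdkafka legacy consumer from earlier in the conversation (group name, broker address and topic are placeholders):

    ```php
    <?php
    // Broker-based offset commits require a group.id on the global config;
    // without it, nothing is written to __consumer_offsets.
    $conf = new RdKafka\Conf();
    $conf->set('group.id', 'my-consumer-group');

    $topicConf = new RdKafka\TopicConf();
    $topicConf->set('offset.store.method', 'broker');
    $topicConf->set('auto.commit.enable', 'true'); // auto-commit stored offsets

    $consumer = new RdKafka\Consumer($conf);
    $consumer->addBrokers('192.168.50.194:9092');

    // Resume from the last committed offset on partition 0.
    $topic = $consumer->newTopic('test2', $topicConf);
    $topic->consumeStart(0, RD_KAFKA_OFFSET_STORED);
    ```
    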
    Mangoer
    @mangoer-ys
    Oh, I see why now. My mistake: I was only looking at the modification time of the __consumer_offsets directories, when in fact only one log file across the 50 __consumer_offsets directories had actually been modified.