Todd Palino
@toddpalino
By default, Burrow consumes __consumer_offsets from the beginning. If you want to change this behavior, and always start from the tail (throwing away any old offsets on startup), you need to set the start-latest configuration to true - https://github.com/linkedin/Burrow/wiki/Consumer-Kafka
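For reference, a minimal sketch of what that might look like in the consumer section of burrow.toml. Everything here except start-latest=true (section name, cluster, servers) is an illustrative placeholder, not from the message above; see the linked wiki page for the full option list:

```toml
[consumer.mykafka]
class-name="kafka"
cluster="mykafka"
servers=[ "kafka01:9092" ]
# Start from the tail of __consumer_offsets, discarding old offsets on startup
start-latest=true
```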
ericfgarcia
@ericfgarcia
Howdy
Quick question: I saw this ticket, linkedin/Burrow#440, and was wondering if there was a timeline for Kafka 2.0 support?
Todd Palino
@toddpalino
As noted in the ticket, it's really a factor of when Sarama, the underlying Kafka library, supports 2.0. Right now it looks like some work has been done in their master branch, but it has not made it to release yet.
ericfgarcia
@ericfgarcia
Thanks for the clarification @toddpalino. I'll keep an eye on that project
ericfgarcia
@ericfgarcia
@toddpalino I noticed this PR linkedin/Burrow#447 and was wondering what the timeline would be to get it merged into master? Also, is there any testing required for this and do you need some volunteers for that?
Todd Palino
@toddpalino
I'm going to ask @bai to take a look at it. The testing that's already in CI should be sufficient for this. I've had to step back from some of the active work a little bit, but @ratishr from the Kafka SRE team here is taking lead on it.
ericfgarcia
@ericfgarcia
@toddpalino @bai Appreciate you guys looking at this and getting it merged to master.
SanketMore
@SanketMore
Hi, I'm using Burrow and changed the HTTP path to start with /burrow/v3 instead of /v3.
https://github.com/linkedin/Burrow/blob/master/core/internal/httpserver/coordinator.go#L127
Is this something others would want? I can submit a PR for this change.
Karthick Nagarajan
@nkchenni
Hi @toddpalino & others, I've been running Burrow for 3 years without an issue. Now, after a service restart, I'm unable to get the actual consumer lists from Burrow. All along the broker was running with log.message.format.version=0.9.0, and now log.message.format.version=0.11.0. Will this cause an issue in Burrow?
darkknight100
@darkknight100

Hello, I was trying to install Burrow on an AWS EC2 instance, but when I run "go get github.com/linkedin/Burrow", this error comes up:

In file included from /usr/include/features.h:447:0,
                 from /usr/include/bits/libc-header-start.h:33,
                 from /usr/include/stdint.h:26,
                 from /usr/lib/gcc/x86_64-redhat-linux/7/include/stdint.h:9,
                 from src/github.com/DataDog/zstd/zstd.go:6:
/usr/include/gnu/stubs.h:7:11: fatal error: gnu/stubs-32.h: No such file or directory
 # include <gnu/stubs-32.h>
           ^~~~~~~~~~~~~~~~

compilation terminated. I have googled it and installed all the dependencies, but it still gives the same error.

Todd Palino
@toddpalino
@darkknight100 - You're getting a failure underneath in DataDog? That's not a dependency of Burrow, so I have no idea what you might be running into there. That looks like something underlying in go
@nkchenni the log message format on the brokers should not affect Burrow at all. Are you running the most recent version of Burrow?
Todd Palino
@toddpalino
So, I'm realizing that when I transferred out of full time Kafka work within LinkedIn, things took a little bit of a stagnant turn. I apologize for this - it was not the intention. I'm checking into how things can be properly supported moving forwards (by me or someone else)
And I really appreciate @bai for being there and handling a lot of stuff in the meantime.
Vlad Gorodetsky
@bai
:+1: I've been trying to be extremely careful (maybe too careful) and only ship things that we've tested at scale. Other than that, that was quite a mergefest, thanks @toddpalino!
Peter Bukowinski
@pmbuko
Greetings. I tested upgrading to burrow v1.2.0 last week. When I tried setting kafka-version="2.0.1" in burrow.toml to match my kafka broker version, burrow panics with this message: panic: Unknown Kafka Version: 2.0.1
Release highlights specifically mention support for Kafka up to version 2.1.0, so I’m not sure what’s wrong.
Vlad Gorodetsky
@bai
@pmbuko Hmmm, version 2.0.1 is supported according to https://github.com/linkedin/Burrow/blob/master/core/internal/helpers/sarama.go#L49. Note that there was no protocol difference between 2.0.0 and 2.0.1 (both point to sarama.V2_0_0_0) — could you please try running with kafka-version="2.0.0"?
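In burrow.toml terms, the suggestion above would look roughly like this. The profile name and client-id are illustrative placeholders; only kafka-version="2.0.0" comes from the message:

```toml
[client-profile.myprofile]
client-id="burrow-lagchecker"
# 2.0.1 speaks the same protocol as 2.0.0, so pin the version Burrow accepts
kafka-version="2.0.0"
```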
Peter Bukowinski
@pmbuko
I tried that
@bai Sorry, premature return and no edit option on my mobile interface. I tried 2.0.0 after it failed with 2.0.1 and saw the equivalent error.
alian
@lianfulei
Hello, I am from China.
Ana Czarnitzki
@czarnia
Hello! I've been trying to use Burrow, but when I do a GET for a consumer returned by GET /v3/kafka/consumer, I get a "not found" error.
Peter Bukowinski
@pmbuko
@czarnia Is the cluster listed at /v3/kafka? The correct path for the consumer list is /v3/kafka/[cluster_name]/consumer.
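As a sketch of the path shape described above (the host and the cluster name "local" are placeholders, not from the original messages):

```python
# Build the Burrow v3 endpoints described above.
def cluster_list_url(host: str) -> str:
    # Lists the clusters Burrow knows about
    return f"http://{host}/v3/kafka"

def consumer_list_url(host: str, cluster: str) -> str:
    # [cluster_name] must be one of the names returned by /v3/kafka
    return f"http://{host}/v3/kafka/{cluster}/consumer"

print(consumer_list_url("localhost:8000", "local"))
# → http://localhost:8000/v3/kafka/local/consumer
```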
alian
@lianfulei
How do I send mail to two mailboxes?

to: The email address to send messages to.

[notifier.default]
to="lilei@gmail.com,hanmeimei@outlook.com"

Can I write it like this?

Can it email more people?
alian
@lianfulei
My configuration does not work. Do I need to change the source code?
[notifier.default]
to="lilei@gmail.com,hanmeimei@outlook.com"
It only gets sent to the first mailbox.
@pmbuko
Pranav Honrao
@pranavh1991_twitter
Does Burrow help with monitoring Kafka offsets stored in Cassandra?
George Smith
@GeoSmith
No, Burrow uses the internal consumer offsets topic that Kafka manages.
Unless things have changed.
abajaj25
@abajaj25
I just tried building the Docker image with 'docker-compose', and I keep seeing the error 'config file not found /etc/burrow'.
Has anyone seen this before?
linkedin/Burrow#474 is similar to this.
Is there a workaround?
Peter Bukowinski
@pmbuko
Is there a timeline on adding support for Kafka 2.3.x? I recently upgraded a cluster, and Burrow is now reporting "failed to decode" errors.
{"level":"warn","ts":1569348946.1901448,"msg":"failed to decode","type":"module","coordinator":"consumer","class":"kafka","name":"kafka_ash","offset_topic":"__consumer_offsets","offset_partition":4,"offset_offset":9603105057,"message_type":"offset","group":"core-fb","topic":"activity","partition":235,"reason":"value version","version":3}
Basically, I jumped the gun on upgrading and forgot to test burrow against the new kafka version...
I’m using Burrow 1.2.0
Peter Bukowinski
@pmbuko
Reverting the log.message.format.version to 2.0 wasn’t enough to fix it. I had to also revert inter.broker.protocol.version to 2.0. (2.1 would probably have also been fine, but reverting to 2.0 will save me from having to rolling restart many more clusters.)
@toddpalino ^
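For the record, the broker-side rollback described above corresponds to these server.properties settings (the values come from the message itself; 2.1 reportedly would also have worked):

```
# Kafka broker server.properties: revert both settings so Burrow 1.2.0
# can decode __consumer_offsets again
log.message.format.version=2.0
inter.broker.protocol.version=2.0
```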
Peter Bukowinski
@pmbuko
Oops. Should have tagged @bai ^
Also, I’m still not able to set kafka-version to 2.1, let alone 2.0.
{"level":"panic","ts":1569366615.079633,"msg":"Unknown Kafka Version: 2.1.0"}
panic: Unknown Kafka Version: 2.1.0 [recovered]
    panic: Unknown Kafka Version: 2.1.0 [recovered]
    panic: Unknown Kafka Version: 2.1.0
Peter Bukowinski
@pmbuko
Upon further testing, burrow 1.2.2 works with kafka 2.3.1 if I set the kafka-version client-profile parameter to 2.1.0. Crisis averted.
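In config terms, the workaround above is just the kafka-version setting in the client profile; the profile name here is a placeholder:

```toml
[client-profile.myprofile]
# Burrow 1.2.2 against Kafka 2.3.1 brokers: pin the client to 2.1.0
kafka-version="2.1.0"
```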
George Smith
@GeoSmith
@pmbuko Curious, are you using any type of visual frontend for your burrow installation? Maybe to view partitions that are in error?
Peter Bukowinski
@pmbuko
@GeoSmith The only visualization I have for Burrow is Grafana. I export the metrics Burrow generates to my metrics backend once per minute. The issue I was encountering is that the __consumer_offsets topic message format changed between Kafka 2.0 and 2.1. I had it set incorrectly after I upgraded my clusters, so Burrow couldn’t read the consumer offsets. Fortunately, the offset message format hasn’t changed since 2.1.