nicosmaris
@nicosmaris
hi all, is there an official docker image for Burrow?
Todd Palino
@toddpalino
That one gets compiled automatically when there is a release. However, I have not functionally tested it to make sure it's 100%. There's also a separate Dockerfile in the project root that you can use to build your own.
Aleh Danilovich
@MCPDanilovich
Hello everyone. Guys, can someone explain what the kafka_burrow_total_lag metric means?
ys-achinta
@ys-achinta

What is the difference between the storage module and the evaluator caching module in Burrow?

I mean, the default expiry value in the storage module is 7 days.
What difference would it make if we set the expire-cache config to 60 seconds?

class-name="inmemory"
expire-group=604800
class-name="caching"
expire-cache=60

@toddpalino

Todd Palino
@toddpalino
@MCPDanilovich That's the sum of all partition lag for a given consumer group
@ys-achinta The storage module stores raw group and offset information. The caching module uses that data to calculate consumer status. So the storage module expiration is how long a group that has not committed offsets is kept around, while the caching module expiration is how long a status calculation is considered fresh
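The two expirations above live in separate config sections; a minimal sketch (section names `[storage.default]` and `[evaluator.default]` are assumptions, values are the ones quoted earlier in the thread):

```toml
# Storage module: keeps raw group and offset data. Groups idle
# longer than expire-group (seconds) are dropped. 604800 s = 7 days.
[storage.default]
class-name="inmemory"
expire-group=604800

# Evaluator caching module: a computed consumer status is reused
# until expire-cache (seconds) elapses, then recalculated on demand.
[evaluator.default]
class-name="caching"
expire-cache=60
```

So a 60-second expire-cache only bounds how stale a status answer can be; it does not affect how long offset history is retained.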
Aleh Danilovich
@MCPDanilovich
@toddpalino does this mean that kafka_burrow_total_lag = sum(kafka_burrow_partition_lag)? And the total lag for a topic would be the sum of the lags of all partitions in the topic?
Todd Palino
@toddpalino
So I'm not sure exactly what you're looking at, as burrow does not put out metrics named that way, and it doesn't assemble anything by topic. I suggest you look at https://github.com/linkedin/Burrow/wiki/http-request-consumer-group-status for the detail on the status response for field definitions.
Aleh Danilovich
@MCPDanilovich
Sorry, my mistake. Those metrics were from burrow-exporter, which is not your project. :)
ys-achinta
@ys-achinta

The Burrow consumer (which consumes from the __consumer_offsets topic) doesn't show up in the list of consumers.
Is there any way to see it?

The lag seems to be constantly growing. Is it possible that the Burrow consumer for one cluster has stopped, and that's why there is high lag?

How can I check this?

@toddpalino

Todd Palino
@toddpalino
Burrow doesn't use a consumer group, so it wouldn't show up. However, you are correct that Burrow itself can be lagging. This is something we've noticed internally, especially when it's throttled on CPU
It is possible for a simple consumer to determine its own lag, however, as the consume response contains the high water mark for the partition. This is something that should be brought in along with other metrics for Burrow's performance (such as how long storage and evaluation operations are taking, HTTP statistics, counters on groups, topics, partitions, offsets, etc.)
It will also require that Sarama exposes the information to us. That may require a change to that client.
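The self-lag idea above can be sketched in Go (Burrow's language). The helper and its values are hypothetical; a real implementation would read the high water mark from the Kafka fetch response exposed by the client library:

```go
package main

import "fmt"

// lag estimates a consumer's own lag on one partition: the fetch
// response's high water mark minus the next offset to be consumed.
// Hypothetical helper, not Burrow's actual code.
func lag(highWaterMark, nextOffset int64) int64 {
	if nextOffset > highWaterMark {
		return 0 // offsets observed out of order; clamp to zero
	}
	return highWaterMark - nextOffset
}

func main() {
	// Hypothetical values: broker reports high water mark 223053,
	// the consumer's next offset to read is 222735.
	fmt.Println(lag(223053, 222735)) // prints 318
}
```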
ys-achinta
@ys-achinta

@toddpalino we restarted the Burrow process.

If Burrow consumer lag were the cause, then a restart should fix the issue.
Even then, we saw much older values being reported as the offsets:

"start": { "offset": 222725, "timestamp": 1535353022126, "lag": 328 }, "end": { "offset": 222735, "timestamp": 1535353473415, "lag": 318 }, "current_lag": 665, "complete": 1 }, "totallag": 33186

could there be any other reason for this?

Todd Palino
@toddpalino
By default, Burrow consumes __consumer_offsets from the beginning. If you want to change this behavior, and always start from the tail (throwing away any old offsets on startup), you need to set the start-latest configuration to true - https://github.com/linkedin/Burrow/wiki/Consumer-Kafka
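That setting lives in the per-cluster consumer section of burrow.toml; a minimal sketch (the cluster name "local" is hypothetical — see the Consumer-Kafka wiki page linked above for the full option list):

```toml
[consumer.local]
class-name="kafka"
# Start reading __consumer_offsets from the tail on startup,
# discarding any old committed offsets already in the topic.
start-latest=true
```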
ericfgarcia
@ericfgarcia
Howdy
Quick question. I saw this ticket: linkedin/Burrow#440 and was wondering if there is a timeline for Kafka 2.0 support?
Todd Palino
@toddpalino
As noted in the ticket, it's really a factor of when Sarama, the underlying Kafka library, supports 2.0. Right now it looks like some work has been done in their master branch, but it has not made it to release yet.
ericfgarcia
@ericfgarcia
Thanks for the clarification @toddpalino. I'll keep an eye on that project
ericfgarcia
@ericfgarcia
@toddpalino I noticed this PR linkedin/Burrow#447 and was wondering what the timeline would be to get it merged into master? Also, is there any testing required for this and do you need some volunteers for that?
Todd Palino
@toddpalino
I'm going to ask @bai to take a look at it. The testing that's already in CI should be sufficient for this. I've had to step back from some of the active work a little bit, but @ratishr from the Kafka SRE team here is taking lead on it.
ericfgarcia
@ericfgarcia
@toddpalino @bai Appreciate you guys looking at this and getting it merged to master.
SanketMore
@SanketMore
Hi, I'm using Burrow and changed the HTTP path to start with /burrow/v3 instead of /v3.
https://github.com/linkedin/Burrow/blob/master/core/internal/httpserver/coordinator.go#L127
Is this something others want? I can submit a PR for this change
Karthick Nagarajan
@nkchenni
Hi @toddpalino & others, I've been running Burrow for 3 years without an issue. Now, after a service restart, I'm unable to get the actual consumer lists from Burrow. All along the broker was running with log.message.format.version=0.9.0, and now log.message.format.version=0.11.0. Could this cause an issue in Burrow?
darkknight100
@darkknight100

Hello, I was trying to install Burrow on an AWS EC2 instance, but when I run "go get github.com/linkedin/Burrow", this error comes up:

In file included from /usr/include/features.h:447:0,
                 from /usr/include/bits/libc-header-start.h:33,
                 from /usr/include/stdint.h:26,
                 from /usr/lib/gcc/x86_64-redhat-linux/7/include/stdint.h:9,
                 from src/github.com/DataDog/zstd/zstd.go:6:
/usr/include/gnu/stubs.h:7:11: fatal error: gnu/stubs-32.h: No such file or directory
 # include <gnu/stubs-32.h>
           ^~~~~~~~~~~~~~~~
compilation terminated.

I have googled it and installed all the dependencies, but it still gives me the same error.

Todd Palino
@toddpalino
@darkknight100 - You're getting a failure underneath in DataDog? That's not a dependency of Burrow, so I have no idea what you might be running into there. That looks like something underlying in go
@nkchenni the log message format on the brokers should not affect Burrow at all. Are you running the most recent version of Burrow?
Todd Palino
@toddpalino
So, I'm realizing that when I transferred out of full time Kafka work within LinkedIn, things took a little bit of a stagnant turn. I apologize for this - it was not the intention. I'm checking into how things can be properly supported moving forwards (by me or someone else)
And I really appreciate @bai for being there and handling a lot of stuff in the meantime.
Vlad Gorodetsky
@bai
:+1: I've been trying to be extremely careful (maybe too careful) and ship things that we tested at scale. Other than that, that's quite a mergefest, thanks @toddpalino!
Peter Bukowinski
@pmbuko
Greetings. I tested upgrading to Burrow v1.2.0 last week. When I tried setting kafka-version="2.0.1" in burrow.toml to match my Kafka broker version, Burrow panics with this message: panic: Unknown Kafka Version: 2.0.1
Release highlights specifically mention support for Kafka up to version 2.1.0, so I'm not sure what's wrong.
Vlad Gorodetsky
@bai
@pmbuko Hmmm, version 2.0.1 is supported according to https://github.com/linkedin/Burrow/blob/master/core/internal/helpers/sarama.go#L49. Note that there was no protocol difference between 2.0.0 and 2.0.1 (both point to sarama.V2_0_0_0) — could you please try running with kafka-version="2.0.0"?
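The version is set per cluster in burrow.toml; a minimal sketch (the cluster name "local" is hypothetical), using the 2.0.0 value suggested above:

```toml
[cluster.local]
class-name="kafka"
# 2.0.0 and 2.0.1 map to the same protocol version in Sarama,
# so 2.0.0 covers both broker releases.
kafka-version="2.0.0"
```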
Peter Bukowinski
@pmbuko
I tried that
@bai Sorry, premature return and no edit option on my mobile interface. I tried 2.0.0 after it failed with 2.0.1 and saw the equivalent error.
alian
@lianfulei
Hello, I am from China.
Ana Czarnitzki
@czarnia
Hello! I've been trying to use Burrow, but when I do a GET for a consumer retrieved from GET /v3/kafka/consumer, I get a "not found" error.
Peter Bukowinski
@pmbuko
@czarnia Is the cluster listed at /v3/kafka? The correct path for the consumer list is /v3/kafka/[cluster_name]/consumer.
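A usage sketch of those two requests against a running Burrow (the host, port 8000, and cluster name "local" are assumptions; substitute your own deployment's values):

```shell
BURROW=http://localhost:8000
curl -s "$BURROW/v3/kafka"                  # list the cluster names Burrow knows
curl -s "$BURROW/v3/kafka/local/consumer"   # list consumers in cluster "local"
```

If the first request does not list your cluster, the consumer-list request will return "not found".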
alian
@lianfulei
How can I send mail to two mailboxes?

to — The email address to send messages to.

[notifier.default]
to="lilei@gmail.com,hanmeimei@outlook.com"

Can I write it like this?

Can it email more people?
alian
@lianfulei
My configuration does not work. Do I need to change the source code?
[notifier.default]
to="lilei@gmail.com,hanmeimei@outlook.com"
The mail is only sent to the first mailbox.
@pmbuko
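Given the behavior observed above (only the first address receives mail), one workaround — an assumption, not something confirmed in this thread — is to point `to` at a single distribution-list address that fans out to multiple mailboxes on the mail server side:

```toml
# Sketch: the alias address is hypothetical; the mail system,
# not Burrow, expands it to multiple recipients.
[notifier.default]
to="kafka-alerts@example.com"
```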
Pranav Honrao
@pranavh1991_twitter
Does Burrow help with monitoring Kafka offsets stored in Cassandra?
George Smith
@GeoSmith
No, Burrow uses the internal consumer offset topic Kafka manages
Unless, things have changed