Darkknight
@leegin
I have a dockerized Burrow setup to monitor my dev Kafka environment. When I check /v3/kafka/kafka-dev/consumer, it lists only 2 consumers:
{
  "consumers": [
    "adept-tracker-atkins",
    "burrow-kafka-dev"
  ],
  "request": {
    "url": "/v3/kafka/kafka-dev/consumer",
    "host": "a9720a5c007a"
  }
}
But I have 15 consumers in total. All the consumers are committing offsets to Kafka. When I restart the Burrow container, it lists all the consumers, but their status is "NOT FOUND".
My actual list of consumers is as follows.
{"error":false,"message":"consumer list returned","consumers":["console-consumer-22451","adept-egress-processing-generic2","console-consumer-83905","adept-tracker-atkins-dev","connect-ingress-gps-location-s3","connect-egress-gps-location-s3","console-consumer-92438","burrow-kafka-dev","connect-driving-events-egress-log-s3","connect-batch-reject-s3","connect-egress-gps-location-log-s3","connect-reject-gps-location-s3","adept-tracker-atkins","connect-reject-ingress-gps-location-s3","console-consumer-24933","adept-egress-processing-generic1","connect-driving-events-validated-enriched-s3","adept-egress-processing-tomtom","connect-driving-events-ingress-raw-rejects-s3","connect-driving-events-ingress-raw-s3","adept-egress-processing-example","adept-stream-core-processing-drivingevents"],"request":{"url":"/v3/kafka/kafka-dev/consumer","host":"e0275992bad7"}}
I get the above when I try to list the consumers immediately after a restart.
My burrow.toml file is as follows.

[general]
pidfile="burrow.pid"
stdout-logfile="burrow.out"

[logging]
filename="logs/burrow.log"
level="info"
maxsize=100
maxbackups=30
maxage=10
use-localtime=false
use-compression=true

[zookeeper]
servers=[ "zookeeper:2181" ]
timeout=6
root-path="/burrow"

[client-profile.kafka10]
kafka-version="0.10.1.0"
client-id="burrow-client"

[client-profile.zk-kafka10]
kafka-version="0.10.1.0"
client-id="burrow-client"

[cluster.kafka-dev]
class-name="kafka"
servers=[ "kafka:9092" ]
client-profile="kafka10"
topic-refresh=60
offset-refresh=30

[consumer.kafka-dev]
class-name="kafka"
cluster="kafka-dev"
client-profile="kafka10"
servers=[ "kafka:9092" ]
group-blacklist="^(console-consumer-|python-kafka-consumer-).*$"
group-whitelist=""

[consumer.kafka-dev_zk]
class-name="kafka_zk"
cluster="kafka-dev"
client-profile="zk-kafka10"
servers=[ "zookeeper:2181" ]
zookeeper-timeout=30
group-blacklist="^(console-consumer-|python-kafka-consumer-).*$"
group-whitelist=""

[httpserver.default]
address=":8005"

[storage.default]
class-name="inmemory"
workers=20
intervals=15
expire-group=604800
min-distance=1


@toddpalino Any help would be appreciated.
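
For reference, once the container is up, a single group's evaluated status can be queried straight from Burrow's HTTP API. A minimal sketch, assuming port 8005 from the config above is published on localhost, with the group name taken from the earlier list:

# list the groups Burrow currently tracks
curl -s http://localhost:8005/v3/kafka/kafka-dev/consumer
# fetch the evaluated status of one group
curl -s http://localhost:8005/v3/kafka/kafka-dev/consumer/burrow-kafka-dev/status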
Darkknight
@leegin
Also I get the below in the logs.
/logs # less burrow.log | grep "node does not exist"
{"level":"error","ts":1572966499.745737,"msg":"failed to list groups","type":"module","coordinator":"consumer","class":"kafka_zk","name":"kafka-dev_zk","error":"zk: node does not exist"}
{"level":"error","ts":1572967706.7760856,"msg":"failed to list groups","type":"module","coordinator":"consumer","class":"kafka_zk","name":"kafka-dev_zk","error":"zk: node does not exist"}/logs # less burrow.log | grep "node does not exist"
{"level":"error","ts":1572966499.745737,"msg":"failed to list groups","type":"module","coordinator":"consumer","class":"kafka_zk","name":"kafka-dev_zk","error":"zk: node does not exist"}
{"level":"error","ts":1572967706.7760856,"msg":"failed to list groups","type":"module","coordinator":"consumer","class":"kafka_zk","name":"kafka-dev_zk","error":"zk: node does not exist"}
Peter Bukowinski
@pmbuko
@leegin If all your consumers are committing offsets to kafka, then you don’t need the [consumer.kafka-dev_zk] section. That’s why you’re seeing the "class":"kafka_zk","name":"kafka-dev_zk","error":"zk: node does not exist" errors.
I’m not sure why your consumers are disappearing after startup. Are they actively committing offsets?
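
Applied to the config above, that advice amounts to deleting the whole [consumer.kafka-dev_zk] block (the zk-kafka10 client profile then becomes unused too), while keeping [zookeeper] for Burrow's own metadata. A sketch of the only consumer section left:

# kafka-committed offsets are read from the __consumer_offsets topic
[consumer.kafka-dev]
class-name="kafka"
cluster="kafka-dev"
client-profile="kafka10"
servers=[ "kafka:9092" ]
group-blacklist="^(console-consumer-|python-kafka-consumer-).*$"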
Sonu Kr. Meena
@sahilsk
Why is my burrow consumer itself lagging?
image.png
I've used the master branch to build the burrow binary.

The 1.2.2 release of burrow doesn't show this problem. The more messages the other consumers consume, the more this burrow-consumer's lag increases. I don't understand why this is happening.

I thought burrow doesn't create a consumer group.

Sonu Kr. Meena
@sahilsk
[general]
pidfile="burrow.pid"
stdout-logfile="burrow.out"

[logging]
filename="/opt/burrow/logs/burrow.log"
level="info"
maxsize=500
maxbackups=10
maxage=10
use-localtime=false
use-compression=true

[zookeeper]
servers=["xxxx"]
#servers=["zk-1.xxxxx.com", "zk-2.xxxxx.com", "zk-3.xxxxx.com"]
timeout=6
root-path="/xxxx/prod/burrow/notifier"

[httpserver.mylistener]
address=":8080"
timeout=300

[storage.mystorage]
class-name="inmemory"

[evaluator.mystorage]
class-name="caching"

## mycluster1
###################################################

[client-profile.myclient]
kafka-version="1.0.0"

[cluster.mycluster1]
class-name="kafka"
servers=["10.177.23.221:9092","10.177.21.232:9092","10.177.22.12:9092"]
client-profile="myclient"

[consumer.myconsumers]
class-name="kafka"
cluster="mycluster1"
servers=["10.177.23.221:9092","10.177.21.232:9092","10.177.22.12:9092"]
start-latest=false


## mycluster2
###################################################

[client-profile.mycluster2-client]
kafka-version="2.2.1"
client-id="burrow-lagchecker-mycluster2-client"

[cluster.mycluster2]
class-name="kafka"
servers=["kfk-1.xxxxx.com:9092", "kfk-2.xxxxx.com:9092", "kfk-3.xxxxx.com:9092"]
client-profile="mycluster2-client"

[consumer.mycluster2-consumers]
class-name="kafka"
cluster="mycluster2"
expire-group=5
servers=["kfk-1.xxxxx.com:9092", "kfk-2.xxxxx.com:9092", "kfk-3.xxxxx.com:9092"]
client-profile="mycluster2-client"
start-latest=false
SahilAggarwalG
@SahilAggarwalG
{"level":"warn","ts":1578400037.2808056,"msg":"failed to decode","type":"module","coordinator":"consumer","class":"kafka","name":"local","offset_topic":"__consumer_offsets","offset_partition":45,"offset_offset":55142,"message_type":"metadata","group":"INTERACTION_TO_CONVERSATION_STREAM_PM_GROUP","reason":"value version","version":2}
I'm getting the above error.
kafka version 2.1.0 and burrow 1.2.2
Sonu Kr. Meena
@sahilsk
yeah, the current release 1.2.2 is not compatible with 2.x yet
build from master, it has 2.x support
but it's a broken build
it may or may not work for you
SahilAggarwalG
@SahilAggarwalG
when will support come? in the documentation it is written that kafka versions up to 2.1.0 are supported
Sonu Kr. Meena
@sahilsk
correct. that's the documentation in the master branch
SahilAggarwalG
@SahilAggarwalG
i read the following document
Peter Bukowinski
@pmbuko
When I attempt to build version 1.3.0 (or 1.3.1), I get the following error:
# github.com/golang/dep/gps
  ../../pkg/mod/github.com/golang/dep@v0.5.4/gps/constraint.go:149:4: undefined: semver.Constraint
Peter Bukowinski
@pmbuko
Never mind. The above issue was my own fault. I'm building for Debian and had to update my build rules file to stop using dep.
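
For anyone hitting the same dep failure: the v1.3.x tags carry a go.mod, so a module-aware build sidesteps dep entirely. A sketch, assuming Go 1.12+ with modules enabled:

# clone and build the tagged release without dep
git clone https://github.com/linkedin/Burrow.git
cd Burrow
git checkout v1.3.1
go build -o burrow .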
Vadim
@vadeg

Hi everyone,

I would like to use Burrow to collect different information from Amazon MSK clusters (https://aws.amazon.com/msk/). For that I need to fetch some configuration from AWS, for example the Zookeeper cluster hosts and the Kafka broker hosts. I can get this configuration and provide it to Burrow using one of the viper features (config file, env vars), but I have to do this every time the configuration changes. Honestly, that will happen very rarely.
I would like to hear your opinion on the following: what do you think about including AWS MSK support in Burrow? If you think it makes sense, what do you think would be the optimal way to implement it?

Peter Bukowinski
@pmbuko
@vadeg Burrow doesn’t need to access the MSK zookeepers. The versions of kafka available in MSK store offsets in the __consumer_offsets topic. Burrow does need a zookeeper cluster, but that is for burrow’s own metadata. https://github.com/linkedin/Burrow/wiki/Configuration#zookeeper
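As a concrete starting point, a minimal sketch of such a setup; every hostname below is hypothetical, the broker list would come from the MSK bootstrap-brokers output, and the [zookeeper] section points at a self-managed ensemble used only for Burrow's metadata:

[zookeeper]
servers=[ "my-zk-1:2181", "my-zk-2:2181", "my-zk-3:2181" ]
root-path="/burrow"

[client-profile.msk]
kafka-version="2.2.1"

[cluster.msk-prod]
class-name="kafka"
servers=[ "b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9092", "b-2.mycluster.abc123.kafka.us-east-1.amazonaws.com:9092" ]
client-profile="msk"

[consumer.msk-prod]
class-name="kafka"
cluster="msk-prod"
servers=[ "b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9092", "b-2.mycluster.abc123.kafka.us-east-1.amazonaws.com:9092" ]
client-profile="msk"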
Suramya Shah
@ss22ever

Hi, I am trying to use Burrow with Amazon MSK version 1.1.1, where only TLS-encrypted traffic between clients and brokers is allowed.
I tried using the following properties:

[tls.tlsonly]
certfile="/opt/ssl/kafkakeystore/client.crt"
keyfile="/opt/ssl/kafkakeystore/client.key"
cafile="/opt/ssl/kafkakeystore/ca.crt"
noverify=false

But I am getting the error:
{"level":"error","ts":1581337063.4007235,"msg":"failed to start client","type":"module","coordinator":"cluster","class":"kafka","name":"pre-glp1-glp2-int-msk3","error":"kafka: client has run out of available brokers to talk to (Is your cluster reachable?)"}
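
Two things worth checking here, as a sketch rather than a confirmed fix: a [tls.*] profile has no effect until a client profile references it through its tls key, and MSK's TLS listener typically runs on port 9094, not 9092. The broker hostname below is hypothetical:

[tls.tlsonly]
certfile="/opt/ssl/kafkakeystore/client.crt"
keyfile="/opt/ssl/kafkakeystore/client.key"
cafile="/opt/ssl/kafkakeystore/ca.crt"
noverify=false

# hypothetical client profile wiring the TLS settings in
[client-profile.msk-tls]
kafka-version="1.1.0"
tls="tlsonly"

[cluster.pre-glp1-glp2-int-msk3]
class-name="kafka"
servers=[ "b-1.example.kafka.us-east-1.amazonaws.com:9094" ]
client-profile="msk-tls"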

KyleBoyer
@KyleBoyer
If a topic is removed from a consumer group’s subscribed list, when will Burrow remove that topic from the consumer group’s results?
Ivan Majnaric
@IvanMajnari_twitter
Hello :=)
Sorry if this is a silly question, but is there any implementation of metrics being sent to prometheus from Burrow?
KyleBoyer
@KyleBoyer
Typo^
adosapati
@adosapati
Hey, is anyone monitoring storm consumers using burrow?
we use version 1.2.0 of burrow and storm consumers don't appear there
Ivan Majnaric
@IvanMajnari_twitter
@KyleBoyer Thanks! Will try it tomorrow! Do you happen to know how reliable that exporter really is? Have you tried it? :)
KyleBoyer
@KyleBoyer
Yup, I have/currently use it! Works really well!
frankhenderson
@frankhenderson
hi everyone :) I've got Burrow 1.3.2 and generalmills/burrowui running, and I'm wondering about the one consumer that seems unlike all the others ... its lag is very high and its topic is __consumer_offsets. Is my setup missing something? Or is it normal that the total lag for this consumer keeps increasing and it has an ERR state?
frankhenderson
@frankhenderson
This lagging consumer's name seems related to a section of my config which I called [consumer.myconsumers] ... the consumer's name is burrow-myconsumers
And that section of my config looks like this:
[consumer.myconsumers]
class-name="kafka"
cluster="mycluster"
servers=["host1:6667","host2:6667","host3:6667"]
frankhenderson
@frankhenderson
All of my other consumers have a total lag of 0. As for the kafka version, I have this file: kafka_2.12-2.4.0.jar
frankhenderson
@frankhenderson
I've seen my other consumers occasionally have total lag > 0 and then later they recover back to total lag == 0 ... but the lag for this one special consumer that refers to topic __consumer_offsets keeps going up. It's over 14 million now and in ERR state.
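
To see where that lag is being attributed partition by partition, the lag endpoint can help. A sketch, with the port hypothetical since the httpserver section isn't shown, and the cluster/group names taken from the config above:

curl -s http://localhost:8000/v3/kafka/mycluster/consumer/burrow-myconsumers/lag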