Martin Tacit
Hi all, can somebody give me a hint on how to manage span creation in microservices with a reactive nature, when we don't have the opportunity to control all the threads? Or is there any way to aggregate all the spans?
David Sullivan
Hey, has anybody tried running Zipkin with OpenSearch (AWS' Elasticsearch "fork")? There seems to be an issue with the version number:
 "status" : "DOWN",
 "zipkin" : {
   "status" : "DOWN",
   "details" : {
     "ElasticsearchStorage{initialEndpoints=https://xyz.es.amazonaws.com, index=zipkin}" : {
       "status" : "DOWN",
       "details" : {
         "error" : "java.lang.IllegalArgumentException: Elasticsearch versions 5-7.x are supported, was: 1.0"
Andriy Redko
@Le1632 you should use the property in opensearch.yml: compatibility.override_main_response_version
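For reference, a minimal fragment of that setting (the surrounding file contents are an assumption; the key itself is OpenSearch's documented compatibility flag, which makes OpenSearch report an Elasticsearch-compatible 7.x version string to clients like Zipkin):

```yaml
# opensearch.yml (fragment, assumed placement)
# Report an ES-7.x-compatible main response version so version checks pass
compatibility.override_main_response_version: true
```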
Nitish Goyal
Hi, I am using Zipkin with Cassandra as the backend. Everything had been working fine for months, but now we have started seeing this error frequently in the logs:
java.util.concurrent.CompletionException: com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_242]
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_242]
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) ~[?:1.8.0_242]
at java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:628) ~[?:1.8.0_242]
at java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:1996) ~[?:1.8.0_242]
at java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:110) ~[?:1.8.0_242]
at zipkin2.storage.cassandra.internal.call.ResultSetFutureCall.doEnqueue(ResultSetFutureCall.java:91) ~[zipkin-storage-cassandra-2.23.3-8.jar!/:2.23.3-8]
Did anyone encounter this? If yes, what could be the possible reason?
One possible reason is an increase in load, but that should settle when the load decreases during off-peak hours. Instead, it's a continuous error and it's impacting our ingestion into Cassandra.
Hello, is it possible to disable the truncation of service names on the dependencies UI page?
Hello, I'm new to Zipkin and I need to deploy it with a MySQL database for storage on an OpenShift cluster. Is there any Helm chart or Docker image used for such an implementation? Thanks!
Hi, I am unable to see all the traces in Zipkin. I am using spring-boot-dependencies 2.2.5.RELEASE. I tried setting sampler.probability and sampler.percentage to 1.0, and also tried adding the AlwaysSampler() bean. None of it worked. Does anyone know why?
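For comparison, with Spring Cloud Sleuth (the tracing layer that matches Spring Boot 2.2.x — an assumption about the setup above), always-on sampling is usually a single property rather than a bean:

```properties
# application.properties (Sleuth 2.x) — sample 100% of traces
spring.sleuth.sampler.probability=1.0
```

Note that export still depends on a reporter (e.g. spring.zipkin.base-url) being configured, so a 1.0 sampling rate alone does not guarantee traces reach the Zipkin server.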
What is the difference between a noopSpan and a LazySpan?

Hello people !

We're going to use Zipkin, and we're wondering which backend to choose between Cassandra and Elasticsearch. I couldn't find much detailed information on Zipkin's usage; are there guidelines somewhere to help us with this choice? Pros and cons, performance, or whatever.
Cheers!

How come an autowired brave Tracer is null ?
Priya Sharma
I am facing issues with the Cassandra 3 and Zipkin setup when SSL is enabled in the config. It doesn't accept the javax.net.truststore and javax.net.keystore variables. Any help with the Zipkin -> Cassandra SSL setup? It works well without SSL.
@nitishgoyal13: Did you set up SSL communication with Cassandra?
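If it helps, the standard JVM SSL system property names differ slightly from the ones quoted above. A sketch of passing them to the Zipkin server JVM (paths and passwords are placeholders, and using JAVA_OPTS as the carrier is an assumption about the deployment):

```shell
# Hypothetical paths/passwords; these are the standard JVM SSL
# system properties (javax.net.ssl.*), passed via JAVA_OPTS.
export JAVA_OPTS="-Djavax.net.ssl.trustStore=/zipkin/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.keyStore=/zipkin/keystore.jks -Djavax.net.ssl.keyStorePassword=changeit"
echo "$JAVA_OPTS"
```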
Hyounggyu Choi
Hello, I am sorry to keep asking for a review of my PR (openzipkin/zipkin#3384), but I haven't been able to get it to the next step (or make any meaningful progress) for a month. (I only got one LGTM from the reviewers. Thanks, Tommy Ludwig.) Could someone help me?
Gabriel Fairbanks

Hello, apologies if it's a stupid question, but we've instrumented a chain of services such as A -> POST -> B -> POST -> C. My tracing JSON contains details for this chained call, but on the Zipkin UI I can only see the calls from A -> B.

Would it be correct to expect the call B -> C to be shown on the same Zipkin screen, as a child node of service A?

Sharipov Shohruh
Hello everyone, I cannot load traces from the Zipkin UI; I'm getting a connection refused error. Any suggestions?
Hello, I'm new to Zipkin and blown away by the Camel integration. But I'm not finding any information on how to also attach MySQL (not as storage). Anybody have a pointer?
Hi guys, can we use Kafka as storage for traces, and can Zipkin query Kafka? Has anyone implemented this? If so, what are the pros and cons?
Terence Marks
Hey, is it possible to use the ui to search for spans by trace id?
Jonatan Ivanov
There is a dedicated input field for that in the top right corner.
Or you can put your trace ID into the URL :)
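For example, the UI's trace-detail route follows this pattern (host, port, and the trace ID are placeholders; the path assumes the default Zipkin Lens UI):

```
http://localhost:9411/zipkin/traces/{traceId}
```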
José Carlos Chávez
zipkin-server uses log4j2.
Jiwoong Park
Hi, I deployed Zipkin with k8s.
When I use Kafka with JAAS, an exception is thrown like this.
Without the JAVA_OPT env it works well.
What can I do?
José Carlos Chávez
@/all Zipkin 2.23.14 is out now, including the patches for the log4j security fix. Thanks @llinder for the persistence.
Bas van Beek

sudo docker run -d -p 9411:9411 \
  -e ES_HOSTS= \
  -e ES_INDEX=tracing \
  -e STORAGE_TYPE=elasticsearch \
  --name=zipkin_test openzipkin/zipkin

The container does not start; it stays in "starting" mode in Portainer, then it stops. Are the env vars ES_HOSTS and ES_INDEX supported by default in the image?

José Carlos Chávez
Zipkin 2.23.15 is out including log4j 2.17.
Israel Perales
@jcchavezs thanks man, sorry for my bad English, but is there an official post or something with this information?
Hello all. In my tests (v2.23.15), the ES_INDEX_SHARDS env only takes effect when ES_INDEX is set along with it. ES_INDEX_SHARDS alone does not change the number of shards. Is it meant to be like this?
Christopher Cox
I'm trying to use zipkin-cassandra with podman-compose on CentOS 8. The docker-compose-dependencies.yml wants to set up a cron job, but maybe it assumes we've somehow got cron in our container already. Is there an easy way to "fix" this podman-compose-wise? The error we get from podman start -a dependencies:
Error: unable to start container 8ba0b42f90cb60d00b74fe9d85018b9da16a79fcb3f6bba08b480aaa55ffae20: container_linux.go:380: starting container process caused: exec: "crond -f": executable file not found in $PATH: OCI runtime attempted to invoke a command that was not found
Christopher Cox
Answering my own question: podman-compose's handling of entrypoint was lacking in CentOS 8 (though maybe fixed in a later version). For now I changed the entrypoint in docker-compose-dependencies.yml to: entrypoint: [ 'crond', '-f' ]
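As a sketch, the override described above sits under the service in docker-compose-dependencies.yml like this (the service name "dependencies" matches the error output; all other fields are elided). Using the exec-form list keeps podman-compose from treating "crond -f" as a single binary name:

```yaml
# docker-compose-dependencies.yml (fragment, assumed surrounding structure)
services:
  dependencies:
    entrypoint: [ 'crond', '-f' ]
```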
Christopher Cox
Using the Zipkin Cassandra docker-compose with the CentOS 8 podman fix, the batch "dependencies" job, while present and running, doesn't seem to actually do anything. Any hints on troubleshooting this?
Christopher Cox
I tried enabling 9042:9042 on storage for debugging purposes. Regardless, using cqlsh I cannot connect to Cassandra: Connection error: ('Unable to connect to any servers', {'': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
Nikoloz Kvaratskhelia
Hi, I'm trying to set up Zipkin with TLS. I'm running it as a docker-compose service with the following env var:
- JAVA_OPTS=-Darmeria.ssl.key-store=/zipkin/keystore.p12 -Darmeria.ssl.key-store-password=password -Darmeria.ssl.enabled=true -Darmeria.ports[0].port=9411 -Darmeria.ports[0].protocols[0]=https
The server starts up and the UI works with https, but Zipkin logs Caused by: javax.net.ssl.SSLHandshakeException: error:1000009c:SSL routines:OPENSSL_internal:HTTP_REQUEST every 5 seconds. Any ideas what I'm doing wrong?
Christopher Cox
I'm trying to get the dependencies jar (cron) to work. I'm using:
STORAGE_TYPE=elasticsearch ES_HOSTS=http: ES_HTTP_LOGGING=BASIC ES_NODES_WAN_ONLY=true java -jar zipkin-dependencies.jar
Error I get follows:
22/01/05 10:26:59 INFO ElasticsearchDependenciesJob: Processing spans from zipkin:span-2022-01-05/span
22/01/05 10:26:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.elasticsearch.hadoop.util.Version.<clinit>(Version.java:66)
        at org.elasticsearch.hadoop.rest.RestService.findPartitions(RestService.java:216)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions$lzycompute(AbstractEsRDD.scala:79)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions(AbstractEsRDD.scala:78)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.getPartitions(AbstractEsRDD.scala:48)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
        at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.immutable.List.map(List.scala:296)
        at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
        at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
        at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
        at org.apache.spark.rdd.RDD.groupBy(RDD.scala:713)
        at org.apache.spark.api.java.JavaRDDLike$class.groupBy(JavaRDDLike.scala:243)
        at org.apache.spark.api.java.AbstractJavaRDDLike.groupBy(JavaRDDLike.scala:45)
        at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:189)
        at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:170)
        at zipkin2.dependencies.ZipkinDependenciesJob.main(ZipkinDependenciesJob.java:79)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field protected byte[] java.io.ByteArrayInputStream.buf accessible: module java.base does not "opens java.io" to unnamed module @5274766b
        at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
        at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
        at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
        at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
        at org.elasticsearch.hadoop.util.ReflectionUtils.makeAccessible(ReflectionUtils.java:70)
        at org.elasticsearch.hadoop.util.IOUtils.<clinit>(IOUtils.java:53)
        ... 28 more