José Carlos Chávez
11 replies
zipkin-server uses log4j2.
Jiwoong Park
Hi, I deploy Zipkin with k8s.
When I use Kafka with JAAS, an exception is thrown like this.
Without the JAVA_OPTS env it works well.
What can I do?
4 replies
José Carlos Chávez
@/all Zipkin 2.23.14 is out now, including the patches for log4j security fix. Thanks @llinder for the persistence.
Bas van Beek

sudo docker run -d -p 9411:9411 \
  -e ES_HOSTS= \
  -e ES_INDEX=tracing \
  -e STORAGE_TYPE=elasticsearch \
  --name=zipkin_test openzipkin/zipkin

The container does not start; it stays in "starting" mode in Portainer, then it stops. Are the env vars ES_HOSTS and ES_INDEX supported by default in the image?

2 replies
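For reference, both ES_HOSTS and ES_INDEX are environment variables read by the Elasticsearch storage module, so the empty ES_HOSTS above is a likely cause of the startup failure. A minimal sketch of running the image against Elasticsearch — the hostname is a placeholder, not from the original message:

```shell
# Hypothetical endpoint; replace with a reachable Elasticsearch node.
# STORAGE_TYPE selects the backend; ES_HOSTS must be non-empty.
sudo docker run -d -p 9411:9411 \
  -e STORAGE_TYPE=elasticsearch \
  -e ES_HOSTS=http://elasticsearch:9200 \
  -e ES_INDEX=tracing \
  --name=zipkin_test openzipkin/zipkin

# When a container "stays in starting then stops", the logs usually say why:
sudo docker logs zipkin_test
```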
José Carlos Chávez
Zipkin 2.23.15 is out including log4j 2.17.
Israel Perales
@jcchavezs thanks man, sorry for my bad English, but does an official post or announcement with this information exist?
Hello all. In my tests (v2.23.15), the ES_INDEX_SHARDS env var only takes effect when ES_INDEX is set along with it; ES_INDEX_SHARDS alone does not change the number of shards. Is it meant to be like this?
Christopher Cox
I'm trying to use zipkin-cassandra with podman-compose on CentOS 8. The docker-compose-dependencies.yml wants to set up a cron job, but it seems to assume we've somehow got cron in our container already. Is there an easy way to "fix" this podman-compose-wise? The error we get: podman start -a dependencies
Error: unable to start container 8ba0b42f90cb60d00b74fe9d85018b9da16a79fcb3f6bba08b480aaa55ffae20: container_linux.go:380: starting container process caused: exec: "crond -f": executable file not found in $PATH: OCI runtime attempted to invoke a command that was not found
2 replies
Christopher Cox
Answering my own question: for podman-compose, its handling of entrypoint was lacking in CentOS 8 (though maybe fixed in a future version). For now I changed the entrypoint in docker-compose-dependencies.yml to: entrypoint: [ 'crond', '-f' ]
Christopher Cox
Using the zipkin-cassandra docker-compose with the CentOS 8 podman fix, the batch "dependencies" job, while present and running, doesn't seem to actually do anything. Any hints on troubleshooting this?
7 replies
Christopher Cox
I tried exposing 9042:9042 on storage for debugging purposes. Regardless, using cqlsh I cannot connect to Cassandra: Connection error: ('Unable to connect to any servers', {'': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
2 replies
Nikoloz Kvaratskhelia
Hi, I'm trying to set up Zipkin with TLS. I'm running it as a docker-compose service with the following env var:

    - JAVA_OPTS=-Darmeria.ssl.key-store=/zipkin/keystore.p12 -Darmeria.ssl.key-store-password=password -Darmeria.ssl.enabled=true -Darmeria.ports[0].port=9411 -Darmeria.ports[0].protocols[0]=https

The server starts up and the UI works with https, but Zipkin logs Caused by: javax.net.ssl.SSLHandshakeException: error:1000009c:SSL routines:OPENSSL_internal:HTTP_REQUEST every 5 seconds. Any ideas what I'm doing wrong?
9 replies
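For context, the HTTP_REQUEST handshake error usually indicates some client speaking plain HTTP to the TLS port (e.g. a health check still probing http://…:9411). The same Armeria properties from the compose env var can be passed straight to the jar for local testing; a sketch only — the keystore path and password are the placeholders from the message above:

```shell
# Same Armeria TLS flags as the compose env var, for a local run.
# Keystore path and password are placeholder values, not real credentials.
java \
  -Darmeria.ssl.enabled=true \
  -Darmeria.ssl.key-store=/zipkin/keystore.p12 \
  -Darmeria.ssl.key-store-password=password \
  -Darmeria.ports[0].port=9411 \
  -Darmeria.ports[0].protocols[0]=https \
  -jar zipkin.jar
```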
Christopher Cox
I'm trying to get the dependencies jar (cron) to work. I'm using:

    STORAGE_TYPE=elasticsearch ES_HOSTS=http: ES_HTTP_LOGGING=BASIC ES_NODES_WAN_ONLY=true java -jar zipkin-dependencies.jar
3 replies
Error I get follows:
22/01/05 10:26:59 INFO ElasticsearchDependenciesJob: Processing spans from zipkin:span-2022-01-05/span
22/01/05 10:26:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.elasticsearch.hadoop.util.Version.<clinit>(Version.java:66)
        at org.elasticsearch.hadoop.rest.RestService.findPartitions(RestService.java:216)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions$lzycompute(AbstractEsRDD.scala:79)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions(AbstractEsRDD.scala:78)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.getPartitions(AbstractEsRDD.scala:48)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
        at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.immutable.List.map(List.scala:296)
        at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
        at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
        at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
        at org.apache.spark.rdd.RDD.groupBy(RDD.scala:713)
        at org.apache.spark.api.java.JavaRDDLike$class.groupBy(JavaRDDLike.scala:243)
        at org.apache.spark.api.java.AbstractJavaRDDLike.groupBy(JavaRDDLike.scala:45)
        at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:189)
        at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:170)
        at zipkin2.dependencies.ZipkinDependenciesJob.main(ZipkinDependenciesJob.java:79)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field protected byte[] java.io.ByteArrayInputStream.buf accessible: module java.base does not "opens java.io" to unnamed module @5274766b
        at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
        at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
        at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
        at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
        at org.elasticsearch.hadoop.util.ReflectionUtils.makeAccessible(ReflectionUtils.java:70)
        at org.elasticsearch.hadoop.util.IOUtils.<clinit>(IOUtils.java:53)
        ... 28 more
Christopher Cox
So, while I'm using the bundled Java from Elasticsearch for it and Zipkin, I had to run the zipkin-dependencies jar using Java 8. Sigh.
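The InaccessibleObjectException in the stack trace above is the JDK 9+ module system blocking reflective access. As an alternative to dropping back to Java 8, opening the package named in the error to unnamed modules may work; a sketch, untested against this jar, and the ES_HOSTS value here is a made-up placeholder:

```shell
# The exception names the exact package to open:
#   module java.base does not "opens java.io" to unnamed module
# Hypothetical ES_HOSTS value; substitute your real Elasticsearch endpoint.
STORAGE_TYPE=elasticsearch ES_HOSTS=http://localhost:9200 \
  java --add-opens java.base/java.io=ALL-UNNAMED -jar zipkin-dependencies.jar
```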

Hi All,

I’m wondering if anyone has set up distributed tracing using Zipkin in AWS where spans from different AWS accounts running their own services are collected and exported to a common AWS Elasticsearch cluster (or other AWS storage), so that we can view entire traces across all the different services. In terms of architecture design, is it recommended to have one collector for all services, or should each service have its own collector running as a sidecar container/process?

1 reply
There is a log4j vulnerability, CVE-2021-44228. Has this been patched?
1 reply
Hi all, I am trying to solve an issue with a Zipkin server 2.23.16 deployed on EKS: traces from my local Spring Boot 2.6.2 instance arrive, and traces I post with curl from a pod in the same namespace arrive, but traces from the same Spring Boot application deployed on EKS do not. I wonder what I should put in debug (and how) to see what's happening. I have deployed the official Docker image of zipkin-server with no customization, only env variables for the Cassandra storage info. Note: I had to specify zipkin.sender.type: web for the local application to post spans successfully (this change is on EKS too for my tests, but to no avail).
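To check whether spans from the EKS pod even reach the server, one can post a minimal span by hand to the v2 endpoint (the same thing the working curl test above does) and watch the HTTP response. The hostname, IDs, and service name here are made up for illustration:

```shell
# Hypothetical host, IDs, and service name; timestamp is in microseconds.
curl -sv -X POST http://zipkin:9411/api/v2/spans \
  -H 'Content-Type: application/json' \
  -d '[{
        "traceId": "86154a4ba6e91385",
        "id": "86154a4ba6e91385",
        "name": "test-span",
        "timestamp": 1646000000000000,
        "duration": 1000,
        "localEndpoint": {"serviceName": "test-service"}
      }]'
```

A 202 Accepted response means the span was received; any other status should show up in the server logs.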
Enal Mulla
Hi All,
I'm trying to create only one trace for one request (on the client side, sending the request to the server side, and then making other calls). I got all the calls in the same trace, but not in the correct order, as you can see in the photo (the validate API POST request through the publish request needs to be under the run command).
2 replies

Hello All,

is there any way to store/save request and response payloads along with trace and span info in storage like Elasticsearch or Cassandra?

1 reply
Hello everyone,
I have a Spring Boot application with RabbitMQ, and I see that the spans after the consumer starts are all missing.
I'm using the Brave lib; any idea?
Achim Grolimund

Hey all

Is it possible to run a "Zipkin viewer" locally and upload a tracing file that has more than 10k spans? I use SignalFx but I can't show that much data in the UI.

José Carlos Chávez
Zipkin finally got helm charts! https://github.com/openzipkin/zipkin#helm-charts
Balu Alla

Hi everyone, I am trying to deploy Zipkin (version 2.23.16) on Kubernetes [version 1.21] and want to use Elasticsearch as storage. I'm getting the error below. Could someone please help me?

com.linecorp.armeria.server.RequestTimeoutException: null
    at com.linecorp.armeria.server.RequestTimeoutException.get(RequestTimeoutException.java:36) ~[armeria-1.13.4.jar:?]
    at com.linecorp.armeria.internal.common.CancellationScheduler.invokeTask(CancellationScheduler.java:467) ~[armeria-1.13.4.jar:?]
    at com.linecorp.armeria.internal.common.CancellationScheduler.lambda$setTimeoutNanosFromNow0$13(CancellationScheduler.java:293) ~[armeria-1.13.4.jar:?]
    at com.linecorp.armeria.common.RequestContext.lambda$makeContextAware$3(RequestContext.java:547) ~[armeria-1.13.4.jar:?]
    at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384) [netty-transport-classes-epoll-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at java.lang.Thread.run(Unknown Source) [?:?]


2 replies
Hello and happy CNY. Is there any guide or tutorial on how to wrap and use the Zipkin native tracer inside OpenTelemetry?
I just used tracer := zipkinotp.Wrap(nativeTracer) from "github.com/openzipkin-contrib/zipkin-go-opentracing", but the tutorial is not exhaustive on how to continue.
5 replies
Beniamin Kalinowski
Hi, is there any preferred way to deploy Zipkin with a custom authentication layer on top?
Shubhangi Agarwal
Hi, I'm using Zipkin (2.23.2) with Elasticsearch (7.16) as storage, but there is some issue with the tag query in the Zipkin UI. Traces don't come back when I query for tags that I added, e.g. "tagQuery=event=Configure", but they do for the default tags, e.g. "tagQuery=channel=stateChangeReqReceiver". I am able to find them with an Elasticsearch query. I also tried zipkin-dependencies. tagQuery is really needed for our project. Are there any flag settings which should be enabled?
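One way to narrow down whether this is a UI issue or a storage-query issue is to hit the query API directly; in the HTTP API the equivalent of the UI's tag query is the annotationQuery parameter. The hostname below is a placeholder:

```shell
# Query by the custom tag directly against the API, bypassing the UI.
# lookback is in milliseconds (here: 24h); event%3DConfigure is event=Configure URL-encoded.
curl -s 'http://zipkin:9411/api/v2/traces?annotationQuery=event%3DConfigure&lookback=86400000&limit=10'
```

If this returns the traces, the storage query works and the problem is on the UI side; if not, the issue is in how the tag was indexed.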
Hi, is there any config to turn off hostname validation for the Zipkin → Cassandra storage type? I could enable SSL, but I get a "Hostname Validation failed" error since our test-level JKS doesn't contain SAN entries. The support team at DataStax suggested turning off hostname validation, but I don't see any such config in the Zipkin properties.
Hi all, I am unable to start an app which has Sleuth + Zipkin and IBM MQ (JMS).
Is there any workaround to make Zipkin work for tracing Spring and MQ communication?
Mayank Srivastava
Hi folks. I am trying to troubleshoot problems getting our zipkin-traced (via kamon-zipkin) application (deployed on GKE) to send span data to zipkin-gcp (within the same GKE cluster) on Google Cloud, in order to send it over to Cloud Trace. I can't even get the pod past its health check. When looking up the health on the /health endpoint, all I see is
 "StackdriverStorage{<<GCP-PROJECT>>}" : {
        "status" : "DOWN",
        "details" : {
          "error" : "com.linecorp.armeria.common.grpc.protocol.ArmeriaStatusException: The caller does not have permission"
Hello, does anyone know why the HttpClientInstrumentation for ASP.NET doesn't record any error description even if RecordException is set to true? The error tag is added to the span automatically in case of a BadRequest("error message"), but the error message is missing inside the span and therefore not displayed in the Zipkin UI.
Hi everyone, I would like to build Zipkin v2.23.16 in IDEA, but can anyone help me with the following problem? Description: Missing zipkin2.proto3

Hi all,
here is my Zipkin config for Elasticsearch in a Kubernetes environment.


        - name: STORAGE_TYPE
          value: elasticsearch
        - name: ES_HOSTS
          value: http://<host>:9200
        - name: ES_INDEX
          value: zipkin.uat

When I use the in-memory storage type it works and I can see traces and the dependency tree. With the above config I'm unable to see traces or the dependency tree, and it gives the following error:

at zipkin2.elasticsearch.internal.client.HttpCall.lambda$parseResponse$4(HttpCall.java:265) ~[zipkin-storage-elasticsearch-2.23.16.jar:?]
at zipkin2.elasticsearch.internal.client.HttpCall.parseResponse(HttpCall.java:275) ~[zipkin-storage-elasticsearch-2.23.16.jar:?]
at zipkin2.elasticsearch.internal.client.HttpCall.doExecute(HttpCall.java:166) ~[zipkin-storage-elasticsearch-2.23.16.jar:?]
at zipkin2.Call$Base.execute(Call.java:391) ~[zipkin-2.23.16.jar:?]

Any idea what I'm missing?
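When the Elasticsearch storage type fails like this, checking both sides from inside the cluster usually isolates it; the first hostname below is the same placeholder as in the config above:

```shell
# 1. Is Elasticsearch reachable from the Zipkin pod's network?
curl -s http://<host>:9200/_cluster/health

# 2. Does Zipkin itself consider its storage healthy?
curl -s http://localhost:9411/health
```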

When the Elasticsearch container_name has an underscore, like 'elasticsearch_test', and the Zipkin compose file environment is -e ES_HOSTS=http://elasticsearch_test:9200, Zipkin will WARN: InitialEndpointSupplier : Skipping invalid ES host http://elasticsearch_test:19070, and then the ES connection is refused.
Hi all, I couldn't find services in Zipkin; can anyone help? Here are my Spring Boot project config file and pom dependency:
        name: springboot-wy
    zipkin.base-url: http://localhost:9411/
    sleuth.sampler.probability: 1.0
1 reply
The Zipkin server runs well, and the Spring Boot project log looks like:
2022-03-16 19:08:16.155  INFO [springboot-wy,fa581971f82f11e4,fa581971f82f11e4,true] 15876 --- [io-8089-exec-10] c.e.d.controller.JerseyHelloController   : hello==============
2022-03-16 19:08:16.396  INFO [springboot-wy,33cca1964589dcf4,33cca1964589dcf4,true] 15876 --- [nio-8089-exec-8] c.e.d.controller.JerseyHelloController   : hello==============