Christopher Cox
I'm trying to use zipkin zipkin-cassandra with podman-compose on CentOS 8. The docker-compose-dependencies.yml wants to set up a cron job, but maybe it assumes we've somehow got cron in our container already. Is there an easy way to "fix" this podman-compose wise? Error we get: podman start -a dependencies
Error: unable to start container 8ba0b42f90cb60d00b74fe9d85018b9da16a79fcb3f6bba08b480aaa55ffae20: container_linux.go:380: starting container process caused: exec: "crond -f": executable file not found in $PATH: OCI runtime attempted to invoke a command that was not found
2 replies
Christopher Cox
Answering my own question: for podman-compose, its handling of entrypoint was lacking in CentOS 8 (though maybe fixed in a future version). For now I changed the entrypoint in docker-compose-dependencies.yml to: entrypoint: [ 'crond', '-f' ]
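As a compose fragment, that fix would look roughly like this (a sketch: the exec-form entrypoint sidesteps podman-compose's string parsing; service and image names are taken from the stock docker-compose-dependencies.yml):

```yaml
dependencies:
  image: openzipkin/zipkin-dependencies
  container_name: dependencies
  entrypoint: ['crond', '-f']
```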
Christopher Cox
Using zipkin cassandra docker-compose with the CentOS 8 podman fix, the batch "dependencies" job, while present and running, doesn't seem to actually do anything. Any hints on troubleshooting this?
7 replies
Christopher Cox
I tried exposing 9042:9042 on storage for debugging purposes. Regardless, using cqlsh I cannot connect to cassandra: Connection error: ('Unable to connect to any servers', {'': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
2 replies
Nikoloz Kvaratskhelia
Hi, I'm trying to set up zipkin with TLS. I'm running it as a docker-compose service with the following env var: - JAVA_OPTS=-Darmeria.ssl.key-store=/zipkin/keystore.p12 -Darmeria.ssl.key-store-password=password -Darmeria.ssl.enabled=true -Darmeria.ports[0].port=9411 -Darmeria.ports[0].protocols[0]=https The server starts up and the UI works with https, but zipkin logs Caused by: javax.net.ssl.SSLHandshakeException: error:1000009c:SSL routines:OPENSSL_internal:HTTP_REQUEST every 5 seconds. Any ideas what I'm doing wrong?
9 replies
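For context, OpenSSL's HTTP_REQUEST alert generally means some client sent plaintext HTTP to a TLS socket, so a log line repeating every few seconds often points at a probe or healthcheck still hitting http://. A hedged compose sketch of the poster's setup with the options split out for readability (the service name and port mapping are assumptions):

```yaml
services:
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"
    environment:
      JAVA_OPTS: >-
        -Darmeria.ssl.key-store=/zipkin/keystore.p12
        -Darmeria.ssl.key-store-password=password
        -Darmeria.ssl.enabled=true
        -Darmeria.ports[0].port=9411
        -Darmeria.ports[0].protocols[0]=https
```

If a healthcheck is configured, it would need to use https (and likely skip certificate verification for a self-signed certificate).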
Christopher Cox
I'm trying to get the dependencies jar (cron) to work. I'm using: STORAGE_TYPE=elasticsearch ES_HOSTS=http: ES_HTTP_LOGGING=BASIC ES_NODES_WAN_ONLY=true java -jar zipkin-dependencies.jar
3 replies
Error I get follows:
22/01/05 10:26:59 INFO ElasticsearchDependenciesJob: Processing spans from zipkin:span-2022-01-05/span
22/01/05 10:26:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.elasticsearch.hadoop.util.Version.<clinit>(Version.java:66)
        at org.elasticsearch.hadoop.rest.RestService.findPartitions(RestService.java:216)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions$lzycompute(AbstractEsRDD.scala:79)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions(AbstractEsRDD.scala:78)
        at org.elasticsearch.spark.rdd.AbstractEsRDD.getPartitions(AbstractEsRDD.scala:48)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
        at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
        at org.apache.spark.Partitioner$$anonfun$4.apply(Partitioner.scala:78)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.immutable.List.map(List.scala:296)
        at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
        at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
        at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:714)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
        at org.apache.spark.rdd.RDD.groupBy(RDD.scala:713)
        at org.apache.spark.api.java.JavaRDDLike$class.groupBy(JavaRDDLike.scala:243)
        at org.apache.spark.api.java.AbstractJavaRDDLike.groupBy(JavaRDDLike.scala:45)
        at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:189)
        at zipkin2.dependencies.elasticsearch.ElasticsearchDependenciesJob.run(ElasticsearchDependenciesJob.java:170)
        at zipkin2.dependencies.ZipkinDependenciesJob.main(ZipkinDependenciesJob.java:79)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make field protected byte[] java.io.ByteArrayInputStream.buf accessible: module java.base does not "opens java.io" to unnamed module @5274766b
        at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
        at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
        at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
        at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
        at org.elasticsearch.hadoop.util.ReflectionUtils.makeAccessible(ReflectionUtils.java:70)
        at org.elasticsearch.hadoop.util.IOUtils.<clinit>(IOUtils.java:53)
        ... 28 more
Christopher Cox
So, while I'm using the bundled Java from Elasticsearch for it and zipkin, I had to run the zipkin-dependencies jar using Java 8. Sigh.
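The InaccessibleObjectException above is the JDK 9+ module system blocking reflective access from elasticsearch-hadoop; running the job on Java 8 avoids it, as noted. On a newer JDK, explicitly opening the module may also work (an untested sketch; the ES host value is a placeholder):

```shell
STORAGE_TYPE=elasticsearch ES_HOSTS=http://elasticsearch:9200 \
  java --add-opens java.base/java.io=ALL-UNNAMED -jar zipkin-dependencies.jar
```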

Hi All,

I’m wondering if anyone has set up distributed tracing using Zipkin in AWS, where spans from services running in different AWS accounts are collected and exported to a common Amazon Elasticsearch cluster (or another AWS storage), so that we can view entire traces across all the different services. And in terms of the architecture design, is it recommended to have one collector for all services, or for each service to have its own collector running as a sidecar container/process?

1 reply
There is a log4j vulnerability issue, CVE-2021-44228. Has this been patched?
1 reply
hi all, I am trying to solve an issue with a zipkin server 2.23.16 deployed on EKS: traces coming from my local Spring Boot 2.6.2 instance arrive, and traces I post with curl from a pod in the same namespace arrive, but traces from the same Spring Boot application deployed on EKS do not. I wonder what I should put in debug (and how) to see what's happening. I have deployed the official Docker image of zipkin-server with no customization, only env variables for the Cassandra storage info. Note: I had to specify zipkin.sender.type: web for the local application to post the spans successfully (this change is on EKS too for my tests, but to no avail).
Enal Mulla
Hi All,
I'm trying to create only one trace for one request (on the client side, sending the request to the server side, which then makes further calls).
I got all the calls in the same trace, but not in the correct order, as you can see in the photo (the validate API POST request through the publish request needs to be under the run command).
2 replies

Hello All,

is there any way to store/save request and response payloads along with trace and span info in storage like Elasticsearch or Cassandra?

1 reply
Hello everyone,
I have a Spring Boot application with RabbitMQ, and I see the spans after the consumer starts are all missing.
I am using the brave lib. Any idea?
Achim Grolimund

Hey all

is it possible to run a "Zipkin Viewer" locally and upload a tracing file that has more than 10k spans? I use SignalFx but I can't show that much data in the UI.

José Carlos Chávez
Zipkin finally got helm charts! https://github.com/openzipkin/zipkin#helm-charts
Balu Alla

Hi everyone, I am trying to deploy zipkin (version 2.23.16) on Kubernetes [1.21 version] and want to use Elasticsearch as storage. I'm getting the below error. Could someone please help me?

com.linecorp.armeria.server.RequestTimeoutException: null
    at com.linecorp.armeria.server.RequestTimeoutException.get(RequestTimeoutException.java:36) ~[armeria-1.13.4.jar:?]
    at com.linecorp.armeria.internal.common.CancellationScheduler.invokeTask(CancellationScheduler.java:467) ~[armeria-1.13.4.jar:?]
    at com.linecorp.armeria.internal.common.CancellationScheduler.lambda$setTimeoutNanosFromNow0$13(CancellationScheduler.java:293) ~[armeria-1.13.4.jar:?]
    at com.linecorp.armeria.common.RequestContext.lambda$makeContextAware$3(RequestContext.java:547) ~[armeria-1.13.4.jar:?]
    at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384) [netty-transport-classes-epoll-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.70.Final.jar:4.1.70.Final]
    at java.lang.Thread.run(Unknown Source) [?:?]


2 replies
Hello and happy CNY, is there any guide or tutorial on how to wrap and use the Zipkin native tracer inside OpenTelemetry?
I just used tracer := zipkinotp.Wrap(nativeTracer) from "github.com/openzipkin-contrib/zipkin-go-opentracing", but the tutorial is not exhaustive on how to continue.
5 replies
Beniamin Kalinowski
Hi, is there any preferred way to deploy Zipkin with a custom authentication layer on top?
Shubhangi Agarwal
Hi, I'm using zipkin (2.23.2) and elasticsearch (7.16) as storage, but there is some issue with the tag query in the zipkin UI. Traces do not come back if I query on tags that I added, e.g. "tagQuery=event=Configure", but they do for the default tags, e.g. "tagQuery=channel=stateChangeReqReceiver". I am able to search with an Elasticsearch query. I also tried zipkin-dependencies. tagQuery is really needed for our project. Are there any flag settings which should be enabled?
hi, is there any config to turn off hostname validation for the Zipkin -> Cassandra storage type? I could enable SSL but I get a Hostname Validation failed error, since our test-level JKS doesn't contain SAN entries. The support team at DataStax said to turn off hostname validation, but I don't see any such config in the Zipkin properties.
Hi All, I am unable to start an app which has Sleuth + Zipkin and IBM MQ (JMS).
Is there any workaround to make zipkin work for tracing Spring and MQ communication?
Mayank Srivastava
Hi Folks. I am trying to troubleshoot problems getting our zipkin-traced (via kamon-zipkin) application (deployed on GKE) to send span data to zipkin-gcp (within the same GKE cluster) on Google Cloud, in order to send them over to Cloud Trace. I can't even get the pod past its health check successfully. When looking up the health on the /health endpoint, all I see is
 "StackdriverStorage{<<GCP-PROJECT>>}" : {
        "status" : "DOWN",
        "details" : {
          "error" : "com.linecorp.armeria.common.grpc.protocol.ArmeriaStatusException: The caller does not have permission"
Hello, does anyone know why the HttpClientInstrumentation for ASP.NET doesn't record any error description even if RecordException is set to true? The error tag is added to the span automatically in case of a BadRequest("error message"), but the error message is missing inside the span and therefore not displayed in the Zipkin UI.
Hi everyone, I would like to build Zipkin v2.23.16 in IDEA, but I'm stuck on the following problem. Description: Missing zipkin2.proto3

Hi all,
here is my zipkin config for Elasticsearch storage in a Kubernetes environment.


        - name: STORAGE_TYPE
          value: elasticsearch
        - name: ES_HOSTS
          value: http://<host>:9200
        - name: ES_INDEX
          value: zipkin.uat

When I use the in-memory storage type it works and I can see traces and the dependency tree. With the above config I'm unable to see traces or the dependency tree, and I get the following error:

at zipkin2.elasticsearch.internal.client.HttpCall.lambda$parseResponse$4(HttpCall.java:265) ~[zipkin-storage-elasticsearch-2.23.16.jar:?]
at zipkin2.elasticsearch.internal.client.HttpCall.parseResponse(HttpCall.java:275) ~[zipkin-storage-elasticsearch-2.23.16.jar:?]
at zipkin2.elasticsearch.internal.client.HttpCall.doExecute(HttpCall.java:166) ~[zipkin-storage-elasticsearch-2.23.16.jar:?]
at zipkin2.Call$Base.execute(Call.java:391) ~[zipkin-2.23.16.jar:?]

Any idea what I'm missing?

When the elasticsearch container_name has an underscore, like 'elasticsearch_test', and the zipkin compose file environment has -e ES_HOSTS=http://elasticsearch_test:9200, zipkin warns: InitialEndpointSupplier : Skipping invalid ES host http://elasticsearch_test:19070, and then the ES connection is refused.
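Underscores are not valid in DNS hostnames, which is likely why the URL parser rejects the host. A sketch of the usual workaround (rename the container to use a hyphen; the image tag here is a placeholder):

```yaml
services:
  elasticsearch-test:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0  # placeholder tag
    container_name: elasticsearch-test
  zipkin:
    image: openzipkin/zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=http://elasticsearch-test:9200
```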
Hi all, I couldn't find services on Zipkin, can anyone help? Here are my Spring Boot project config file and pom dependency:
        name: springboot-wy
    zipkin.base-url: http://localhost:9411/
    sleuth.sampler.probability: 1.0
1 reply
the zipkin server runs well, and the Spring Boot project log was like:
2022-03-16 19:08:16.155  INFO [springboot-wy,fa581971f82f11e4,fa581971f82f11e4,true] 15876 --- [io-8089-exec-10] c.e.d.controller.JerseyHelloController   : hello==============
2022-03-16 19:08:16.396  INFO [springboot-wy,33cca1964589dcf4,33cca1964589dcf4,true] 15876 --- [nio-8089-exec-8] c.e.d.controller.JerseyHelloController   : hello==============
Pierre Mevel
Hi everyone,
I'm trying to add tracing in an asynchronous library using Brave.
I'd like to have, for instance, a thread that starts a trace, and submit various tasks in a ForkJoinPool. When these tasks are executed, they would need to see their parent trace.
I'm having trouble finding the proper way to do this. For now, I'm passing the TraceContext of the parent to its children.
Could you point me towards the "official" way to handle this?
Thanks in advance!

hi everyone, when I run docker-compose for mysql I get this issue:

mysql | Starting MySQL
mysql | /usr/bin/mysqld_safe: line 1: my_print_defaults: not found
mysql | /usr/bin/mysqld_safe: line 1: my_print_defaults: not found
mysql | 220323 10:01:49 mysqld_safe Logging to '/mysql/data/11496caa278f.err'.

Do you have experience with this error?

my docker-compose config:

version: '2'
services:
  mysql:
    image: openzipkin/zipkin-mysql
    container_name: mysql
    volumes:
      - ./database:/mysql/data
    # Uncomment to expose the storage port for testing
    ports:
      - 3306:3306
Hi All, has anyone here used the Zipkin plugin in Telegraf?
José Carlos Chávez
I did
Hello Team,
Is there a way to view logs just like annotations and tags in zipkin? I am using the py_zipkin lib. Appreciate any suggestions.
Esther Chukwunwike

Hi everyone,
I am trying to understand how to send spans to Zipkin from JSX to do some tracing. Below is what I am trying to achieve; it's just pseudocode, but I can't really find documentation on how to do it.

What I am trying to achieve are:

  • Measure page load time (domContentLoaded - pageLoadStart)
  • generate span and send event to Zipkin server on domContentLoaded event

let tracer = new Tracer(...params)
let span = new Span(...)
span.start('page load start')

document.addEventListener('DOMContentLoaded', (event) => {
  span.end('page load end')
})
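Since Zipkin's HTTP collector just accepts JSON, one way to sketch the pseudocode above without committing to a client library is to build a Zipkin v2 span by hand and POST it on DOMContentLoaded. Everything here (service name 'frontend', span name 'page-load', the local Zipkin URL) is an assumption for illustration:

```javascript
// Build a minimal Zipkin v2 span for the page-load interval (names illustrative).
function buildPageLoadSpan(pageLoadStartMs, domContentLoadedMs) {
  // 16 lower-hex chars: a valid Zipkin span/trace id.
  const rand16 = () =>
    Array.from({ length: 16 }, () => Math.floor(Math.random() * 16).toString(16)).join('');
  return {
    traceId: rand16(),
    id: rand16(),
    name: 'page-load',
    timestamp: Math.round(pageLoadStartMs * 1000), // Zipkin expects microseconds
    duration: Math.round((domContentLoadedMs - pageLoadStartMs) * 1000),
    localEndpoint: { serviceName: 'frontend' },
    tags: { event: 'DOMContentLoaded' },
  };
}

// Wire it to the page lifecycle and POST to an (assumed local) Zipkin server.
function reportPageLoad(zipkinBaseUrl, pageLoadStartMs) {
  document.addEventListener('DOMContentLoaded', () => {
    const span = buildPageLoadSpan(pageLoadStartMs, Date.now());
    fetch(`${zipkinBaseUrl}/api/v2/spans`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify([span]), // the collector takes a JSON array of spans
    });
  });
}
```

zipkin-js would do the same thing with proper ID propagation; this is just the wire format made visible.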

@jcchavezs can you please provide me the Telegraf conf for Zipkin?
Hi All,
I am trying to understand what is meant by Depth in the tracing.
Can someone help me, please?
Hi. I now have a confusing problem: when I use elasticsearch.yml to start up zipkin, there is no service dependencies data generated. I checked that the dependencies cron service is up. How can I check whether there is any mistake, and solve it?
the docker-compose.yml config is:

dependencies:
  image: openzipkin/zipkin-dependencies
  container_name: dependencies
  entrypoint: crond -f
  environment:
    # Uncomment to see dependency processing logs
    # Uncomment to adjust memory used by the dependencies job
    # - JAVA_OPTS=-verbose:gc -Xms1G -Xmx1G
    - STORAGE_TYPE=elasticsearch
    - ES_HOSTS=elasticsearch:9200
  depends_on:
    - storage
@unni-cs, as I understand it, zipkin's tracing functionality is based on API invocations.
E.g. microservice A calls B, and B calls C internally.
The depth is 3.
1 reply
@unni-cs, it's just like a tree log.
Hi, new user looking to install zipkin into a Kubernetes cluster to run traces on an application running in the same cluster. Is it okay to use this Docker image:
openzipkin/zipkin: The core server image that hosts the Zipkin UI, Api and Collector features.
and a pod with MySQL and persistent storage to run this application?
1 reply
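That image is the standard one for self-hosting. A hedged sketch of the container env a deployment would need to point at an in-cluster MySQL (the host value is a placeholder service name; STORAGE_TYPE, MYSQL_HOST, MYSQL_USER, MYSQL_PASS, and MYSQL_DB are standard zipkin server settings):

```yaml
containers:
  - name: zipkin
    image: openzipkin/zipkin
    env:
      - name: STORAGE_TYPE
        value: mysql
      - name: MYSQL_HOST
        value: mysql   # placeholder: your MySQL service name
      - name: MYSQL_USER
        value: zipkin
      - name: MYSQL_PASS
        value: zipkin
      - name: MYSQL_DB
        value: zipkin
```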
Rayhan Ali Muhammad
Hi! I'm very new to Zipkin and Golang. I have running microservices that connect to each other using gRPC, and I'm having difficulties implementing the propagation to connect trace IDs between services. Does anyone have a working example repository that I can take a look at?

Hello, I'm trying to configure zipkin with Node.js.
I'm using appmetrics-zipkin https://www.npmjs.com/package/appmetrics-zipkin
Everything is working fine locally,
but when I try to connect to a remote server which has zipkin installed on it, this problem occurs:
"Error sending Zipkin data FetchError: request to http://https://zipkin.sandbox.garment.link/:9411/api/v1/spans failed, reason: getaddrinfo EAI_AGAIN https"

here is my code:

const appzip = require('appmetrics-zipkin')({
  host: 'https://myremotehost/',
  serviceName: 'monolith',
  sampleRate: 1
});

3 replies
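The malformed URL in the error ("http://https://…/:9411") suggests the library builds the URL itself from host and port, so host should be a bare hostname with no scheme or trailing slash. A small sketch (the helper is mine, and the assumption about appmetrics-zipkin's host option is hedged):

```javascript
// Strip an accidental scheme and trailing slash so only the hostname remains.
function toBareHost(value) {
  return value.replace(/^https?:\/\//, '').replace(/\/+$/, '');
}

// Hypothetical usage with appmetrics-zipkin (options as in the original message):
// const appzip = require('appmetrics-zipkin')({
//   host: toBareHost('https://myremotehost/'),  // yields 'myremotehost'
//   serviceName: 'monolith',
//   sampleRate: 1
// });
```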