Adrian Cole
@adriancole
@/all I rarely ping the whole channel, but as so many ask us for support on sleuth, those interested should probably read ^^ and comment back on the issue or on sleuth's channel https://gitter.im/spring-cloud/spring-cloud-sleuth
Dimitrios Klimis
@dimi-nk

Hi. I asked a similar question a few weeks ago regarding support for grpc-trace-bin and X-Cloud-Trace-Context. Just for context, my company currently uses a) an internal Java framework for services, b) OpenCensus APIs directly, and c) some internal custom tracing wrappers, while my team uses Spring Boot and Sleuth/Brave. I've opened a discussion inside my company to adopt more standard propagation headers like B3. In the meantime, my team needs to support grpc-trace-bin and X-Cloud-Trace-Context.

I've been looking through Brave's code to understand what would be the most efficient way to support these propagation formats without rewriting the world. One option would be to write new gRPC interceptors and HTTP filters, but this option misses out on lots of code already provided by Brave. If I understand the framework correctly, I should write a Propagation similar to B3Propagation which would extract the headers. I should then provide my propagation in the form of a FactoryBuilder bean to replace the default B3 bean. This will eventually make it to TracingServerInterceptor via RpcServerHandler. Is my assumption correct? If yes, any pointers to avoid mistakes? Is there an easier way to use the aforementioned propagation headers?

25 replies
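A minimal sketch of the approach described above, assuming Brave 5.x's Propagation / TraceContext APIs: keep B3 for injection and downstream headers, and fall back to X-Cloud-Trace-Context on extract. The class name, the header parsing, and the fallback rules here are illustrative, not an official Brave or Sleuth API; grpc-trace-bin would need a similar (binary) parser.

import java.util.ArrayList;
import java.util.List;

import brave.propagation.B3Propagation;
import brave.propagation.Propagation;
import brave.propagation.TraceContext;
import brave.propagation.TraceContextOrSamplingFlags;

// Illustrative only: a Propagation.Factory that keeps B3 for injection, but also
// accepts Google's X-Cloud-Trace-Context header ("TRACE_ID/SPAN_ID;o=1") on extract.
public final class XCloudTraceContextPropagation {

  public static final Propagation.Factory FACTORY = new Propagation.Factory() {
    @Override public <K> Propagation<K> create(Propagation.KeyFactory<K> keyFactory) {
      return new XCloudPropagation<>(keyFactory);
    }

    @Override public boolean requires128BitTraceId() {
      return true; // Cloud Trace ids are 128-bit
    }
  };

  static final class XCloudPropagation<K> implements Propagation<K> {
    final Propagation<K> b3; // delegate for keys and injection
    final K cloudTraceKey;
    final List<K> keys;

    XCloudPropagation(Propagation.KeyFactory<K> keyFactory) {
      this.b3 = B3Propagation.FACTORY.create(keyFactory);
      this.cloudTraceKey = keyFactory.create("x-cloud-trace-context");
      List<K> allKeys = new ArrayList<>(b3.keys());
      allKeys.add(cloudTraceKey);
      this.keys = allKeys;
    }

    @Override public List<K> keys() {
      return keys;
    }

    @Override public <R> TraceContext.Injector<R> injector(Propagation.Setter<R, K> setter) {
      return b3.injector(setter); // keep sending B3 downstream
    }

    @Override public <R> TraceContext.Extractor<R> extractor(Propagation.Getter<R, K> getter) {
      TraceContext.Extractor<R> b3Extractor = b3.extractor(getter);
      return request -> {
        TraceContextOrSamplingFlags viaB3 = b3Extractor.extract(request);
        if (viaB3.context() != null) return viaB3; // B3 wins when present

        String header = getter.get(request, cloudTraceKey);
        if (header == null) return viaB3;
        try {
          int slash = header.indexOf('/');
          if (slash != 32) return viaB3; // expect a 32-char hex trace id
          int semi = header.indexOf(';', slash);
          String spanPart = semi == -1 ? header.substring(slash + 1) : header.substring(slash + 1, semi);
          long spanId = Long.parseUnsignedLong(spanPart); // decimal in this header
          if (spanId == 0) return viaB3; // span id 0 is not a valid Brave span id
          return TraceContextOrSamplingFlags.create(TraceContext.newBuilder()
              .traceIdHigh(Long.parseUnsignedLong(header.substring(0, 16), 16))
              .traceId(Long.parseUnsignedLong(header.substring(16, 32), 16))
              .spanId(spanId)
              .sampled(semi != -1 && header.substring(semi).contains("o=1")) // simplified sampling flag
              .build());
        } catch (NumberFormatException e) {
          return viaB3; // malformed header: fall back to whatever B3 gave us
        }
      };
    }
  }
}

With Sleuth, a factory like this could be exposed as the Propagation.Factory bean the question mentions, so that RpcServerHandler and the HTTP/gRPC instrumentation pick it up; whether a plain bean override is enough depends on the Sleuth version in use. It may also be worth checking the zipkin-gcp project, which may already ship a Stackdriver propagation covering X-Cloud-Trace-Context before hand-rolling one.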
Adrian Cole
@adriancole
screenshot on https://zipkin.io/ updated thx @jorgheymans
Adrian Cole
@adriancole
biggest release of the year and also why folks have been so busy. read all about it https://twitter.com/zipkinproject/status/1320676019951448064 https://github.com/openzipkin/zipkin/releases/tag/2.22.0
Jorg Heymans
@jorgheymans

biggest release of the year and also why folks have been so busy. read all about it https://twitter.com/zipkinproject/status/1320676019951448064 https://github.com/openzipkin/zipkin/releases/tag/2.22.0

Congratulations on getting this one out of the door @adriancole !! :fireworks: :fireworks: :fireworks:

adriancole @adriancole says you are welcome
krz1997
@krz1997
Hi, guys. Does anyone know how Zipkin defines a custom object type?
@adriancole
I thought of customizing the TAG type to be NESTED rather than object
Adrian Cole
@adriancole
you will run into problems with dotted tags, which is why we can't use that approach
krz1997
@krz1997
Thanks, I will research that.
Following your suggestion, I use ES to store the data; which approach would be the more sensible one to take?
3 replies
XiaoChuangGitHub
@XiaoChuangGitHub
What do I need to do if I want to use RocketMQ and PostgreSQL? Is there an example?
13 replies
Wai Mun
@iworkforthem_twitter
Is there a way to export zipkin's In-Memory dataset?
7 replies
77_Bala_77
@Ba777
Hi all, is there a tool to convert all the Zipkin-format span records to AWS X-Ray format?
4 replies
77_Bala_77
@Ba777
I have all the Zipkin-compatible span records in a text file. Just need to change that to the AWS X-Ray specific format and send it to the X-Ray daemon process. @adriancole
77_Bala_77
@Ba777
How do I convert the Zipkin span records to an AWS X-Ray acceptable format? I have all the Zipkin-generated span records in a text file. Thoughts?
2 replies
mlshankar
@mlshankar
hi
I have installed Zipkin
how do I enable Zipkin in the ingress gateway?
what is the option to do this?
can someone help me
I am using Istio 1.5
mlshankar
@mlshankar
@here
Also, we are using Go.
JBelova
@JBelova
I created a span named 'A' in one thread and do some work in its scope. Another thread propagates (tracer.joinSpan) that trace 'A' and does some work in scope too. I can only finish the span in the second thread. How can I show span 'A' in the Zipkin UI with the duration from both processes? Right now in the JSON view I have two spans with the same name and id but different durations, and one of them is shared. In the Zipkin UI I can see only the duration from the first span, while the duration of the whole trace is bigger.
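A minimal sketch of the two-thread scenario described above, assuming Brave's Tracer API; the service name and span name are illustrative.

import brave.Span;
import brave.Tracer;
import brave.Tracing;
import brave.propagation.TraceContext;

public class SharedSpanAcrossThreads {
  public static void main(String[] args) throws InterruptedException {
    Tracing tracing = Tracing.newBuilder().localServiceName("example").build();
    Tracer tracer = tracing.tracer();

    // Thread 1: create span "A" and do some work in its scope
    Span spanA = tracer.newTrace().name("A").start();
    TraceContext context = spanA.context();

    Thread other = new Thread(() -> {
      // Thread 2: join the same trace/span ids -> reported as a "shared" span
      Span shared = tracer.joinSpan(context);
      try (Tracer.SpanInScope ws = tracer.withSpanInScope(shared)) {
        // ... work in the second thread ...
      } finally {
        shared.finish(); // this side reports its own timing under the same span id
      }
    });
    other.start();
    other.join();

    spanA.finish(); // the first side reports its own copy of span "A" separately
    tracing.close();
  }
}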
itsmjeu
@itsmjeu
Hi guys, I'm interested in helping the project by contributing translations
I can help with Portuguese, maybe French and Dutch as well
I sent a message to Jorge and he redirected me here
José Carlos Chávez
@jcchavezs
Awesome @itsmjeu! I think the first thing to do is to start with a GitHub issue?
itsmjeu
@itsmjeu
OK, I will need some help on that, maybe later, because it's my first time contributing to a public project.
azeemdin
@azeemdin

Hi, I'm new to Zipkin and working on a POC. I started Zipkin as below:

java -DSTORAGE_TYPE=elasticsearch -DES_HOSTS=http://localhost:9200 -DSEARCH_ENABLED=true -jar zipkin-server-2.22.0-exec.jar --logging.level.zipkin2=DEBUG

Then I posted data via rest API (http://localhost:9411/api/v2/spans)

[
  {
    "id": "352bff9a74ca9ad2",
    "traceId": "5af7183fb1d4cf5f",
    "parentId": "6b221d5bc9e6496c",
    "name": "get /api",
    "timestamp": 1556604172355737,
    "duration": 1431,
    "kind": "SERVER",
    "localEndpoint": {
      "serviceName": "backend",
      "ipv4": "192.168.99.1",
      "port": 3306
    },
    "remoteEndpoint": {
      "ipv4": "172.19.0.2",
      "port": 58648
    },
    "tags": {
      "http.method": "GET",
      "http.path": "/api"
    }
  }
]

Now I am able to search for trace id 5af7183fb1d4cf5f,
but if I use the "find a trace" option by providing the service name backend, it does not work. Am I missing something here?
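An equivalent way to post that payload from code, shown here as a small illustrative Java 11 snippet against the same endpoint used above; the spans.json filename is hypothetical and would contain the JSON array shown in the message.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class PostSpans {
  public static void main(String[] args) throws Exception {
    // Posts a JSON array of spans (saved to spans.json) to the Zipkin v2 span endpoint.
    HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:9411/api/v2/spans"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofFile(Path.of("spans.json")))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode()); // 202 Accepted on success
  }
}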
Adrian Cole
@adriancole
you are sending a single-span trace?
azeemdin
@azeemdin
yes, I got this example from the swagger (https://zipkin.io/zipkin-api/#/default/post_spans)
also tried with another example with multiple spans in a trace
Adrian Cole
@adriancole
can you check browser to see if you are hitting this? openzipkin/zipkin#3211
anyway, it is better to use real data; we created example projects to help get started: https://github.com/openzipkin/brave-example
azeemdin
@azeemdin
Thanks Adrian, I will check the links you provided
by the way, I am unable to get it directly via the API:
http://127.0.0.1:9411/zipkin/api/v2/spans?serviceName=backend
it returns an empty array
Adrian Cole
@adriancole
-DSTORAGE_TYPE=elasticsearch is incorrect; these are supposed to be environment variables,
Unix style,
e.g. STORAGE_TYPE=elasticsearch java -jar...
azeemdin
@azeemdin
It is actually working for me; I am able to save data in Elasticsearch. I was making a very silly mistake: in the example the timestamp was very old and I didn't update it. Just tried with a new sample and it is working fine. Really sorry to disturb you over my silly mistake.
updated the sample
thanks for your concern and support
Adrian Cole
@adriancole
it isn't a stupid mistake, it is just a problem with static data. I don't know a better way with the swagger UI...
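Related to the static-data problem above: Zipkin span timestamps are epoch microseconds, so a hand-posted example with an old timestamp likely falls outside the UI's default lookback window. A tiny illustrative snippet for stamping a test span with the current time:

public class SpanTimestamp {
  public static void main(String[] args) {
    // Zipkin "timestamp" is epoch microseconds; "duration" is also in microseconds.
    long timestampMicros = System.currentTimeMillis() * 1000L;
    System.out.println("\"timestamp\": " + timestampMicros + ",");
    System.out.println("\"duration\": 1431,");
  }
}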