顾凡
@GuideFourthW
How do I match these two parameters in Java?
image.png
image.png
JY junyin
@jyjunyin_gitlab
image.png
@jcchavezs I used $tracer->getCurrentSpan() but it always returns null
JY junyin
@jyjunyin_gitlab
the reason is, I want info such as traceId, spanId ... to write to the log file
I'm using Monolog's formatter to format the log data
顾凡
@GuideFourthW
what is "kind"?
what does it mean when both "server" and "client" appear?
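As an aside on the question above: a span's "kind" records which side of an operation the span describes (CLIENT, SERVER, PRODUCER, CONSUMER). When "server" and "client" both appear for one call, those are the two halves of the same RPC. A minimal stdlib-only sketch of the idea (not Zipkin source; the Span record here is hypothetical):

```java
import java.util.List;

public class KindDemo {
  // The four kinds Zipkin's model defines for a span.
  enum Kind { CLIENT, SERVER, PRODUCER, CONSUMER }

  // Hypothetical stand-in for a reported span.
  record Span(String traceId, String spanId, Kind kind, String service) {}

  public static void main(String[] args) {
    // The same RPC seen from both sides: the caller reports a CLIENT span,
    // the callee a SERVER span, and they share the same traceId/spanId, so
    // the UI can join them into a single call.
    Span client = new Span("abc123", "def456", Kind.CLIENT, "frontend");
    Span server = new Span("abc123", "def456", Kind.SERVER, "backend");
    List<Span> trace = List.of(client, server);
    System.out.println(trace.get(0).kind() + " " + trace.get(1).kind()); // prints CLIENT SERVER
  }
}
```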
José Carlos Chávez
@jcchavezs
@jyjunyin_gitlab do you open a scope with the current span? Can I see your tracing code?
JY junyin
@jyjunyin_gitlab
image.png
@jcchavezs
José Carlos Chávez
@jcchavezs
So that is the tracing setup, but then you have to add a middleware into Laravel. I think I can come up with an example in the next few days because this is interesting to me too.
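For context on the "do you open a scope" question: most Zipkin tracers (zipkin-php included, as I understand it) keep the "current" span in scope-managed thread-local storage, so getCurrentSpan() returns null unless something, such as the middleware mentioned above, has opened a scope for the request. A hypothetical mini-model in Java (not the zipkin-php or Brave source; all names here are made up):

```java
public class ScopeDemo {
  // Stand-in for the tracer's "current span" slot.
  static final ThreadLocal<String> CURRENT_SPAN = new ThreadLocal<>();

  // Opening a scope installs the span; closing it clears the slot.
  static AutoCloseable openScope(String spanId) {
    CURRENT_SPAN.set(spanId);
    return CURRENT_SPAN::remove;
  }

  static String currentSpan() { return CURRENT_SPAN.get(); }

  public static void main(String[] args) throws Exception {
    System.out.println(currentSpan());       // prints null: no scope open yet
    try (AutoCloseable scope = openScope("span-1")) {
      System.out.println(currentSpan());     // prints span-1: inside the scope
    }
    System.out.println(currentSpan());       // prints null again after close
  }
}
```

This is why a request-scoped middleware matters: it is what opens (and closes) the scope around the handler, so logging code inside the request sees a non-null current span.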
Stephen Attard
@sacasumo
Hey guys, (noob) clarification question about the docs for zipkin-dependencies which read: "This process is implemented as an Apache Spark job. This job parses all traces in the current day in UTC time. This means you should schedule it to run just prior to midnight UTC." If the job takes a couple of hours to complete (testing it right now, started ~1.5 hours ago, still ongoing) should I schedule it to start just before midnight, or end?
Jorg Heymans
@jorgheymans
@sacasumo you have 2 options:
  1. Don't specify the date parameter. By default it will use only 'today's' data, so it's in your best interest to schedule the job as close to midnight as possible to get the complete picture. We have it scheduled a few minutes before midnight.
  2. Specify the date parameter yourself when invoking, so you have more flexibility in when you run the job. That way you can, for example, start it at 3am and parse the previous day's data.
If the job has been running for 1.5 hrs you must have an insane amount of data, or you did not size -Xmx properly and it's just GC'ing all the time. Do you have an idea of the index size?
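A concrete version of option 2 might look like this crontab entry. This is a sketch: the storage type, hostname, heap size, and jar path are placeholders for your environment; the YYYY-MM-dd date argument is the form the zipkin-dependencies README documents.

```shell
# Run at 03:00 (server local time) and analyze yesterday's spans (option 2 above).
# Placeholders: adjust STORAGE_TYPE/ES_HOSTS/-Xmx/jar path to your setup.
# Notes: % must be escaped as \% inside crontab entries, and
# `date -d yesterday` assumes GNU date.
0 3 * * * STORAGE_TYPE=elasticsearch ES_HOSTS=https://es.example.com:9200 java -Xmx7G -jar zipkin-dependencies.jar $(date -u -d yesterday +\%F)
```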
Stephen Attard
@sacasumo
Thanks for the update, and it's definitely not an insane amount of data, so I must be doing something wrong: 1G per day for this particular environment. I am, though, bouncing via a signing proxy (https://github.com/abutaha/aws-es-proxy), although I don't see that resource-bound in any way. I've set an ECS task with 8G of memory and 7G of Xmx.
Index size = 1G per day.
Stephen Attard
@sacasumo
@jorgheymans what do you think should be reasonable values?
Jorg Heymans
@jorgheymans
for a 1G index size, 7G Xmx is ample, so perhaps it's just your storage that is slow in delivering the data. Note that the Spark job actually pulls in all of the data it needs to index ...
@sacasumo ^^^
Stephen Attard
@sacasumo
thanks @jorgheymans. For what it's worth, the proxy's logging is giving a constant stream of the following log:
[2020-07-08T17:20:27+02:00] (aws-es-proxy/aws-es-proxy/53b51acc-db3a-40d7-aed1-f01ce7e62ba0) 2020/07/08 15:20:27 -> POST; 10.196.1.101:60095; /_search/scroll?scroll=5m; {"scroll_id":"DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAEgWcEMxYi0zX2JSQktST2cteG1WMzVBdw=="}; 200; 0.135s
and the network traffic is negligible, so it does feel like the cluster is slow in serving the traffic, or there is another bottleneck I have yet to identify.
Jorg Heymans
@jorgheymans
There is a debug flag on the Spark job that you can enable (https://github.com/openzipkin/zipkin-dependencies#troubleshooting) that outputs all the spans it retrieves to stdout. Watching the rate at which it outputs them could tell you whether fetching is really the slow part. On my side, a 5G index finishes in about 15-20 minutes (not on AWS).
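If memory serves, the troubleshooting section linked above enables that output via an environment variable, roughly like this. Treat this as a sketch: the env var name is from my reading of the README (verify against the current docs), and the hosts and date are placeholders.

```shell
# Sketch: run one day's job with span-level debug logging to stdout,
# then watch the rate at which retrieved spans are printed to judge
# whether fetching from storage is the bottleneck.
STORAGE_TYPE=elasticsearch ES_HOSTS=https://es.example.com:9200 \
ZIPKIN_LOG_LEVEL=DEBUG \
java -jar zipkin-dependencies.jar 2020-07-08 | tee dependencies-debug.log
```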
Daniel Frey
@dmfrey
I'm attempting to build zipkin-server container images for multi-arch deployment. In Spring Boot 2.3, you can enable layers when the executable jar gets built. I can then use Docker Edge to build for amd64, arm64 and arm/v7. It all seems to work: my multi-stage Dockerfile builds the images just fine. However, when I attempt to launch a deployment on a k8s cluster, it fails to launch with this exception: java.lang.ClassNotFoundException: org.springframework.boot.loader.JarLauncher. Is something missing in the build of the jar? Is there some flag I maybe need to set to make sure an executable jar gets built?
Adrian Cole
@adriancole
@dmfrey we treat Spring Boot as an implementation detail of the server, so we don't encourage custom images that assume Boot things work. This is compounded by the fact that we have derived images, e.g. for certain clouds. We've not yet been asked for Docker Edge builds of the server by an end user, and we have a practice of not digging infrastructure ditches until a few sites actually want something. Maybe you can raise an issue (https://github.com/openzipkin/zipkin/issues/new/choose); then, when there's more interest, someone can look at the impact and any invalidation this causes on the zipkin-gcp and -aws images? So far the main thing folks have wanted is small images, without an assumption of a base one.
for pointers on your specific topic: we have a custom layout, and also custom things inside of it, to keep the heft of the image to a minimum. I'd be unsurprised if some of the multi-layer tooling isn't expecting this.
and we launch somewhat differently to allow our other derived images to work (https://github.com/openzipkin/zipkin/blob/master/docker/zipkin/run.sh), depending on whether the build is slim or exec
hope this helps!
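One quick way to diagnose the ClassNotFoundException above is to inspect which launcher the jar actually declares. This is a generic diagnostic, not OpenZipkin guidance, and the jar path is a placeholder:

```shell
# Print the jar's manifest. A stock Boot executable jar declares
#   Main-Class: org.springframework.boot.loader.JarLauncher
# while a custom layout (like zipkin's) declares something else, so
# layered-image tooling that hardcodes JarLauncher fails at startup
# with ClassNotFoundException.
unzip -p zipkin-server.jar META-INF/MANIFEST.MF
```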
Ucky.Kit
@uckyk
Where can I see what these packages mean?
For example zipkin-reporter and zipkin-reporter-brave.
What is the difference? I've been reading the source code recently (about how the span is computed in 2.x), and some packages are difficult to read because I don't know what each package does.
I don't know where to start reading, thx
Adrian Cole
@adriancole
@uckyk if you use Brave, the API has been the same for a long time
you don't need to focus on this part.. mainly focus on the APIs that produce the data
e.g. the data format can only describe static things
lifecycle etc. is covered in the APIs that produce the data
so my advice is to focus on that part and just ignore the data format
Here are the most relevant links from the OpenZipkin Brave project: Brave's core library, Baggage (propagated fields), and HTTP tracing
but there are many things; the main thing is to decide what you are trying to do. "understand everything" is too undirected
so maybe discuss what you are trying to do.. do you have old code you need to change? are you using an old version of Brave from 2015? this can help me steer you to the best docs
Brave 4+ is based on the Zipkin v2 model, so if the link I sent you yesterday doesn't help, maybe looking at how the model is used (e.g. Brave's Span API) can
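A rough mental model of how the modules named above divide the work, as I understand it: brave is the instrumentation API that creates spans and fires a hook when they finish, zipkin-reporter encodes and ships finished spans out-of-band, and zipkin-reporter-brave is the small bridge between the two. The sketch below is a hypothetical mini-model whose interfaces only echo the real class names (SpanHandler, Reporter); it is not the actual Brave/zipkin-reporter API.

```java
import java.util.ArrayList;
import java.util.List;

public class LayeringDemo {
  // zipkin-reporter's job: take finished, encoded spans and send them
  // out-of-band (in the real library: Sender + AsyncReporter).
  interface Reporter { void report(String encodedSpan); }

  // brave's job: the instrumentation API; it invokes a handler whenever
  // a span ends.
  interface SpanHandler { void end(String span); }

  // zipkin-reporter-brave's job: adapt brave's end-of-span hook so the
  // span flows into a Reporter.
  record ReporterSpanHandler(Reporter delegate) implements SpanHandler {
    public void end(String span) { delegate.report(span); }
  }

  public static void main(String[] args) {
    List<String> wire = new ArrayList<>();    // stands in for Kafka/HTTP
    SpanHandler handler = new ReporterSpanHandler(wire::add);
    handler.end("{\"name\":\"get /api\"}");   // brave finishing a span
    System.out.println(wire);
  }
}
```

Reading the code in that order (instrumentation API, then the bridge, then the transport) is usually easier than starting from the data format.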
Ucky.Kit
@uckyk
We just set up the Zipkin service for other departments to use. Recently, people have often asked me how each service node calculates the span and how it is sent to Kafka. So I want to look at the specific code and read it to understand.
Adrian Cole
@adriancole
ok, so you need to know what code is in use, as the server doesn't calculate this anymore
the client does
Ucky.Kit
@uckyk
yeap, I will continue to read the source code, thx
Stephen Attard
@sacasumo
:point_up: July 8, 2020 5:48 PM @jorgheymans a more recent run on just 1 day's worth of data took 8 minutes on the scheduled cron. I think it had to process around ~8 days' worth of spans when it took ~2 hours.
Jorg Heymans
@jorgheymans
@sacasumo cool, my rule of thumb is that you need about twice the amount of memory as the volume of span data being analyzed. See https://github.com/openzipkin/zipkin-dependencies/issues/143#issuecomment-637031335
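That rule of thumb as a quick calculation. This is just arithmetic; the 2x factor is Jorg's heuristic from the linked issue, not an official number:

```java
public class HeapSizing {
  // Jorg's heuristic: heap (Xmx) is roughly 2 x the volume of span data
  // the job analyzes, not 2 x the per-day index size.
  static long recommendedXmxGb(long spanDataGb) { return 2 * spanDataGb; }

  public static void main(String[] args) {
    // ~1G/day index: a single day needs ~2G of heap, so the 7G task above
    // had ample headroom; an 8-day backlog would want ~16G.
    System.out.println(recommendedXmxGb(1)); // prints 2
    System.out.println(recommendedXmxGb(8)); // prints 16
  }
}
```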
Stephen Attard
@sacasumo
Fantastic, thank you :bow: appreciate your help in this.
Jorg Heymans
@jorgheymans
no worries :thumbsup:
Daniel Frey
@dmfrey
@adriancole I did add an enhancement request to track multi-arch image builds (if, in fact, it is of interest to anyone else) openzipkin/zipkin#3141
In the meantime, I'll take a look at the other Dockerfiles and see if I can't construct a suitable ENTRYPOINT to get these images to start up.