Adrian Cole
@adriancole
but sometimes a long span is the result of polling
but the actual tasks invoked are async
usually, if you have the choice anyway, you wouldn't put a span around polling
Damien Nozay
@dnozay
if it was simple i wouldn't need opentracing :D
Adrian Cole
@adriancole
anyway hopefully these pointers help. I suspect if kubectl is taking minutes
underneath it is polling
would be surprised for a blocking task to actually go that long
but I'm often surprised :D
Damien Nozay
@dnozay
i'm sure, i just want to be able to collect data and say to cloud provider X: "can you help improve operation Y, it takes too long in general"
Adrian Cole
@adriancole
and if the only abstraction you have is a black box around something internally polling, yeah, feel free to report that span twice: once when it started and later with the duration
but normal requests, like app requests.. mostly should just wait for completion
Damien Nozay
@dnozay
do i understand that if i report the same span twice, once w/o duration and later w/ duration, it will know how to handle that?
Adrian Cole
@adriancole
yep
Damien Nozay
@dnozay
good to know, thanks for the tip
Adrian Cole
@adriancole
we have tests for it
np
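A minimal sketch of the two-phase reporting described above, assuming the zipkin2 reporter library and a Zipkin server on localhost:9411; the trace/span ids and the span name are made up for illustration:

    import zipkin2.Span;
    import zipkin2.reporter.AsyncReporter;
    import zipkin2.reporter.okhttp3.OkHttpSender;

    public class TwoPhaseSpanReport {
      public static void main(String[] args) throws Exception {
        OkHttpSender sender = OkHttpSender.create("http://localhost:9411/api/v2/spans");
        AsyncReporter<Span> reporter = AsyncReporter.create(sender);

        String traceId = "463ac35c9f6413ad48485a3953bb6124"; // illustrative ids
        String spanId = "a2fb4a1d1a96d312";

        // phase 1: report the span as soon as it starts, timestamp only, no duration
        long startMicros = System.currentTimeMillis() * 1000L;
        reporter.report(Span.newBuilder()
            .traceId(traceId).id(spanId)
            .name("poll-operation")
            .timestamp(startMicros)
            .build());

        Thread.sleep(2000); // the long-running / polling work happens here

        // phase 2: report the same trace/span ids again, this time with the duration
        long durationMicros = System.currentTimeMillis() * 1000L - startMicros;
        reporter.report(Span.newBuilder()
            .traceId(traceId).id(spanId)
            .name("poll-operation")
            .timestamp(startMicros)
            .duration(durationMicros)
            .build());

        reporter.close();
        sender.close();
      }
    }

Because both reports share the same traceId and id, Zipkin merges them into one span: the first report makes the in-flight operation visible, the second fills in the duration.
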
José Carlos Chávez
@jcchavezs
@dnozay I wonder if having a proxy in front of the kubernetes API would do the trick. Of course you would trace that proxy. I am just talking theoretically
Damien Nozay
@dnozay
(attached image: image.png)
I wish there was a way to have spans for loop iterations
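If you control the instrumentation, per-iteration child spans are straightforward to create; a minimal Brave sketch with made-up operation names (as the thread below notes, in Damien's case the spans actually come from Google's golang client, so this only illustrates the idea):

    import brave.Span;
    import brave.Tracer;
    import brave.Tracing;

    public class LoopIterationSpans {
      public static void main(String[] args) {
        Tracing tracing = Tracing.newBuilder().localServiceName("poller").build();
        Tracer tracer = tracing.tracer();

        // one parent span around the whole polling loop
        Span loop = tracer.nextSpan().name("wait-for-deployment").start();
        try {
          for (int attempt = 0; attempt < 5; attempt++) {
            // one child span per iteration, tagged with the attempt number
            Span iteration = tracer.newChild(loop.context())
                .name("poll-attempt")
                .tag("attempt", String.valueOf(attempt))
                .start();
            try {
              // ... call the API being polled here
            } finally {
              iteration.finish();
            }
          }
        } finally {
          loop.finish();
        }
        tracing.close();
      }
    }
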
José Carlos Chávez
@jcchavezs
Who is generating these traces?
Damien Nozay
@dnozay
@jcchavezs i am - but Google's golang client is using OpenTracing, so passing the correct context to the client just magically adds stuff
or did I not get the question?
Pradnyil
@Pradnyilkumar_twitter
Is there any way to override the parent span id?
Not with the help of span context
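A span context in Brave is immutable, so there is no setter for the parent id, but if an explicit parent is really needed you can build a TraceContext by hand. A rough sketch under that assumption, with made-up ids (see Adrian's answer below on why instrumentation usually handles this for you):

    import brave.Span;
    import brave.Tracer;
    import brave.Tracing;
    import brave.propagation.TraceContext;

    public class ExplicitParentSketch {
      public static void main(String[] args) {
        Tracing tracing = Tracing.newBuilder().localServiceName("example").build();
        Tracer tracer = tracing.tracer();

        // build a context by hand with whatever parent id you need (ids are illustrative)
        TraceContext context = TraceContext.newBuilder()
            .traceId(0x463ac35c9f6413adL)
            .parentId(0x41baf8be2a6d79e6L)
            .spanId(0xa2fb4a1d1a96d312L)
            .sampled(true)
            .build();

        // turn that context into a span you can record against
        Span span = tracer.toSpan(context).name("manual-parent").start();
        span.finish();
        tracing.close();
      }
    }
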
Pradnyil
@Pradnyilkumar_twitter
Additionally, do we have an example of propagation wherein we pass the span context through request headers?
José Carlos Chávez
@jcchavezs
I see. If you are not in control of the spans that are generated, then probably the only way to do it is to modify them on reporting. Like, before sending them to the tracing server, you modify the spans that are part of a loop to become a single span, with every original span marked as an annotation.
About "Google's golang client" is that published somewhere?
Adrian Cole
@adriancole
@Pradnyilkumar_twitter it is described here: https://github.com/openzipkin/brave/tree/master/brave#propagation
Adrian Cole
@adriancole
most instrumentation that needs to parse headers already does that, so it might be better to ask why you need to change the parent id / parse your own request https://github.com/openzipkin/brave/tree/master/instrumentation
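As a rough illustration of the propagation README linked above (not a substitute for it), an injector/extractor pair in Brave, using a plain Map to stand in for HTTP request headers:

    import java.util.HashMap;
    import java.util.Map;

    import brave.Span;
    import brave.Tracer;
    import brave.Tracing;
    import brave.propagation.Propagation;
    import brave.propagation.TraceContext;
    import brave.propagation.TraceContextOrSamplingFlags;

    public class HeaderPropagationSketch {
      public static void main(String[] args) {
        Tracing tracing = Tracing.newBuilder().localServiceName("example").build();
        Tracer tracer = tracing.tracer();

        // a plain Map stands in for your framework's HTTP request headers
        Propagation.Setter<Map<String, String>, String> setter = Map::put;
        Propagation.Getter<Map<String, String>, String> getter = Map::get;
        TraceContext.Injector<Map<String, String>> injector = tracing.propagation().injector(setter);
        TraceContext.Extractor<Map<String, String>> extractor = tracing.propagation().extractor(getter);

        // client side: write the span context into the outgoing headers (X-B3-TraceId etc.)
        Span client = tracer.nextSpan().name("client-call").start();
        Map<String, String> headers = new HashMap<>();
        injector.inject(client.context(), headers);

        // server side: read the context back out of the incoming headers and continue the trace
        TraceContextOrSamplingFlags extracted = extractor.extract(headers);
        Span server = tracer.nextSpan(extracted).name("server-handle").start();

        server.finish();
        client.finish();
        tracing.close();
      }
    }
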
ronniekk
@ronniekk
I have an issue with Zipkin not being able to render traces correctly after receiving a "finagle.flush" annotation (accompanied by a "ws" annotation). Not sure whether the problem belongs to Finagle or Zipkin. Any pointers would be helpful.
It's 100% reproducible - I can download the spans from /api/v2/trace/{traceId}, remove any spans with "finagle.flush"+"ws" and upload the result directly in the UI and it displays correctly
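A small sketch of that filtering step, assuming the zipkin2 codec classes and that the JSON from /api/v2/trace/{traceId} was saved to a local file; the file names are made up:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import zipkin2.Span;
    import zipkin2.codec.SpanBytesDecoder;
    import zipkin2.codec.SpanBytesEncoder;

    public class StripFlushSpans {
      public static void main(String[] args) throws Exception {
        // trace.json is the output of GET /api/v2/trace/{traceId}
        byte[] json = Files.readAllBytes(Paths.get("trace.json"));
        List<Span> spans = SpanBytesDecoder.JSON_V2.decodeList(json);

        // drop any span carrying a "finagle.flush" annotation
        List<Span> filtered = spans.stream()
            .filter(s -> s.annotations().stream()
                .noneMatch(a -> a.value().equals("finagle.flush")))
            .collect(Collectors.toList());

        // write the result back out so it can be re-uploaded / viewed in the UI
        Files.write(Paths.get("trace-filtered.json"), SpanBytesEncoder.JSON_V2.encodeList(filtered));
      }
    }
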
ronniekk
@ronniekk
SpanRecorder is from io.zipkin.finagle2:zipkin-finagle
Adrian Cole
@adriancole
@ronniekk "ws" is wire send
ronniekk
@ronniekk
Yeah I know
Adrian Cole
@adriancole
so 2 minutes
you could verify by looking at the difference between timestamp and "ws"
it could be that the request was long, or somehow the client finish hook got lost
ronniekk
@ronniekk
Well the response was back (to requester) in milliseconds
Adrian Cole
@adriancole
then it seems like the finish hook was lost
what client was it?
ronniekk
@ronniekk
Might be an issue with ThriftMux maybe
Adrian Cole
@adriancole
maybe
if it is consistent, it's possible to debug using zipkin-junit in a unit test
or look at the tracer logs and not use a real tracer
but yeah sounds then like a missing finish..
was it one-way RPC?
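A minimal sketch of the zipkin-junit approach mentioned above, assuming the JUnit 4 ZipkinRule; the ThriftMux client setup is elided because it depends on the service under test:

    import static org.junit.Assert.assertEquals;

    import org.junit.Rule;
    import org.junit.Test;
    import zipkin.junit.ZipkinRule;

    public class FlushSpanTest {
      // starts an in-process Zipkin collector you can point the tracer at
      @Rule public ZipkinRule zipkin = new ZipkinRule();

      @Test public void clientSpanHasDuration() throws Exception {
        // point zipkin-finagle (or whatever reporter you use) at zipkin.httpUrl(),
        // then exercise the ThriftMux call that produces the suspect trace
        // ... client setup and call elided ...

        // afterwards, inspect what was actually reported
        assertEquals(1, zipkin.getTraces().size());
        // e.g. walk zipkin.getTraces().get(0) and check the client span got a duration
      }
    }
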
ronniekk
@ronniekk
well it is mostly hidden in Finagle async stuff
Adrian Cole
@adriancole
gotcha. anyway, if it's one-way that could be a clue, but I have no real reason to blame that