also there's a "trace breaker" concept we are discussing
which basically would say that given your ability to demarcate what you think the boundaries are.. the trace would restart with a tag linking to the prior one
big question here though is..
is the trace good, but just the view bad?
or would separate traces be ideal for you
By filtering I see the filtered service and Kafka. Hmm, good question. We have so many services that I think separate traces would be better, but then also with the trace breaker you mentioned.
if the trace broke it would be a correlation problem
so say you had one message that turns into 18 processors
from producer to kafka is one trace ID
then for the 18 consumers and their processors, that trace ID would become a tag on a new trace
so ex with the UI search you'd have multiple stages
depending on what you want.. this is one of the dilemmas of trace breaking
the trace ID becomes a correlation field.. then what happens if it is broken multiple times!
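to make the idea concrete, here's a tiny plain-Java sketch of a trace break (no real tracer involved; the tag name `linked.trace_id` is an invented convention for the example, not a Zipkin standard):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the "trace breaker": the consumer starts a brand-new trace and
// keeps the producer's trace ID only as a searchable correlation tag.
public class TraceBreakerSketch {
    public static Map<String, String> startBrokenTrace(String priorTraceId) {
        Map<String, String> span = new HashMap<>();
        // the consumer restarts with a fresh 64-bit-style hex trace ID...
        span.put("traceId",
            UUID.randomUUID().toString().replace("-", "").substring(0, 16));
        // ...and links back to the prior trace via a tag, so the original
        // trace ID works as a correlation field across the break
        span.put("tag.linked.trace_id", priorTraceId);
        return span;
    }

    public static void main(String[] args) {
        Map<String, String> consumerSpan = startBrokenTrace("463ac35c9f6413ad");
        System.out.println(consumerSpan.get("tag.linked.trace_id"));
    }
}
```

the "broken multiple times" problem shows up here too: after a second break, the tag points at the middle trace, not the original, unless you deliberately carry the first trace ID all the way through.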
another way possibly is to have partitioned service graph
this assumes that the "reroot" feature of trace view screen is ok
ex that the large trace is ok as you can double-click on a node to focus on that subtree
if there's a limited number of partitions the job can basically multiply out
say you have 16 services that process, we could figure a way to re-apply the graph for the consumers
feels like at that point it's 17 graphs overlaid on top of each other
but it can be decomposed
problem is it's pretty dynamic, so it's hard to do that as easily as the normal dep graph
(just talking through stuff)
another way possibly is to have an on-demand dependency graph
for example, if you aren't trying to see a whole day of traces
if you just want to see the connectivity visually for a trace or several of them, it can be made on demand, and probably in many ways
ex "I want to see the dependency graph for these 10 traces"
that is pretty easy technically as the data is so small
the browser can easily handle it
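to show how small the on-demand version is, a rough sketch: given the spans of a handful of traces, collect the distinct caller→callee service edges (the `SpanLink` record and its field names are invented for this sketch, not Zipkin's span model):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// On-demand dependency graph over a few traces: dedupe the service edges.
public class OnDemandDepGraph {
    public record SpanLink(String traceId, String localService, String remoteService) {}

    public static Set<String> serviceEdges(List<SpanLink> spans) {
        Set<String> edges = new LinkedHashSet<>();
        for (SpanLink s : spans) {
            if (s.remoteService() != null) {
                edges.add(s.localService() + " -> " + s.remoteService());
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        List<SpanLink> spans = List.of(
            new SpanLink("t1", "ui", "search"),
            new SpanLink("t1", "search", "kafka"),
            new SpanLink("t2", "ui", "search"),   // duplicate edge, collapsed
            new SpanLink("t2", "kafka", "indexer"));
        System.out.println(serviceEdges(spans));
        // [ui -> search, search -> kafka, kafka -> indexer]
    }
}
```

for 10 traces that's at most a few thousand spans, which is why the browser can do this trivially compared to a daily aggregation job.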
so anyway if you have some feedback about what you are looking for in the dep graph, like how it is serving you now.. whether daily granularity is important enough, or if you'd get the same value from a graph over 100 or fewer traces..
would be interesting to help guide
It looks a bit like the traces are interrupted as soon as we get a message from Kafka or send one to Kafka. Like the headers are not being used. Maybe that's why our graph looks so useless.
sounds like a different thing then.. always best to make sure we are comparing something not broken vs the future state :D
Indeed currently I don't see a single trace where kafka is in the middle. It's either the first or the last service.
yep in this case the traces are heh already tracebreaker'd :D
feature not a bug right? :D
Exactly. We may have a problem in an API that we created and use in all our services to deserialize messages. Looks like we are creating an AOP proxy that prevents Sleuth from stepping in.
@jcchavezs parent is only used for RPC spans (shared/joined) so there's no need to propagate that in messaging. this is because the propagated spanId will become the next span's parent and we have no field for grandparentId. make sense?
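a tiny sketch of that point: in B3 single format the header is `{traceId}-{spanId}-{samplingState}[-{parentSpanId}]`, and on the receiving side the propagated spanId becomes the next span's parent, so messaging doesn't need the parent segment at all (the header value below is just an example in the b3 spec style):

```java
// Why parent propagation isn't needed for messaging: the propagated spanId
// is exactly what the consumer's new span will use as its parent.
public class B3SingleSketch {
    public static String parentOfNextSpan(String b3SingleHeader) {
        String[] parts = b3SingleHeader.split("-");
        // parts[0] = traceId, parts[1] = spanId, parts[2] = sampling flag
        return parts[1];
    }

    public static void main(String[] args) {
        // example messaging header: no parentSpanId segment
        String b3 = "80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-1";
        System.out.println(parentOfNextSpan(b3)); // e457b5a2e4d86bd1
    }
}
```

there's simply no field for a grandparent, which is why propagating the parent across a messaging hop would be dropped anyway.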
Hi Team, Any idea on how to push traces from zipkin to prometheus?
Hi, is there a way to have a custom trace ID, like ABC-ID?
José Carlos Chávez
@mbrade not really. TraceID is 64-bit or 128-bit hex. If you need to correlate a trace with another ID you can add it as a tag, and it is still possible to search by that tag
@amanjhadbg you can make metrics out of traces, but Prometheus doesn't directly accept traces as far as I know
José Carlos Chávez
@adriancole am I right to assume one can't inject both single and multi b3? Do I need to choose one?
no, you can inject both
the header names don't conflict, so you can do that, and people migrating might want to for a while. that's why Brave makes it possible
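roughly, a config sketch of injecting both formats with Brave's propagation builder — the builder method names here are from recent Brave versions, so double-check against the `B3Propagation` javadoc for the version you run:

```java
import brave.Span;
import brave.propagation.B3Propagation;
import brave.propagation.Propagation;

// Sketch: during a migration, emit BOTH the multi "X-B3-*" headers and the
// single "b3" header on client requests, so old and new receivers both work.
public class BothB3FormatsConfig {
    static Propagation.Factory bothFormats() {
        return B3Propagation.newFactoryBuilder()
            .injectFormats(Span.Kind.CLIENT,
                B3Propagation.Format.MULTI,
                B3Propagation.Format.SINGLE)
            .build();
    }
}
```

on the extraction side there's nothing to choose: Brave reads whichever format arrives.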