Hi, I am new to OpenTracing and I would like to use it along with Spring Cloud Gateway. I am running a Kubernetes cluster and have Istio (with Jaeger) set up in there. Is there any concrete guideline or sample application through which I can make it work at my end? Currently, I have tried this and failed, and opened an issue accordingly:
P.S. This is a high-priority item for me, so any help would be great: distributed tracing won't work in our architecture if the gateway is unable to propagate trace context to the corresponding microservices.
Thanks @yurishkuro, realising that was the case, I came here to find those who may have given it a go :sweat_smile:
Thanks @mwear, I've started playing around with that already. I'm still becoming familiar with the use of
injectors when building the client. At the moment I'm curious to see what approaches people have taken with regard to wrapping up their controllers, i.e. span creation and finishing.
Faraday is suggested in https://github.com/opentracing/opentracing-ruby and I'm successfully able to extract headers from my NGINX requests and start my spans in Rails. Now I'm just trying to figure out the best way to ensure
span.finish is executed and that my spans have relevant logs attached to them - possibly attaching the same output of the Rails logger to the
active_span before finishing it.
JdbcAspect. This aspect wraps connections with a
TracingConnection, so spans are created for database queries. So far, so good. For Oracle databases, however, this does not work: the connection throws an exception. How should this be handled? I can fork and create a PR for a fix, but the ultimate fix should be in the Oracle driver.
New to OpenTracing.
We are using Jaeger client for tracing. We are using RabbitMQ messaging between two microservices.
Use-case : I want to propagate the span context to the other microservice via RabbitMQ.
I have used this dependency : opentracing-spring-rabbitmq-starter.
It works, but I do not have any control over the spans.
Is there a way to propagate the span context using inject and extract (without using the above-mentioned dependency), similar to what is described in the docs: https://opentracing.io/docs/overview/inject-extract/ ?
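Not an authoritative answer, but the generic inject/extract pattern from that doc can be done by hand over the AMQP message headers. A minimal Java sketch, with a few assumptions: a plain `Map<String, String>` stands in for the RabbitMQ message headers (in spring-amqp you would use `MessageProperties.getHeaders()`), a `MockTracer` stands in for the real Jaeger tracer, and the operation names (`publish-order`, `consume-order`) are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.mock.MockTracer;
import io.opentracing.propagation.Format;
import io.opentracing.propagation.TextMapAdapter;

public class RabbitPropagationSketch {
    public static void main(String[] args) {
        // MockTracer stands in for the real Jaeger tracer in this sketch.
        Tracer tracer = new MockTracer(MockTracer.Propagator.TEXT_MAP);

        // --- Producer side: inject the span context into the headers ---
        Span span = tracer.buildSpan("publish-order").start();
        // Plain map standing in for the AMQP message headers.
        Map<String, String> headers = new HashMap<>();
        tracer.inject(span.context(), Format.Builtin.TEXT_MAP, new TextMapAdapter(headers));
        span.finish();
        // ...publish the message with these headers attached...

        // --- Consumer side: extract the context and continue the trace ---
        SpanContext upstream = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(headers));
        Span child = tracer.buildSpan("consume-order").asChildOf(upstream).start();
        System.out.println("same trace: " + child.context().toTraceId().equals(span.context().toTraceId()));
        child.finish();
    }
}
```

Doing it this way gives you full control over span creation, tags, and logs on both sides, which is what the starter hides from you.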
I'm trying to add tracing to Elasticsearch using
It seems like the request interceptor is passing the span via HttpContext, but by the time the response interceptor receives it, the span information has been lost.
I've created a TracingHttpClientConfigCallback object like
TracingHttpClientConfigCallback callback = new TracingHttpClientConfigCallback(BraveTracer.create(tracing))
where I'm using the brave-opentracing bridge to convert the Sleuth Tracing object to an OpenTracing tracer, which works fine. Somehow, though, the response interceptor is getting a NoopSpan object with an entirely new trace and span ID.
I have some Flink jobs which use Kafka as source and sink, and I want to add tracing so that any message consumed from or produced to Kafka is well traced. For that I'm using Kafka interceptors to intercept messages and log the trace ID, span ID, and parent trace ID, with
opentracing-kafka-client (v0.1.11) in conjunction with brave-opentracing (v0.35.1). The reason I'm using custom interceptors is that I need to log messages in a specified format.
After configuring the interceptors they are getting invoked, and they use the tracing information (from headers) coming from the upstream system and log it. But when it comes to producing a message back to Kafka, the tracing context is lost. For instance, consider the scenario below:
1) A message is put on Kafka by some REST service
2) The message is consumed by the Flink job; the interceptors kick in, use the tracing information from the headers, and log it
3) After processing, the message is produced by the Flink job back to Kafka
It works well up to step #2, but when it comes to producing the message, the tracing information from the previous step is not used: the outgoing record carries no header information, and hence an entirely new trace is produced.
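If it helps, the usual fix for this symptom is to explicitly extract the upstream SpanContext from the consumed record's headers, open a child span for the processing step, and inject that span's context into the headers of the outgoing record before producing it. A minimal sketch of that extract-then-inject chain, assuming plain `Map<String, String>` values in place of Kafka record `Headers` for brevity (in real code you would adapt `org.apache.kafka.common.header.Headers` to a TextMap carrier; opentracing-kafka-client ships helpers for that), with `MockTracer` standing in for the Brave-bridged tracer and made-up operation names:

```java
import java.util.HashMap;
import java.util.Map;

import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.mock.MockTracer;
import io.opentracing.propagation.Format;
import io.opentracing.propagation.TextMapAdapter;

public class FlinkKafkaPropagationSketch {
    public static void main(String[] args) {
        Tracer tracer = new MockTracer(MockTracer.Propagator.TEXT_MAP);

        // Step 1: headers of the record consumed from Kafka, as written
        // by the upstream REST service (plain map in place of Kafka Headers).
        Map<String, String> consumedHeaders = new HashMap<>();
        Span upstreamSpan = tracer.buildSpan("rest-publish").start();
        tracer.inject(upstreamSpan.context(), Format.Builtin.TEXT_MAP, new TextMapAdapter(consumedHeaders));
        upstreamSpan.finish();

        // Step 2: extract the upstream context inside the Flink job.
        SpanContext upstream = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(consumedHeaders));
        Span processSpan = tracer.buildSpan("flink-process").asChildOf(upstream).start();

        // Step 3: before producing, inject the current span's context into
        // the headers of the OUTGOING record -- this is the step that is
        // missing when an entirely new trace shows up downstream.
        Map<String, String> producedHeaders = new HashMap<>();
        tracer.inject(processSpan.context(), Format.Builtin.TEXT_MAP, new TextMapAdapter(producedHeaders));
        processSpan.finish();

        // Downstream consumer now sees the same trace ID.
        SpanContext downstream = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(producedHeaders));
        System.out.println("trace continued: " + downstream.toTraceId().equals(upstreamSpan.context().toTraceId()));
    }
}
```

In a Flink job the tricky part is carrying the extracted context from the source function to the sink (e.g. as a field on the record), since there is no thread-local active span across operators; but the missing inject into the produced record's headers is what makes the downstream trace start fresh.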