Adrian Cole
@adriancole
there's always a chance of a leak
the main thing is less chance; the try/finally API makes it easy to see when people make leaky stuff
e.g. ever seen something that should be used in a try/finally and carelessly isn't closed?
if an API isn't meant to be used with try/finally, it's very hard to see where the leak occurs
this is the technical reason why 2.x should have fewer leaks, and generally it does
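A toy illustration of the point about try/finally visibility. `Scope` here stands in for a span/scope handle; it is not Brave's actual API, just a sketch of the shape of the problem:

```java
// Toy illustration of why a try-with-resources style API makes leaks
// easy to spot in code review. "Scope" stands in for a span/scope handle.
class Scope implements AutoCloseable {
    static int open = 0;              // number of scopes not yet closed

    Scope() { open++; }

    @Override
    public void close() { open--; }
}

public class LeakDemo {
    static void careful() {
        // the close is visible right at the call site
        try (Scope s = new Scope()) {
            // ... traced work ...
        }
    }

    static void careless() {
        Scope s = new Scope();        // no try/finally: the missing close
        // ... traced work ...        // is obvious once you know the idiom
    }

    public static void main(String[] args) {
        careful();
        careless();
        System.out.println("unclosed scopes: " + Scope.open);
    }
}
```

With an API that isn't designed for try/finally, the `careless()` case looks identical to correct code, which is the hard-to-spot leak described above.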
Ucky.Kit
@uckyk
OK, I'll try upgrading to version 2.x and see if there are similar problems. Thank you very much!
Adrian Cole
@adriancole
sure 2.2.3 is out now, so try (it works with spring boot 2.2 and 2.3)
Ucky.Kit
@uckyk
got it,
thx!!!
Adrian Cole
@adriancole
np!
Marcin Grzejszczak
@marcingrzejszczak
with 1.3.x we had our own tracer implementation; in 2.x we've migrated to Brave, and lots of bugs have been fixed since then, so we absolutely advise migrating to the latest stable version
Veera93
@Veera93
Hi all,
I am relatively new to the Sleuth framework. I have a use case where the traceId needs to be sent back in the response header. I was hoping to achieve this using annotations or configuration. When I searched, it was done by adding filters (I also found a few old threads mentioning that sending the traceId in the response header was removed due to a potential security concern). In the latest implementation, are there any configurations/annotations for this, or would I have to write a trace filter? Any help on this is appreciated.
Adrian Cole
@adriancole
@Veera93 still the same
the tracing system does not mutate the response at the moment, so it can't write response headers
Saisurya Kattamuri
@saisuryakat
Hi
What's the difference between spring.sleuth.remote-fields and spring.sleuth.baggage.remote-fields?
In the Sleuth properties appendix it's given with baggage., whereas in the documentation it's without.
https://cloud.spring.io/spring-cloud-sleuth/reference/html/appendix.html
Which of these is correct?
Adrian Cole
@adriancole
the link you are pointing to is the version 3 docs
ah I see what you are saying
thanks. can you do a pull request to fix this?
Saisurya Kattamuri
@saisuryakat
Sure, I can
but I'm having trouble making baggage fields work
Is there any sample repo for reference?
I'm confused, especially about whether to just define the headers in application.yml and set the values in the controller, or to configure Brave (as shown in the Brave docs)
Adrian Cole
@adriancole
if you are doing what the properties can do, just use the properties
Brave is used for other things, even in Spring 2.5
Sleuth has things built in, so it's easier to use properties than Java config for the same result
I'm changing the example project for you, hold a sec
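For reference, a sketch of what "just use the properties" might look like in application.yml. Property names follow the Sleuth 3.x appendix linked above; in Sleuth 2.2.x the older `spring.sleuth.baggage-keys` style property may apply instead, and the field name `country-code` is only an example:

```yaml
spring:
  sleuth:
    baggage:
      remote-fields:        # fields propagated over the wire (e.g. as headers)
        - country-code
      correlation:
        fields:             # fields also copied into the MDC for logging
          - country-code
```

With this in place the values would be set per-request in application code rather than via Brave Java config.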
Saisurya Kattamuri
@saisuryakat
Okay
Adrian Cole
@adriancole
Saisurya Kattamuri
@saisuryakat
Thanks
Adrian Cole
@adriancole
just tested so :thumbsup:
Saisurya Kattamuri
@saisuryakat
Adrian Cole
@adriancole
:bow:
Templeton Peck
@LtTempletonPeck
Are there any working examples of integrating a Spring Boot project with AWS X-Ray? A quick googling says no. Do I just import the deps (spring-cloud-starter-zipkin / io.zipkin.aws:zipkin-reporter-xray-udp) and override the ZipkinAutoConfiguration.REPORTER_BEAN_NAME with my own XRayUDPReporter bean? There doesn't seem to be any auto-config for Zipkin X-Ray, but I'm happy to be wrong.
Marcin Grzejszczak
@marcingrzejszczak
@devinsba don't you have such a sample, or am I mistaken?
Adrian Cole
@adriancole
@anuraaga fyi
@LtTempletonPeck so there's no auto-config I'm aware of (e.g. if there were, it would be in spring-cloud-aws, and I think we'd have heard of it)
there's not much auto-config for X-Ray anyway, as the daemon is on localhost; but if you wanted an auto-config, I suspect an issue on spring-cloud-aws would make the most sense, as something similar to this exists in spring-cloud-gcp
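An untested sketch of the bean override @LtTempletonPeck described, assuming Sleuth 2.2.x with spring-cloud-starter-zipkin and io.zipkin.aws:zipkin-reporter-xray-udp on the classpath; the bean name `"zipkinReporter"` is `ZipkinAutoConfiguration.REPORTER_BEAN_NAME`:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import zipkin2.Span;
import zipkin2.reporter.Reporter;
import zipkin2.reporter.xray_udp.XRayUDPReporter;

@Configuration
public class XRayConfig {

    // Replaces the default Zipkin reporter so spans go to the
    // X-Ray daemon (by default on localhost:2000) instead.
    @Bean("zipkinReporter")
    Reporter<Span> xrayReporter() {
        return XRayUDPReporter.create();
    }
}
```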
eyalringort
@eyalringort
Marcin Grzejszczak
@marcingrzejszczak
I've answered
Veera93
@Veera93
@adriancole Thanks for your reply. So using a trace filter is the only way I can get the traceId into the response header, then?
Marcin Grzejszczak
@marcingrzejszczak
You can use a filter and retrieve tracing info before sending out the response
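An untested sketch of the filter Marcin describes, assuming a servlet stack with Sleuth 2.x on Brave; the header name `X-Trace-Id` and the class name are examples, not fixed conventions:

```java
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import brave.Span;
import brave.Tracer;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
@Order(1) // run early so the header is set before the response is committed
public class TraceIdResponseFilter extends OncePerRequestFilter {

    private final Tracer tracer;

    public TraceIdResponseFilter(Tracer tracer) {
        this.tracer = tracer;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain)
            throws ServletException, IOException {
        // copy the current trace id into a response header, if a span is active
        Span span = tracer.currentSpan();
        if (span != null) {
            response.setHeader("X-Trace-Id", span.context().traceIdString());
        }
        chain.doFilter(request, response);
    }
}
```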
eyalringort
@eyalringort
@marcingrzejszczak I added a sample project for my SO question
Marcin Grzejszczak
@marcingrzejszczak
:+1: thanks
krraghavan
@krraghavan
My Spring WebFlux-based Boot application seems to post traces to Zipkin properly, but neither the logs nor the controller methods are getting the trace and span IDs. It's a pretty basic configuration with the Sleuth starter and Zipkin starter. The trace web filter is getting invoked correctly and recording the initial span. I realize this is pretty vague, but I'm wondering if I'm missing something basic. What information should I provide here, if this is not obvious?
Marcin Grzejszczak
@marcingrzejszczak
if you're using a Controller with WebFlux and you log things out with a logger, then that should work
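One thing worth checking in this situation is that the log pattern actually prints the MDC fields Sleuth populates. Sleuth normally sets this automatically via `logging.pattern.level`; a sketch of the explicit form is below. Note the MDC keys differ by version (`X-B3-TraceId`/`X-B3-SpanId` in Sleuth 2.x, `traceId`/`spanId` in 3.x):

```properties
# application.properties sketch (Sleuth 2.x MDC key names assumed)
logging.pattern.level=%5p [${spring.application.name:},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-}]
```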
Templeton Peck
@LtTempletonPeck

Hi, I now have traces being sent to X-Ray, but the names displayed in the X-Ray console are not being set correctly for Feign and Spring Cloud Stream Kafka. For Feign they use the HTTP method; here is an example where the name is get but annotations.http_path has the path:

                "subsegments": [
                    {
                        "id": "f945f28acb910a16",
                        "name": "get",
                        "start_time": 1591611761.254306,
                        "end_time": 1591611761.583249,
                        "http": {
                            "request": {
                                "method": "GET"
                            }
                        },
                        "annotations": {
                            "http_path": "/device/v1",
                            "operation": "get"
                        },
                        "namespace": "remote"
                    },
                    ...

For Spring Cloud Stream Kafka I was hoping it would resolve the destination of the channel, but at the moment it uses the binder as the name, and the channel is in the annotations. It doesn't seem to have resolved the destination.

                    {
                        "id": "aeab33b8a6b2c004",
                        "name": "kafka",
                        "start_time": 1591611763.963422,
                        "end_time": 1591611763.968351,
                        "annotations": {
                            "channel": "output"
                        },
                        "namespace": "remote"
                    }

Do you know if it's possible to override the name? So far I've tried @SpanName and @PostMapping(name = "my-name") but neither works.
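An untested sketch of one way to influence client span names under Sleuth 2.2.x / Brave: register a custom `HttpClientParser` bean and override `spanName`. Whether the X-Ray converter then uses this as the segment name is an assumption here, not something the thread confirms:

```java
import brave.http.HttpAdapter;
import brave.http.HttpClientParser;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ClientSpanNameConfig {

    @Bean
    HttpClientParser httpClientParser() {
        return new HttpClientParser() {
            @Override
            protected <Req> String spanName(HttpAdapter<Req, ?> adapter, Req req) {
                // name client spans by path instead of HTTP method,
                // falling back to the default when no path is available
                String path = adapter.path(req);
                return path != null ? path : super.spanName(adapter, req);
            }
        };
    }
}
```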