Dan Cohen-Smith
@dancohensmith
i am doing context mapping on the ReceiverRecord and then creating a custom Mono with a CoreSubscriber that drives the span
but i can't seem to get the logs right and i'm not sure what i'm doing wrong.
i can make the logs work correctly if i just use the subscriberContext() methods and then hold onto a static reference to the span and close it in the onTerminate callback in reactor, but obviously that's totally wrong.
Dan Cohen-Smith
@dancohensmith
i feel i am very close but just missing something simple.
Marcin Grzejszczak
@marcingrzejszczak
@dancohensmith if you don't do onEach operator instrumentation you might not be able to do it at all. With the onLast operator we assume that you might not be able to have the logs set up properly
Dan Cohen-Smith
@dancohensmith
it is using on each
i haven't changed that
@marcingrzejszczak Implemented my own MonoOperator
and built my own CoreSubscriber
in there i started the span etc
Marcin Grzejszczak
@marcingrzejszczak
I guess you should talk to either @smaldini or @bsideup about these things then since they are the real experts
Dan Cohen-Smith
@dancohensmith
okay, i'll reach out to stephane, i met him at hsbc many years ago now
there's nothing i need to do to hook a span to the slf4j MDC or anything?
i just use the KafkaTracing.nextSpan()
and then start it.
Marcin Grzejszczak
@marcingrzejszczak
if you're using our strategy to wrap the onEach operator you should get the MDC context set up
that's what we do: we essentially retrieve the span from the context and do maybeScope, which puts stuff into the MDC
and then clears it
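The scope-passing behaviour described here can be sketched in plain Java, with a ThreadLocal standing in for Brave's current trace context and the SLF4J MDC. This is an illustrative sketch only: the class and method names below are invented for the example and are not Sleuth's actual internals.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Sketch of the scope-passing idea: capture the current trace context at
// subscription time and restore it around every downstream signal, even
// when that signal arrives on a different thread.
public class ScopePassingSketch {

    // stand-in for Brave's current trace context / the SLF4J MDC
    static final ThreadLocal<String> CURRENT_TRACE = new ThreadLocal<>();

    // wraps a downstream callback so the captured context is set before the
    // signal is delivered and cleared (restored) afterwards
    static Consumer<String> scopePassing(Consumer<String> downstream) {
        String captured = CURRENT_TRACE.get(); // capture at "subscription"
        return value -> {
            String previous = CURRENT_TRACE.get();
            CURRENT_TRACE.set(captured);       // put into the "MDC"
            try {
                downstream.accept(value);
            } finally {
                if (previous == null) {
                    CURRENT_TRACE.remove();    // and then clear it
                } else {
                    CURRENT_TRACE.set(previous);
                }
            }
        };
    }

    // delivers a signal on another thread and reports which trace id the
    // downstream callback observed there
    static String observedTraceOnOtherThread(boolean wrapped) throws Exception {
        CURRENT_TRACE.set("trace-123");        // "span" started on this thread
        CompletableFuture<String> observed = new CompletableFuture<>();
        Consumer<String> downstream =
                v -> observed.complete(String.valueOf(CURRENT_TRACE.get()));
        Consumer<String> signal = wrapped ? scopePassing(downstream) : downstream;
        Thread other = new Thread(() -> signal.accept("record"));
        other.start();
        other.join();
        CURRENT_TRACE.remove();
        return observed.get();
    }
}
```

Without the wrapper, the callback on the other thread sees an empty context, which matches the symptom of logs carrying the scheduler's traceId instead of the record's.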
Dan Cohen-Smith
@dancohensmith
yeah i am mutating the context
but it's never being called in my core subscriber
very annoying
Marcin Grzejszczak
@marcingrzejszczak
:grimacing:
Dan Cohen-Smith
@dancohensmith
i am basically doing receiver.receive().concatMap(record -> new TracingMono(Mono.just(record), kafkaTracing).doOnNext(....)).subscribe();
i can see the span being created and sent to jaeger
but the logs are wrong
it's the traceId of the trace being created by the scheduler that kafka is using
in the stack trace i can see the SpanPassingScopeOperator thingy you use
my theory was that if i can update the context then all the rest will work
i didn't want to duplicate what is already in the scopePassingOperator thing
Dan Cohen-Smith
@dancohensmith
by default you use the onEach operator, don't you?
Marcin Grzejszczak
@marcingrzejszczak
yes
Dan Cohen-Smith
@dancohensmith
i thought so.
Dan Cohen-Smith
@dancohensmith
managed to get it working now, sort of
the problem now is if i subscribe on a different thread it changes the span briefly, then it's back to normal
Dan Cohen-Smith
@dancohensmith
@bsideup
Marcin Grzejszczak
@marcingrzejszczak
@/all Hoxton.RELEASE with Spring Cloud Sleuth 2.2.0.RELEASE is out (https://spring.io/blog/2019/11/28/spring-cloud-hoxton-released), please check it out because we've introduced A LOT of new features :) Also, we're more than happy to receive your feedback on the documentation changes.
Ziemowit
@Ziemowit

Hmm, for sure after the upgrade, when I try to download the Swagger API documentation via the gateway I get:

org.springframework.core.io.buffer.DataBufferLimitException: Exceeded limit on max bytes to buffer : 262144
    at org.springframework.core.io.buffer.LimitedDataBufferList.raiseLimitException(LimitedDataBufferList.java:101)
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 
Error has been observed at the following site(s):
    |_ checkpoint ⇢ Body from  [DefaultClientResponse]
    |_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
    |_ checkpoint ⇢ org.springframework.cloud.sleuth.instrument.web.TraceWebFilter [DefaultWebFilterChain]
    |_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
    |_ checkpoint ⇢ HTTP GET "/foo/bar/v2/api-docs" [ExceptionHandlingWebHandler]

But it's probably not Sleuth's fault. Will try to debug.

Marcin Grzejszczak
@marcingrzejszczak
please do
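The 262144-byte figure in that exception matches WebFlux's default 256 KB in-memory codec buffer. If the gateway really is buffering a large api-docs response body, raising that limit is a common workaround - a sketch, assuming Spring Boot 2.2+, where this property exists; it sidesteps rather than explains the underlying cause:

```yaml
# possible workaround for DataBufferLimitException when proxying large bodies:
# raise WebFlux's default 256 KB codec buffer (tune the size to your payloads)
spring:
  codec:
    max-in-memory-size: 2MB
```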
Dan Cohen-Smith
@dancohensmith
@marcingrzejszczak found a bug with reactive tracing.
if you publish or subscribe on another scheduler and the Mono errors, the span isn't marked as failed.
Marcin Grzejszczak
@marcingrzejszczak
please report an issue with a sample that reproduces the problem
of course, I don't even know which version of Sleuth you're using, so maybe the bug was already fixed
Dan Cohen-Smith
@dancohensmith
okay, sure
my personal laptop is ruined at the moment, which is slowing me down
i also implemented some tracing for reactive kafka that i want to share, but same issue
abkura
@abkura
getting the warning below when shutting down the server - using KinesisSender
[WARNING]
java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check (Asserts.java:34)
at org.apache.http.pool.AbstractConnPool.lease (AbstractConnPool.java:196)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection (PoolingHttpClientConnectionManager.java:268)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke (ClientConnectionManagerFactory.java:76)
at com.amazonaws.http.conn.$Proxy152.requestConnection (Unknown Source)
at org.apache.http.impl.execchain.MainClientExec.execute (MainClientExec.java:176)
at org.apache.http.impl.execchain.ProtocolExec.execute (ProtocolExec.java:186)
at org.apache.http.impl.client.InternalHttpClient.doExecute (InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute (CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute (CloseableHttpClient.java:56)
at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute (SdkHttpClient.java:72)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest (AmazonHttpClient.java:1297)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper (AmazonHttpClient.java:1113)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute (AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer (AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute (AmazonHttpClient.java:726)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500 (AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute (AmazonHttpClient.java:668)
at com.amazonaws.http.AmazonHttpClient.execute (AmazonHttpClient.java:532)
at com.amazonaws.http.AmazonHttpClient.execute (AmazonHttpClient.java:512)
at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke (AmazonKinesisClient.java:2809)
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke (AmazonKinesisClient.java:2776)
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke (AmazonKinesisClient.java:2765)
at com.amazonaws.services.kinesis.AmazonKinesisClient.executePutRecord (AmazonKinesisClient.java:2013)
at com.amazonaws.services.kinesis.AmazonKinesisClient.putRecord (AmazonKinesisClient.java:1984)
at zipkin2.reporter.kinesis.KinesisSender$KinesisCall.doExecute (KinesisSender.java:231)
at zipkin2.reporter.kinesis.KinesisSender$KinesisCall.doExecute (KinesisSender.java:222)
at zipkin2.Call$Base.execute (Call.java:380)
at zipkin2.reporter.AsyncReporter$BoundedAsyncReporter.flush (AsyncReporter.java:285)
at zipkin2.reporter.AsyncReporter$Flusher.run (AsyncReporter.java:354)
at java.lang.Thread.run (Thread.java:748)
Marcin Grzejszczak
@marcingrzejszczak
You should report it to the Zipkin team
Since it's related to the Zipkin reporter
abkura
@abkura
sure