Joe Martinez
@JoePercipientAI
I'm on Linux
Ivan Topolnjak
@ivantopo
@gilshoshan17_twitter it looks like you have a $ symbol in your filters! if it is there by mistake please remove it or if you actually wanted to have a regex in the filter make sure that the pattern starts with regex: and then the expression
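For anyone else hitting this, a filter with a regex pattern looks roughly like this in the configuration. The filter name and pattern below are made up for illustration, and the exact config path depends on the module:

```hocon
# Hypothetical filter block. Patterns are globs by default, where '$'
# has no special meaning; to use a regular expression, the pattern
# must be prefixed with "regex:".
my-filter {
  includes = [ "regex:pool-[0-9]+-worker-.*" ]
  excludes = [ ]
}
```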
Mladen Subotić
@mladens
@JoePercipientAI i just ran an app with kanela in debug and it produced kanela-agent.2019-09-12 17-37-19.log in the project root dir
Joe Martinez
@JoePercipientAI
@mladens Thanks. I just verified that if I run my non-Spark test app with that configuration, I DO get the log file in the project root dir. But not for my Spark app
Enrico Benini
@ebenini-mdsol

Hi there,
I'm attaching a simple project we crafted at work that shows how we get different output depending on the appender type (FileAppender vs AsyncAppender) in logback.

If you take the project and do an sbt run, changing the value of appender-ref (line 25) first to FILE and then to ASYNCFILE, you can see that log/app.log stays empty in one case while in the other it prints the span operationName.

We guess it could be due to a bug in kamon-logback or the kanela-agent.
Could you please have a look at the code and let us know?
Thank you so much

ps: I will also raise an issue on github for traceability ;)
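For readers without the attached project, the comparison described above looks roughly like this in logback.xml; this is a generic sketch, not the project's actual file:

```xml
<configuration>
  <!-- Synchronous appender: writes directly to log/app.log -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>log/app.log</file>
    <encoder><pattern>%d %msg%n</pattern></encoder>
  </appender>

  <!-- AsyncAppender wraps FILE and hands events to a background thread -->
  <appender name="ASYNCFILE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE"/>
  </appender>

  <root level="INFO">
    <!-- the appender-ref that gets switched between FILE and ASYNCFILE -->
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```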
Enrico Benini
@ebenini-mdsol
gmim
@gilshoshan17_twitter
@ivantopo hi Ivan. I don't have any $ sign in my filters. If my own custom thread pool is not one of the common thread pool configurations, the Name field will be null (it will never be set), and then the InstrumentNewExecutorServiceOnAkka.around method will get a null name, and from there the road to a NullPointerException is short.
Thomas Ploch
@tPl0ch
Could it be that the Kamon Datadog reporter is sending traces and spans not as strings? In my logs I have hex number strings in dd.trace_id and dd.span_id, but the traces use large integers, so no correlation can be made with erroneous traces.
Thomas Ploch
@tPl0ch
I mean logs and traces cannot be correlated.
Thomas Ploch
@tPl0ch
Yup, so kamon-datadog converts the hex string to a BigInt while kamon-logback does not. Shouldn't these be aligned? Meaning the string representation of a trace or span ID should be consistent over all kamon instrumentations.
Thomas Ploch
@tPl0ch
I am thinking of adding a configuration to kamon-datadog to not convert to BigDecimal but just use hex number strings in order to align log correlation, @ivantopo any thoughts on this?
Hmm OK, so the trace endpoint in Datadog uses 64-bit unsigned integers for IDs according to the spec, so adding the conversion to logback is probably the way to go. Is there a possibility to convert values in logback directly?
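For concreteness, the conversion under discussion, between a 16-character hex span/trace ID and the unsigned 64-bit decimal form Datadog's trace intake expects, can be sketched in plain Scala. The helper names are made up; this is not Kamon's actual code:

```scala
// Sketch: converting a 16-char hex span/trace ID into the unsigned
// 64-bit decimal string Datadog's trace API expects, and back.
def hexToUnsignedDecimal(hexId: String): String =
  java.lang.Long.toUnsignedString(java.lang.Long.parseUnsignedLong(hexId, 16))

def unsignedDecimalToHex(decimalId: String): String =
  // %x formats a Long via its two's-complement bits, i.e. as unsigned hex
  f"${java.lang.Long.parseUnsignedLong(decimalId)}%016x"
```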
Sakis Karagiannis
@AlterEgo7

hello everyone, I am trying to use kamon-bundle along with a datadog exporter, on an akka-http app. However I’m getting the following error:

java.lang.NoClassDefFoundError: Could not initialize class scala.concurrent.Future$
    at akka.http.impl.util.StreamUtils$CaptureTerminationOp$.<init>(StreamUtils.scala:281)
    at akka.http.impl.util.StreamUtils$CaptureTerminationOp$.<clinit>(StreamUtils.scala)
    at akka.http.scaladsl.model.HttpEntity$.captureTermination(HttpEntity.scala:672)

Has anyone seen this before?

Thomas Ploch
@tPl0ch
Yes, me. Just recently. It happens when Kamon.init() is not the first expression that is evaluated when the application starts.
@AlterEgo7 in my case it was because our Main object was extending App which initializes Futures already. You can just create a trait KamonInit and extend this as the first member instead of App.
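The fix relies on trait initialization order: a trait mixed in before App has its constructor body run before anything in the App body executes. A self-contained sketch of the ordering, with Kamon.init() stubbed out as a log entry since the real call needs kamon-core on the classpath:

```scala
import scala.collection.mutable

object InitLog { val events: mutable.Buffer[String] = mutable.Buffer.empty }

// Stand-in for the KamonInit trait described above; in a real app the
// constructor body would call kamon.Kamon.init() instead of logging.
trait KamonInit {
  InitLog.events += "kamon-init"
}

// Mixing KamonInit in before App guarantees its constructor runs
// before the App body (where Futures may already be created).
object Main extends KamonInit with App {
  InitLog.events += "app-body"
}
```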
Sakis Karagiannis
@AlterEgo7
@tPl0ch thanks a lot, I saw it in the documentation now as well. It might make sense for that exact phrase to be in bold in the site documentation as the compiler error is completely unrelated and confusing
crow-fff
@crow-fff

It happens when Kamon.init() is not the first expression that is evaluated when the application starts.

And because of that Kamon.init(customConfig) doesn't work either.

Ivan Topolnjak
@ivantopo
@tPl0ch as far as I understand the fact that trace IDs are converted to numbers is because of the format received by the DD agent
if you look here: https://docs.datadoghq.com/api/?lang=python#send-traces it doesn't seem like we could send the HEX string on those fields
crow-fff
@crow-fff
@ivantopo and others, can you look at this PR: kamon-io/kamon-akka-http#64
We are looking to add Kamon to our project.
But it turned out that another team uses an HTTP client which doesn't support chunked entities for some reason, so the instrumentation breaks compatibility for us.
Additionally, the change avoids an unnecessary flow materialization for strict entities, so it's important for performance too.
Cheng Wei
@weicheng113
@ivantopo have you looked at the sample metric output I sent you?
Jan Ypma
@jypma

We send our metrics over UDP/statsd to a remote datadog agent (don't ask). That remote agent is resolved over DNS only once. Since the whole thing runs in kubernetes, the target agent may change IP addresses. Currently kamon-datadog doesn't pick that up, since it only resolves the target hostname once (see https://github.com/kamon-io/kamon-datadog/blob/master/src/main/scala/kamon/datadog/DatadogAgentReporter.scala#L188)

Would it be OK to create a new InetSocketAddress every few packets, to force a fresh DNS lookup that way?

danischroeter
@danischroeter
@jypma it would make sense to adapt the code to resolve on every snapshot (once per minute...), see override def reportPeriodSnapshot(snapshot: PeriodSnapshot): Unit, but better not every few packets since that could seriously hurt performance...
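The underlying JDK detail here: java.net.InetSocketAddress resolves its hostname once, at construction time, so forcing a fresh DNS lookup means constructing a new instance. A minimal sketch, where the wrapper class is hypothetical:

```scala
import java.net.InetSocketAddress

// An InetSocketAddress resolves DNS once, when constructed.
// Re-resolving per period snapshot therefore means building a fresh
// instance each time instead of caching one for the process lifetime.
class AgentAddress(host: String, port: Int) {
  def freshlyResolved(): InetSocketAddress = new InetSocketAddress(host, port)
}
```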
Dinesh Narayanan
@ndchandar

Hi,
I am looking for suggestions on how to do this in Kamon + Akka Http

I need to generate a correlationId for every input request and I have multiple routes. Doing Kamon.runWithContextEntry(key, <random-uuid>) in every route introduces more boilerplate. My initial thinking was to define a new directive, but it seems the context is getting lost as there is no future in the directive. Any suggestions on how to do this in a nicer way?

Ivan Topolnjak
@ivantopo
Good morning folks!
Ivan Topolnjak
@ivantopo
@crow-fff thanks a lot for that PR!
Mladen Subotić
@mladens
Hey @ndchandar, that's exactly what the kamon-akka-http module does, did you try it out?
Ivan Topolnjak
@ivantopo
@anubhav21sharma_gitlab hey there! Just wanted to mention that the issue you were having with using the Java API for the Akka HTTP client has now an open PR with a fix: kamon-io/kamon-akka-http#66
Ivan Topolnjak
@ivantopo
@jypma I'm thinking exactly the same as @danischroeter said: resolving a new one on each period snapshot should be enough
Ivan Topolnjak
@ivantopo
besides kamon-io/kamon-akka-http#66 mentioned above, we have kamon-io/kamon-akka-http#62 going around and its ready to be merged
I'll be merging those two and releasing at some point during the weekend unless any blockers appear
Anubhav Sharma
@anubhav21sharma_gitlab
@ivantopo Great. Thank you for the quick fix Ivan.
Sergey Morgunov
@ihostage
:+1:
Anubhav Sharma
@anubhav21sharma_gitlab
I have one more question. Is there a way to log details of all spans on the stdout as well apart from sending them to the jaeger/zipkin collector?
Ivan Topolnjak
@ivantopo
nope
but if you wanted to create a reporter that just printlns everything it would be very simple
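A sketch of such a println reporter. The trait and case class below are local stand-ins that only mirror the shape of kamon-core's SpanReporter and finished spans so the example is self-contained; in a real app you would extend the actual Kamon reporter trait and register it with Kamon instead:

```scala
// Local stand-in for a finished span (not Kamon's Span.Finished).
final case class FinishedSpan(traceId: String, operationName: String)

// Local stand-in mirroring the shape of Kamon's span reporter trait.
trait SpanReporter {
  def reportSpans(spans: Seq[FinishedSpan]): Unit
  def stop(): Unit = ()
}

// The "just printlns everything" reporter mentioned above.
class PrintlnSpanReporter extends SpanReporter {
  override def reportSpans(spans: Seq[FinishedSpan]): Unit =
    spans.foreach(s => println(s"[span] trace=${s.traceId} operation=${s.operationName}"))
}
```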
Dinesh Narayanan
@ndchandar

Hey @ndchandar, that's exactly what the kamon-akka-http module does, did you try it out?

@mladens Today Kamon Akka Http only provides operationName (in TracingDirectives). What I need is to create a new entry for every new route request. I created a sample gist to describe the scenario a bit: https://gist.github.com/ndchandar/15a81f904fbf21b2149346c6d120fab7

As you can see there, the code is not very DRY due to Kamon.runWithContextEntry(CorrelationIdCodec.correlationKey, IdGenerator.newId) appearing in multiple places.

I wanted to check a) if there is a better way to do this, and b) if I happen to need that context entry in my own internal custom directive (as in the second route, someCrudApi2), how to do it in a more succinct way without duplicating code

Dinesh Narayanan
@ndchandar
Hello,
I need some help on this one. I have correlationId as a context entry, but for some reason it doesn't get propagated to our log files (the values come out as null). I have attached a sample gist: https://gist.github.com/ndchandar/cdcbbbd73a786f487d066417dd5a1913.
I can confirm that I can see the correlationId set correctly in the context.
What do you think I am missing?
Ivan Topolnjak
@ivantopo
hey @ndchandar, may I ask whether you have an explicit need to use a dedicated entry for the correlationId? If not, I would suggest just using the traceID as the correlation ID
Dinesh Narayanan
@ndchandar
The standard we have in our group is that if we have correlationId already set in an incoming header, we need to use that as our loggingId when logging. Many micro-services call our services
Dinesh Narayanan
@ndchandar
Update: If I keep correlationId in context tags (instead of context entry) logback logs ok.
Ivan Topolnjak
@ivantopo
hey @ndchandar, I was looking into your gist and first of all, clever trick, mixing the entry readers and a context tag :D! I would just make one suggestion there: stick to only one way of transporting the correlation ID, so that it is either a Context tag OR a Context entry. If you need to pass it around several microservices that already have code for that, I would suggest you use Context tags, since they will transparently propagate through all Kamon-enabled services.. if you want to use a Context entry, then you would need to share the entry reader/writer implementation across all of them, and that can become cumbersome
then you have just one issue left: generating the correlationId when it's not present.. here, if you are using Akka HTTP, I would suggest just creating one directive at the top level and wrapping the entire routing tree with it
if you want to stick to the entry reader/writer hack, just change this line: https://gist.github.com/ndchandar/cdcbbbd73a786f487d066417dd5a1913#file-corrrelationidcodec-scala-L20 so that instead of adding a Context entry it will read whether the correlationId tag is present and if not, generate a new one
(I checked the source behind HTTP propagation and the Context passed into that function will have all the tags that have been read so far! And there is no hard constraint that an entry reader must add anything to the context, or that when it adds something it has to be an entry; you can just use it to write the Context tag if necessary)
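The "generate only when absent" step described above can be sketched with a plain Map standing in for Kamon's Context tags; the tag name and helper object are illustrative, not Kamon API:

```scala
import java.util.UUID

// Sketch: keep an incoming correlationId tag if present, otherwise
// generate a fresh one. A Map stands in for Kamon's Context tags.
object CorrelationIds {
  val Tag = "correlationId"

  def ensure(tags: Map[String, String]): Map[String, String] =
    if (tags.contains(Tag)) tags
    else tags + (Tag -> UUID.randomUUID().toString)
}
```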
Srepfler Srdan
@schrepfler
hi guys, the kamon zipkin module hasn't been published for 2.0.1
io.kamon#kamon-zipkin_2.13;2.0.1: not found
Ivan Topolnjak
@ivantopo
hey @schrepfler, there was no need to upgrade the module, it will work just fine!
we are only aligning around the major version number