Ivan Topolnjak
@ivantopo
and the cross-process Context propagation that you can find here: https://github.com/kamon-io/Kamon/blob/master/kamon-core/src/main/scala/kamon/ContextPropagation.scala
Wojtek Pituła
@Krever
great, thank you!
Ivan Topolnjak
@ivantopo
it all boils down to attaching a Context to the right "event" in the system and then running Kamon.runWithContext(...) in the right places :)
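A minimal sketch of that pattern, assuming the Kamon 2.x API (the "user-id" key is made up for illustration; this needs kamon-core on the classpath):

```scala
import kamon.Kamon
import kamon.context.Context

object RunWithContextSketch extends App {
  // A hypothetical Context key; any Context.Key[T] works the same way.
  val UserID = Context.key[String]("user-id", "unknown")

  val ctx = Context.of(UserID, "user-42")

  // Everything executed inside this block sees `ctx` as the current Context,
  // so it is what gets propagated across process boundaries from here.
  Kamon.runWithContext(ctx) {
    println(Kamon.currentContext().get(UserID)) // "user-42"
  }
}
```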
Diego Parra
@dpsoft
@Krever you can take a look: https://github.com/bogdanromanx/kamon-monix
Wojtek Pituła
@Krever
@dpsoft thanks, I have seen it but didn't have time to look at it yet :)
Joe Martinez
@JoePercipientAI
@dpsoft I posted my simple test project to Github. Thanks for any help you can provide. https://github.com/jmartine2/joekamontest
Thomas Ploch
@tPl0ch
Hey, so I have a question regarding Datadog tagging with the DataDog Agent Reporter. In https://github.com/kamon-io/kamon-datadog/blob/master/src/main/resources/reference.conf there is a section for environment tags, but how can I add other tags that are specific to our organisational setup? I haven't found anything in the documentation.
Diego Parra
@dpsoft
@JoePercipientAI take a look: jmartine2/joekamontest#1
Joe Martinez
@JoePercipientAI
@dpsoft Thanks, I'll give that a try!
Joe Martinez
@JoePercipientAI
@dpsoft Thank you so much! That worked beautifully! I had read the following on the Kamon configuration docs: "All Kamon modules that need configuration ship with a reference.conf file where default settings are contained and you are free to override any of those values by supplying your own in your application.conf file." So it seemed to me like I could just override individual values in application.conf. I didn't realize that the Maven assembly plugin would do otherwise.
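The usual fix is to make the fat-jar build concatenate every reference.conf on the classpath instead of keeping only one of them. With the maven-shade-plugin that looks roughly like this (a sketch; plugin version and the surrounding build configuration are omitted):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <!-- Append all reference.conf files together so that no module's
           default settings are dropped from the assembled jar. -->
      <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
        <resource>reference.conf</resource>
      </transformer>
    </transformers>
  </configuration>
</plugin>
```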
Joe Martinez
@JoePercipientAI
Does anyone have experience getting the Kanela agent to work with Apache Spark executors? I am passing --conf "spark.executor.extraJavaOptions=-javaagent:kanela-agent-1.0.1.jar" to the spark-submit command, but the Kanela banner does not print, nor does my failed-statement processor get called when there is a SQL error.
Joe Martinez
@JoePercipientAI
Update: I got the Kanela banner to display in the console of my Spark app (I had forgotten to call Kamon.init). However, my failed-statement processor is still not getting called when there is a SQL error. I'm thinking that maybe the Kamon/Kanela initialization is not taking effect on the Spark executors (as that is where the JDBC calls are made). Any ideas?
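One thing worth double-checking in this situation: the agent jar has to exist at the given path on every executor node, not just on the driver. A common pattern (paths and names here are illustrative) is to ship the jar with --files so the relative path resolves inside each executor's working directory:

```shell
spark-submit \
  --files /local/path/kanela-agent-1.0.1.jar \
  --conf "spark.driver.extraJavaOptions=-javaagent:/local/path/kanela-agent-1.0.1.jar" \
  --conf "spark.executor.extraJavaOptions=-javaagent:kanela-agent-1.0.1.jar" \
  --class com.example.MySparkApp \
  my-spark-app.jar
```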
Cheng Wei
@weicheng113
[image: Kamon status page screenshot]
@ivantopo status page above. I disabled all the modules except Akka Instrumentation for testing. For the filtering logic: what if an actor path is neither in "includes" nor in "excludes"? Will its metrics be included? I am using a custom reporter to play with the metrics at the moment.
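For reference, Kamon's pattern filters generally accept a name only when it matches something in includes and nothing in excludes, so a path that appears in neither list is not tracked. A configuration sketch, assuming the Kamon 2.x Akka instrumentation layout (the patterns are illustrative):

```
kamon.instrumentation.akka.filters {
  actors.track {
    # Tracked only if the path matches `includes` AND does not match
    # `excludes`; a path in neither list is therefore not tracked.
    includes = [ "my-system/user/worker-*" ]
    excludes = [ "my-system/user/worker-debug" ]
  }
}
```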
Cheng Wei
@weicheng113
@ivantopo sent you a private message with a metric log sample.
Mladen Subotić
@mladens
hey @JoePercipientAI, try setting kanela.debug-mode = true and check the logs to see whether the Statements are getting picked up by Kanela; it could be that it's getting loaded before Kamon.init()
Thomas Ploch
@tPl0ch
Does the Akka remote Kamon module support Artery? I don't see traces being joined on other nodes.
Ivan Topolnjak
@ivantopo
we didn't get around to that just yet :/
Thomas Ploch
@tPl0ch
@ivantopo so the only option would be switching back to netty for remoting until Artery is supported, correct?
Peter Nerg
@pnerg

You'll have to excuse me if my questions have been asked before, but I'm not a big fan of Gitter as a medium for Q&A due to the lack of structure and searchability.
Anyway, I'm looking at moving from Kamon 1.x to 2.x but face a few issues.
I have a few customisations I'm trying to lift over.

First off, kamon.context.Codecs has disappeared from kamon-core.
So how does one create a custom codec now?
The documentation still points to the trait ForEntry, but that seems to have disappeared as well.

I also have a custom implementation of kamon.akka.http.AkkaHttp.OperationNameGenerator, but that trait seems to be gone as well.

There's no hint in the migration guide either.

Samuel
@garraspin
Hi all! Anyone busy with upgrading kamon-http4s to kamon2?
Ivan Topolnjak
@ivantopo
@tPl0ch for now, yeah, that's the only way to go! although, PRs would be welcome! also, as far as I remember, we didn't instrument it at the time because there weren't any people asking for it; I don't remember there being any technical impediments to making it happen
Thomas Ploch
@tPl0ch
@ivantopo If you can point me to the relevant parts of the code I could try making a PR happen :)
Ivan Topolnjak
@ivantopo
awesome! the interesting stuff for context propagation on remote is in two places:
I would have thought that the AkkaPduProtobufCodec instrumentation would have been enough, unless the Artery implementation uses a different way of packaging the messages before being sent
the first thing would be figuring out the difference between the sequence of events for sending a message with the Netty transport vs Artery, and then probably just reusing some of the logic we already have there
it might be that either the code path goes in a completely different direction, or that it just detours in some parts and the Context is being lost
it will be hard, but it will be fun!
@pnerg I'm sorry about the docs, Peter.. still didn't get around to updating them :(
updated a lot of them, but not all
Ivan Topolnjak
@ivantopo
there is a new concept of "propagation" in Kamon: the old Codecs.ForEntry is what would now be a Propagation.EntryReader[Medium] or Propagation.EntryWriter[Medium], and the mediums are either HTTP header readers/writers or ByteStream readers/writers
probably a good starting point is to take a look at the Span propagation: https://github.com/kamon-io/Kamon/blob/master/kamon-core/src/main/scala/kamon/trace/SpanPropagation.scala
there are 3 implementations there:
  • B3 and B3Single which use HTTP headers
  • Colfer which uses ByteStreams
also, just to mention it: we found several cases in which people were creating their own codecs to propagate a few key/value pairs, and the whole point of the new Context tags is to enable that use case without users having to write a custom codec. Maybe just migrating to Context tags would work for you!
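As a sketch of what a migrated codec might look like under the Kamon 2.x propagation API (the header name and class names here are made up):

```scala
import kamon.context.{Context, Propagation}
import kamon.context.HttpPropagation.{HeaderReader, HeaderWriter}

object RequestOriginPropagation {
  val RequestOrigin = Context.key[String]("request-origin", "unknown")

  // Reads/writes a hypothetical "X-Request-Origin" header into the Context.
  class Codec extends Propagation.EntryReader[HeaderReader]
      with Propagation.EntryWriter[HeaderWriter] {

    override def read(medium: HeaderReader, context: Context): Context =
      medium.read("X-Request-Origin")
        .map(origin => context.withEntry(RequestOrigin, origin))
        .getOrElse(context)

    override def write(context: Context, medium: HeaderWriter): Unit = {
      val origin = context.get(RequestOrigin)
      if (origin != "unknown") medium.write("X-Request-Origin", origin)
    }
  }
}
```

Such a codec is then registered through configuration under kamon.propagation.http.default.entries; the exact keys are documented in kamon-core's reference.conf.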
Ivan Topolnjak
@ivantopo
regarding the OperationName generator: there were a couple of changes there. First, the Akka HTTP instrumentation now also targets the path matchers and is able to automatically create a decent (variables-free) name for operations right out of the box. If that is still not good enough, you can create an HttpOperationNameGenerator; it was moved to the kamon-instrumentation-common module because it is now base functionality that we use for all supported HTTP servers
although, now that I look into it, the Akka HTTP instrumentation is only using it on the client side :'(
have you tried using the new instrumentation to see if the default names are good enough, or will you definitely need the ONG?
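For completeness, a custom generator under the new API is a small class. A sketch assuming the kamon-instrumentation-common trait (the naming logic is illustrative):

```scala
import kamon.instrumentation.http.{HttpMessage, HttpOperationNameGenerator}

// Names every operation after the first non-empty path segment.
class FirstSegmentNameGenerator extends HttpOperationNameGenerator {
  override def name(request: HttpMessage.Request): Option[String] =
    request.path.split("/").find(_.nonEmpty)
}
```

It is wired in via configuration; the exact setting name is in the relevant module's reference.conf.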
@garraspin I heard earlier today that @mladens got it working, hopefully he will share that soon :)
Peter Nerg
@pnerg

@ivantopo thx, will peek at the example. The reason for creating a custom codec is to generate a special header in case it is missing. This is part of a utility library shared among many applications, so the reason for the codec was to hide it from the apps.
it gave me a single place to mutate the context; otherwise one must mutate it and set the new mutated context as the current one
So yeah, in practice my need is probably very close to the span propagation.

I'll take a look at the default names produced by the built-in name generator, but there was (at the time) a reason why we implemented our own, so I guess we'll still need the possibility to customise the name

Ivan Topolnjak
@ivantopo
ok!
btw
depending on how complicated the rules are, you might be able to get the name you want just by using configuration
Thomas Ploch
@tPl0ch
@ivantopo Might it be related to us using a custom Avro Serializer when sending messages? Would we need to add the traces and spans in our serializers?
Peter Nerg
@pnerg

I'll peek at that too, though I fear config might be complicated, as it's about slicing the URL according to a JSON API spec

/api/<resource>/<id> -> <resource>
/api/<resource>/<id>/<relationship> -> <resource>-<relationship>
/api/<servicefunction> -> <servicefunction>
The code in Scala is very simple, so I would love to just migrate it as opposed to trying to configure it with complicated rules, if that would even be possible
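Those three rules do fit in a few lines of plain Scala; a sketch (not Peter's actual code, names invented):

```scala
object JsonApiOperationNames {
  // /api/<resource>/<id>                -> <resource>
  // /api/<resource>/<id>/<relationship> -> <resource>-<relationship>
  // /api/<servicefunction>              -> <servicefunction>
  def nameFor(path: String): Option[String] =
    path.split("/").filter(_.nonEmpty).toList match {
      case "api" :: fn :: Nil                   => Some(fn)
      case "api" :: resource :: _ :: Nil        => Some(resource)
      case "api" :: resource :: _ :: rel :: Nil => Some(resource + "-" + rel)
      case _                                    => None
    }
}
```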

Ivan Topolnjak
@ivantopo
ok ok
SahilAggarwalG
@SahilAggarwalG
@dpsoft -Dkanela.modules.executor-service-capture-on-submit.enabled=true: the above flag is marked experimental in the jar's reference.conf. Is it OK to use it in production?
Diego Parra
@dpsoft
@SahilAggarwalG we currently have some services that have been running in production for several months with the flag activated, without issues. We need to adjust some things, but I think it is safe in the majority of use cases.