Samuel
@garraspin
Hi all! Anyone busy with upgrading kamon-http4s to kamon2?
Ivan Topolnjak
@ivantopo
@tPl0ch for now yeah, that's the only way to go! although, PRs would be welcome! also, as far as I remember we didn't instrument it at the time because nobody was asking for it; I don't remember there being any technical impediments to making it happen
Thomas Ploch
@tPl0ch
@ivantopo If you can point me to the relevant parts of the code I could try making a PR happen :)
Ivan Topolnjak
@ivantopo
awesome! the interesting stuff for context propagation on remote is in two places:
I would have thought that the AkkaPduProtobufCodec instrumentation would have been enough, unless the Artery implementation uses a different way of packaging the messages before being sent
the first thing would be figuring out what's the difference between the sequence of events for sending a message with the netty transport vs artery and then probably just reuse some of the logic we already have there
it might be that the code path either goes in a completely different direction, or just detours in some parts and the Context is being lost
it will be hard, but it will be fun!
@pnerg I'm sorry for the docs, Peter.. still haven't gotten around to updating them :(
updated a lot, but not all
Ivan Topolnjak
@ivantopo
there is the new concept of "propagation" in Kamon, and the old Codecs.ForEntry is what now would be a Propagation.EntryReader[Medium] or Propagation.EntryWriter[Medium] and the mediums are either HTTP header reader/writer or ByteStream reader/writer
probably a good starting point is to take a look at the Span propagation: https://github.com/kamon-io/Kamon/blob/master/kamon-core/src/main/scala/kamon/trace/SpanPropagation.scala
there are 3 implementations there:
  • B3 and B3Single which use HTTP headers
  • Colfer which uses ByteStreams
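(For a concrete shape of those pieces, here is a minimal sketch of a custom entry reader/writer on the HTTP medium, assuming the Kamon 2.x Propagation API; the UserIdPropagation name, Context key and x-user-id header are made up for illustration:)

```scala
import kamon.context.{Context, HttpPropagation, Propagation}

// Minimal sketch, assuming the Kamon 2.x Propagation API. The object name,
// Context key and "x-user-id" header are hypothetical.
object UserIdPropagation
    extends Propagation.EntryReader[HttpPropagation.HeaderReader]
    with Propagation.EntryWriter[HttpPropagation.HeaderWriter] {

  val UserIdKey: Context.Key[Option[String]] =
    Context.key[Option[String]]("user-id", None)

  // Reads the incoming header (if present) into a Context entry.
  override def read(medium: HttpPropagation.HeaderReader, context: Context): Context =
    medium.read("x-user-id") match {
      case found @ Some(_) => context.withEntry(UserIdKey, found)
      case None            => context
    }

  // Writes the Context entry (if present) back out as a header.
  override def write(context: Context, medium: HttpPropagation.HeaderWriter): Unit =
    context.get(UserIdKey).foreach(value => medium.write("x-user-id", value))
}
```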
also, just to mention it, we found several cases in which people were creating their own codecs to propagate a few key/value pairs and the whole point of the new Context tags is to enable that use case without users having to code a custom codec, maybe just migrating to Context tags would work for you!
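(As a rough sketch of that migration path, assuming the Kamon 2.x tag API; the user-id tag name is made up:)

```scala
import kamon.Kamon
import kamon.tag.Lookups.option

// Rough sketch, assuming the Kamon 2.x Context tags API; the "user-id" tag is
// hypothetical. Tags travel with the Context, so no custom codec is needed to
// read them further down the call chain.
def handle(): Unit =
  Kamon.runWithContextTag("user-id", "1234") {
    val userId: Option[String] = Kamon.currentContext().tags.get(option("user-id"))
    println(s"current user: $userId")
  }
```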
Ivan Topolnjak
@ivantopo
regarding the OperationName generator.. there were a couple of changes there: first, the Akka HTTP instrumentation now also targets the path matchers and can automatically create a decent (variables-free) name for operations right out of the box.. if that is still not good enough, you can create an HttpOperationNameGenerator, which was moved to the kamon-instrumentation-common module because it is now base functionality that we use for all supported HTTP servers
although, now that I look into it, the Akka HTTP instrumentation is only using it on the client side :'(
have you tried using the new instrumentation to see if the default names are good enough, or will you definitely need the ONG?
@garraspin I heard earlier today that @mladens got it working, hopefully he will share that soon :)
Peter Nerg
@pnerg

@ivantopo thx, will peek at the example. The reason for creating a custom codec is to generate a special header in case it is missing. This is part of a utility library shared among many applications, so the point of the codec was to hide it from the apps.
this gave me a single place to mutate the context; otherwise one must mutate and then set the new mutated context as current
So yeah, in practice my need is probably very close to the span propagation.

I'll take a look at the default name produced by the built in name generator but there was (at the time) a reason why we implemented our own so I guess we'll still need the possibility to customise the name

Ivan Topolnjak
@ivantopo
ok!
btw
depending on how complicated the rules are, you might be able to get the name you want just by using configuration
Thomas Ploch
@tPl0ch
@ivantopo Might it be related to us using a custom Avro Serializer when sending messages? Would we need to add the traces and spans in our serializers?
Peter Nerg
@pnerg

I'll peek at that too, though I fear config might be complicated, as it's about slicing the URL according to a JSON API spec

/api/<resource>/<id> -> <resource>
/api/<resource>/<id>/<relationship> -> <resource>-<relationship>
/api/<servicefunction> -> <servicefunction>
The code in Scala is very simple, so I would love to just migrate it as opposed to trying to configure it with complicated rules, if that would even be possible
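(Those rules would roughly translate to a custom generator like the sketch below, assuming the HttpOperationNameGenerator trait from kamon-instrumentation-common 2.x; the class name is made up:)

```scala
import kamon.instrumentation.http.{HttpMessage, HttpOperationNameGenerator}

// Sketch of the slicing rules above, assuming the Kamon 2.x
// HttpOperationNameGenerator trait; the class name is hypothetical.
class JsonApiOperationNameGenerator extends HttpOperationNameGenerator {
  override def name(request: HttpMessage.Request): Option[String] =
    request.path.stripPrefix("/").split("/").toList match {
      case "api" :: resource :: _ :: relationship :: Nil => Some(s"$resource-$relationship")
      case "api" :: resource :: _ :: Nil                 => Some(resource)
      case "api" :: serviceFunction :: Nil               => Some(serviceFunction)
      case _                                             => None
    }
}
```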

Ivan Topolnjak
@ivantopo
ok ok
SahilAggarwalG
@SahilAggarwalG
@dpsoft regarding -Dkanela.modules.executor-service-capture-on-submit.enabled=true: the above flag is marked as experimental in the reference.conf of the jar. Is it OK to use it in production?
Diego Parra
@dpsoft
@SahilAggarwalG we currently have some services that have been running in production for several months with the flag activated, without issues. We still need to adjust some things, but I think it is safe in the majority of use cases.
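(For reference, the same setting can go in the configuration file instead of being passed as a system property; this is equivalent to the -D flag quoted above:)

```
kanela.modules.executor-service-capture-on-submit {
  enabled = true
}
```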
danischroeter
@danischroeter

> Is the akka remote Kamon module supporting Artery? I don't see traces being joined on other nodes.
> we didn't get around that just yet :/

holy c#$%*
I was looking through issues trying to figure out why 2.0 does not work anymore :( Very frustrating.
Would be nice to document what is not supposed to work...
@ivantopo How can this issue be tracked? @tPl0ch said he would do a PR; I can open an issue if that helps. Maybe I can also lend a hand, but the migration was already quite costly...
I guess I need to stop the migration to 2.0 for now...
Btw: Artery becomes the default in the upcoming Akka 2.6...

Samuel
@garraspin
@mladens when do you think you will have kamon-http4s ready?
Mladen Subotić
@mladens
expecting to make a PR sometime today/tomorrow
Samuel
@garraspin
I started a PR but I'm stuck fixing HttpMetricsSpec; it looks like there was a trait MetricInspection with a bunch of type classes that are gone, but I can't find a replacement
Mladen Subotić
@mladens
it moved to InstrumentInspection.Syntax
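(For illustration, a minimal sketch of what that migration tends to look like, assuming the kamon-testkit 2.x InstrumentInspection.Syntax mixin; the metric name is made up:)

```scala
import kamon.Kamon
import kamon.testkit.InstrumentInspection

// Sketch only, assuming kamon-testkit 2.x: mixing in InstrumentInspection.Syntax
// provides the .value / .distribution accessors that the old MetricInspection
// trait used to give to specs. The "http.requests" counter is hypothetical.
object InspectionExample extends InstrumentInspection.Syntax {
  def main(args: Array[String]): Unit = {
    Kamon.init()
    val requests = Kamon.counter("http.requests").withoutTags()
    requests.increment()
    println(requests.value()) // reads (and by default resets) the counter
  }
}
```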
Ivan Topolnjak
@ivantopo
@danischroeter @tPl0ch regarding Artery support: kamon-io/kamon-akka#58
please subscribe to that one
danischroeter
@danischroeter
:thumbsup:
Joe Martinez
@JoePercipientAI
@mladens I assume you meant kanela.debug-mode = true. I tried it set to both true and false, and in neither case do I see anything about Kamon in the logs, other than the banner.
Mladen Subotić
@mladens
yup, sorry, my bad: debug-mode = true. Also forgot to mention to drop the log level: kanela.log-level = "DEBUG"
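(Put together, that would be the following; note that kanela is its own root-level section, as the logged config further down shows:)

```
kanela {
  debug-mode = true
  log-level = "DEBUG"
}
```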
Joe Martinez
@JoePercipientAI
@mladens I just tried that, and it didn't make a difference. I still get the same output in the logs, and there are no DEBUG log entries at all.
Also can you confirm... Does the "Kanela" config section go under "Kamon", or is it its own root level?
Joe Martinez
@JoePercipientAI
Ok, that's what I did.
Mladen Subotić
@mladens
try logging the resulting conf, seems like it's getting lost in the merge
gmim
@gilshoshan17_twitter

hi. I'm getting a java.lang.NullPointerException when initializing Kamon 2.0. I'm using my own thread pool (extends ExecutorServiceFactory) and the issue is in the CaptureActorSystemNameOnExecutorConfigurator object. It looks like it can't support any custom-made thread pool. Is this the case?
```
[ERROR] [09/12/2019 18:07:13.430] [myApp1-actor-system-akka.actor.default-dispatcher-5] [akka.dispatch.Dispatcher] null
java.lang.NullPointerException
    at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
    at java.util.regex.Matcher.reset(Matcher.java:309)
    at java.util.regex.Matcher.<init>(Matcher.java:229)
    at java.util.regex.Pattern.matcher(Pattern.java:1093)
    at kamon.util.Filter$Glob.accept(Filter.scala:197)
    at kamon.util.Filter$IncludeExclude.kamon$util$Filter$IncludeExclude$$anonfun$2(Filter.scala:140)
    at kamon.util.Filter$IncludeExclude$lambda$includes$1.apply(Filter.scala:140)
    at kamon.util.Filter$IncludeExclude$lambda$includes$1.apply(Filter.scala:140)
    at scala.collection.LinearSeqOptimized$class.exists(LinearSeqOptimized.scala:93)
    at scala.collection.immutable.List.exists(List.scala:84)
    at kamon.util.Filter$IncludeExclude.includes(Filter.scala:140)
    at kamon.util.Filter$IncludeExclude.accept(Filter.scala:137)
    at kamon.instrumentation.akka.instrumentations.InstrumentNewExecutorServiceOnAkka25$.around(DispatcherInstrumentation.scala:155)
    at akka.dispatch.CachedThreadPoolExecutorServiceFactory.createExecutorService(CachedThreadPoolExecutorConfigurator.scala)
    at akka.dispatch.Dispatcher$LazyExecutorServiceDelegate.executor$lzycompute(Dispatcher.scala:43)
    at akka.dispatch.Dispatcher$LazyExecutorServiceDelegate.executor(Dispatcher.scala:43)
    at akka.dispatch.ExecutorServiceDelegate$class.execute(ThreadPoolBuilder.scala:217)
    at akka.dispatch.Dispatcher$LazyExecutorServiceDelegate.execute(Dispatcher.scala:42)
    at akka.dispatch.Dispatcher.executeTask(Dispatcher.scala:80)
    at akka.dispatch.MessageDispatcher.unbatchedExecute(AbstractDispatcher.scala:154)
    at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:122)
    at akka.dispatch.MessageDispatcher.execute(AbstractDispatcher.scala:88)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
    at scala.concurrent.Promise$class.complete(Promise.scala:55)
    at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
    at akka.http.impl.engine.client.PoolInterfaceActor$$anonfun$receive$1.applyOrElse(PoolInterfaceActor.scala:129)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:539)
    at akka.http.impl.engine.client.PoolInterfaceActor.akka$stream$actor$ActorSubscriber$$super$aroundReceive(PoolInterfaceActor.scala:68)
    at akka.stream.actor.ActorSubscriber$class.aroundReceive(ActorSubscriber.scala:191)
    at akka.http.impl.engine.client.PoolInterfaceActor.akka$stream$actor$ActorPublisher$$super$aroundReceive(PoolInterfaceActor.scala:68)
    at akka.stream.actor.ActorPublisher$class.aroundReceive(ActorPublisher.scala:350)
    at akka.http.impl.engine.client.PoolInterfaceActor.aroundReceive(PoolInterfaceActor.scala:68)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
    at akka.actor.ActorCell.invoke(ActorCell.scala:581)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
    at akka.dispatch.Mailbox.run(Mailbox.scala:229)
    at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:653)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:49)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
```

Joe Martinez
@JoePercipientAI
@mladens Here is the Kanela section of the logged config:
  "kanela": {
    "circuit-breaker": {
      "enabled": false,
      "free-memory-threshold": 20,
      "gc-process-cpu-threshold": 10
    },
    "class-dumper": {
      "create-jar": true,
      "dir": "\/home\/joe\/kanela-agent\/dump",
      "enabled": false,
      "jar-name": "instrumented-classes"
    },
    "class-replacer": {
      "replace": [
        "kamon.status.Status$Instrumentation$=>kanela.agent.util.KanelaInformationProvider"
      ]
    },
    "debug-mode": true,
    "gc-listener": {
      "log-after-gc-run": false
    },
    "instrumentation-registry": {
      "enabled": true
    },
    "log-level": "DEBUG",
    "modules": {
      "executor-service": {
        "within": [
          "^slick.*"
        ]
      },
      "jdbc": {
        "description": "Provides instrumentation for JDBC statements, Slick AsyncExecutor and the Hikari connection pool",
        "instrumentations": [
          "kamon.instrumentation.jdbc.StatementInstrumentation",
          "kamon.instrumentation.jdbc.HikariInstrumentation"
        ],
        "name": "JDBC Instrumentation",
        "within": [
          "^org.h2..*",
          "^org.sqlite..*",
          "^oracle.jdbc..*",
          "^com.amazon.redshift.jdbc42..*",
          "^com.amazon.redshift.core.jdbc42..*",
          "^com.mysql.jdbc..*",
          "^com.mysql.cj.jdbc..*",
          "^org.h2.Driver",
          "^org.h2.jdbc..*",
          "^net.sf.log4jdbc..*",
          "^org.mariadb.jdbc..*",
          "^org.postgresql.jdbc..*",
          "^com.microsoft.sqlserver.jdbc..*",
          "^com.zaxxer.hikari.pool.PoolBase",
          "^com.zaxxer.hikari.pool.PoolEntry",
          "^com.zaxxer.hikari.pool.HikariPool",
          "^com.zaxxer.hikari.pool.ProxyConnection",
          "^com.zaxxer.hikari.pool.HikariProxyStatement",
          "^com.zaxxer.hikari.pool.HikariProxyPreparedStatement",
          "^com.zaxxer.hikari.pool.HikariProxyCallableStatement"
        ]
      }
    },
    "show-banner": true
  }
Mladen Subotić
@mladens
Are you seeing any logfiles being generated by kanela? There should be some if debug mode is enabled.
Joe Martinez
@JoePercipientAI
I'm just looking at the console. Where would I find log files?
Mladen Subotić
@mladens
probably jvm work dir
Joe Martinez
@JoePercipientAI
@mladens Sorry, do you know how I can find out where that dir is on my system? Or what the log files would be named?