Ivan Topolnjak
:joy: thanks!
Arsene Tochemey Gandote
@ihostage quick one: is your PR able to help instrument a Lagom-based application? We had some issues when doing instrumentation in Lagom using Kamon.
Sergey Morgunov
@Tochemey It's just the first step: an implementation of the Lagom Circuit Breakers Metrics SPI.
Still, it doesn't fix the problem with using Kamon in Lagom dev mode. I will try to fix that in the future when I find time for it :smile:
Arsene Tochemey Gandote
@ihostage :smiley:
Ivan Topolnjak
the issue of lagom in dev mode is way more complicated because in that case there are (potentially) several different services running on the same JVM
together with SBT and all its stuff
currently Kanela applies all instrumentation on the entire JVM and yeah, we have some filters to prevent touching some classloaders
but that is not enough to account for the fact that some of the services might have Kamon on their classpaths and some others not
plus, Kamon itself doesn't allow for several services on the same JVM
in fact, there has never been a need for it, only for Lagom in dev mode
Sergey Morgunov
Yep :smile:
Hi @ivantopo, for the SQS queue I've opted to manually create a span and pass it along inside the objects that I send downstream, doing something like the following:
private val deserializeAndProcessStage = Flow[SpanContext[SqsQueueMessage]]
    .throttle(throttleTotal, 1.second, throttleTotal, ThrottleMode.shaping)
    .mapAsync(parallelism) {
      case SpanContext(span, SqsQueueMessage(q, message)) =>
        Kamon.runWithSpan(span) {
          val parsedMsg = ArchonPayload.parse(message)
          logger.debugWithData(s"Message parsed", Map("message" -> parsedMsg.toString))
          parsedMsg match {
            case Right(mdsolUri) =>
              // (call producing a Future elided from the original paste)
                .recover {
                  case NonFatal(th) =>
                    // (error handling elided from the original paste)
                }
                .map(res => SpanContext(span, SqsQueueMessageResult(q, message, res)))

            case Left(parsingError) =>
              logger.errorWithData(s"An Error has occurred while ", parsingError.th, Map.empty)
              Future.successful(SpanContext(span, SqsQueueMessageResult(q, message, Right(parsingError))))
          }
        }
    }
and as for the span that's been waiting for 17 days: it happens on my local machine, with a fresh start of the app connecting to a locally running OpenZipkin Docker container
Zvi Mints

Hey all, I have a problem with Kamon and Datadog.
This is my configuration:

# Monitoring
kamon {
  datadog {
    flush-interval = 10 seconds
    hostname = datadog
    port = 8125
    application-name = ${?app.name}
    time-units = ms
    memory-units = mb

    subscriptions {
      system-metric = [ ]
      http-server = [ ]
    }
  }

  metric {
    tick-interval = 10 seconds
    track-unmatched-entities = no
    filters.trace.includes = []
  }
}
I connected to some Amazon queues and read messages from there; for each message that I insert into the database I do:

Kamon.metrics.counter(s"message_pulled", Map("queueName" -> queueName)).increment()

BUT I can see that I have 100 messages in the database at 12:20, for example, yet in the Datadog logs I only have 3. How is that possible?

Paul Bernet
Can I expose data via JMX with the new "kamon-bundle" "2.1.0"?
Corey Caplan
Hey all, is there a way to fix the "parent is missing" warning in Play Framework operations? It pretty much ruins tracing operations because you only get a fraction of the view into the HTTP request.
Oto Brglez

Hey guys! Long time no see,... :)

I'm trying to use Kamon (kamon-bundle, kamon-akka, kamon-prometheus, kamon-status-page and kamon-apm-reporter), version "2.1.0", on top of Scala 2.13 with Java 11 (openjdk version "11.0.2" 2019-01-15, OpenJDK Runtime Environment 18.9 (build 11.0.2+9), OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)). When I add the Kanela agent (1.0.5), with very little bootup code in my Scala "App" class, I get the following error. Any ideas? Is this "my problem" or a Kamon / Kamon bundle / APM issue?

OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended

 _  __                _        ______
| |/ /               | |       \ \ \ \
| ' / __ _ _ __   ___| | __ _   \ \ \ \
|  < / _` | '_ \ / _ \ |/ _` |   ) ) ) )
| . \ (_| | | | |  __/ | (_| |  / / / /
|_|\_\__,_|_| |_|\___|_|\__,_| /_/_/_/

Running with Kanela, the Kamon Instrumentation Agent :: (v1.0.5)

12:24:48.341 [main] INFO  kamon.status.page.StatusPage - Status page started on
12:24:48.734 [main] INFO  kamon.apm - Starting the Kamon APM Reporter. Your service will be displayed as [FeederApp] at https://apm.kamon.io/
12:24:49.544 [main] INFO  kamon.prometheus.PrometheusReporter - Started the embedded HTTP server on
12:24:49.841 [main] INFO  i.c.k.s.KafkaAvroSerializerConfig - KafkaAvroSerializerConfig values: 
  bearer.auth.token = [hidden]
  proxy.port = -1
  schema.reflection = false
  auto.register.schemas = true
  max.schemas.per.subject = 1000
  basic.auth.credentials.source = URL
  value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
  schema.registry.url = [http://localhost:8081]
  basic.auth.user.info = [hidden]
  proxy.host = 
  schema.registry.basic.auth.user.info = [hidden]
  bearer.auth.credentials.source = STATIC_TOKEN
  key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy

java.lang.ClassCastException: class scala.util.Success cannot be cast to class kamon.instrumentation.context.HasContext (scala.util.Success and kamon.instrumentation.context.HasContext are in unnamed module of loader 'app')
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.$anonfun$exit$1(FutureChainingInstrumentation.scala:134)
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.$anonfun$exit$1$adapted(FutureChainingInstrumentation.scala:134)
  at scala.Option.foreach(Option.scala:437)
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.exit(FutureChainingInstrumentation.scala:134)
  at scala.concurrent.Future$.<clinit>(Future.scala:515)
  at kamon.instrumentation.system.process.ProcessMetricsCollector$MetricsCollectionTask.schedule(ProcessMetricsCollector.scala:61)
  at kamon.instrumentation.system.process.ProcessMetricsCollector$$anon$1.run(ProcessMetricsCollector.scala:40)
  at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
  at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
  at java.base/java.lang.Thread.run(Thread.java:832)
[ERROR] [05/13/2020 12:24:50.987] [FeederApp-akka.actor.internal-dispatcher-6] [akka://FeederApp/system/IO-TCP/selectors] null
akka.actor.ActorInitializationException: akka://FeederApp/system/IO-TCP/selectors/$a: exception during creation
Hm... might be that I've put Kamon.init() after ActorSystem() 🤔 If I swap that around I get this warning...
Oto Brglez
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by kamon.instrumentation.executor.ExecutorInstrumentation$ (file:/Users/otobrglez/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/io/kamon/kamon-bundle_2.13/2.1.0/kamon-bundle_2.13-2.1.0.jar) to field java.util.concurrent.Executors$DelegatedExecutorService.e
WARNING: Please consider reporting this to the maintainers of kamon.instrumentation.executor.ExecutorInstrumentation$
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Oto Brglez
What does this WARNING mean? Should I be worried? :)
Barnabás Oláh

Hello all,
I'm using kamon-http4s on the client side, but I don't see metrics reported for any of the endpoints, even though I do get all the other metrics.
Here is how I'm using it:

import kamon.http4s.middleware.client.KamonSupport
BlazeClientBuilder[F](ec).resource.map(KamonSupport(_)).use { client =>
  // application code is here
}
When I'm debugging I can see that the spans are created, but they carry this do-not-sample value. Is that the problem? And if so, how can I change it so that they are sampled? Sorry for the silly question.
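The do-not-sample decision usually comes from the tracer's configured sampler rather than from the http4s middleware itself. A minimal config sketch, assuming the standard Kamon 2.x `kamon.trace` settings, that forces every trace to be sampled while debugging:

```hocon
# Force every trace to be sampled while debugging.
# Kamon 2.x defaults to the "adaptive" sampler; "always", "never" and
# "random" are the other built-in options.
kamon.trace.sampler = always
```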

Oto Brglez
Guys. I have another question. This is a very strange one.
Error:(60, 26) value increment is not a member of kamon.metric.Metric.Counter
What am I missing here...? I have import kamon.Kamon at the top of my app, and I have kamon-bundle, kamon-core, etc., and Kamon.init works...
Hi all, which version is recommended for use in production? Is v2.1.0 stable enough? Which versions do you use?
Diego Parra
Hi @/all, we want to share two awesome grafana + prometheus dashboards created by @cspinetta , enjoy!
Jakub Kozłowski
when would I use an entry vs a tag?

I would like to disable the following headers on my requests:

X-B3-Sampled: 1
X-B3-SpanId: c45fa42fbb6e01db
X-B3-TraceId: 19f91af53ab25fb2

I have the following settings in my conf file:

kamon.instrumentation.akka.http.client.propagation.enabled = no
kamon.instrumentation.akka.http.server.propagation.enabled = no

But this doesn't have any effect.
How do I disable the B3 headers?

I am using kamon-bundle 2.1.0


John Watson
Is the source available for these demo services somewhere? https://apm.kamon.io/demo/demo/services?from=1589819640&to=1589825040&minutes=90
Ivan Topolnjak
good evening folks :wave:
Ivan Topolnjak
@jkwatson the actual version that is running up there is in a private repo, but last year I extracted them for a workshop: https://github.com/ivantopo/beescala-workshop/tree/master/automatic
@usagiy those settings look right to me.. have you tried maybe using kamon.instrumentation.http-client.default.propagation.enabled?
Ivan Topolnjak
@kubukoz I would say, if a String/Long/Boolean is enough for your use case then go with tags. Kamon automatically knows how to propagate tags across HTTP and binary channels. It is all handled out of the box. Entries allow you to put any type you want in the context, but out of the box it will only be propagated in the same process. You will need to create an entry reader/writer if you want any custom entry to be propagated through HTTP and binary channels.
I wish we could somehow unify all of that.. ultimately they are just key/value pairs
and most of the time people just want to use the tags.. we only bother with entries for the Span
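The distinction described above can be sketched with a stdlib-only mock. This is NOT the real Kamon API (SketchContext, EntryWriter, and toHeaders are invented names for illustration): tags are plain String pairs that a generic HTTP propagator can always write as headers, while entries hold arbitrary values and only cross the wire when an explicit writer is registered for them.

```scala
// Hypothetical stdlib-only mock of the tag-vs-entry distinction; not the real Kamon API.
object ContextSketch {
  // Tags: plain String key/value pairs, always serializable as headers.
  // Entries: values of any type, propagated only if a writer is registered.
  final case class SketchContext(tags: Map[String, String], entries: Map[String, Any])

  // An entry writer turns one entry value into a header value.
  type EntryWriter = Any => String

  def toHeaders(ctx: SketchContext, writers: Map[String, EntryWriter]): Map[String, String] = {
    val tagHeaders = ctx.tags.map { case (k, v) => s"context-tag-$k" -> v }
    val entryHeaders = ctx.entries.collect {
      case (k, v) if writers.contains(k) => s"context-entry-$k" -> writers(k)(v)
    }
    tagHeaders ++ entryHeaders
  }

  def main(args: Array[String]): Unit = {
    val ctx = SketchContext(
      tags = Map("tenant" -> "acme"),
      entries = Map("request-id" -> 42) // non-String value, no writer registered yet
    )

    // Tags propagate out of the box; the entry silently stays in-process.
    println(toHeaders(ctx, writers = Map.empty))

    // After registering a writer for "request-id", the entry propagates too.
    println(toHeaders(ctx, writers = Map("request-id" -> ((v: Any) => v.toString))))
  }
}
```

In the real API the same asymmetry shows up as having to implement a custom entry reader/writer before an entry survives an HTTP or binary hop.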
@moriyasror we are using 2.1.0 in production, I would recommend you do the same! We usually test new versions in our staging servers before we publish
Ivan Topolnjak
we don't use all the available instrumentation (for example, we don't use Mongo) but usually it works fine
^^ I know that "usually" doesn't sound very reassuring but I can say we didn't have any problems so far :)
Jakub Kozłowski
okay @ivantopo, I see. Thanks!
Srepfler Srdan
oh, Kamon on Lagom would be super cool, subscribing to kamon-io/Kamon#760
Sergey Morgunov
@schrepfler :+1: But we need to wait until @ivantopo brings this module into the Kamon family :joy:
Srepfler Srdan
well, it's obligatory now :D

@ivantopo unfortunately it doesn't help
this is my configuration

kamon.instrumentation.akka {
  ask-pattern-timeout-warning = heavyweight

  http {
    server {
      propagation {
        enabled = no
        channel = default
      }
      metrics {
        enabled = yes
      }
      tracing {
        enabled = yes
        span-metrics = on
        response-headers {
          trace-id = "X-Correlation-ID"
        }
      }
    }

    client {
      propagation {
        enabled = no
      }
      metrics {
        enabled = yes
      }
      tracing {
        enabled = yes
      }
    }
  }
}


kamon.instrumentation.http-client.default.propagation.enabled = no

but I still have headers

X-B3-Sampled: 1
X-B3-SpanId: 88d1ac6d6ac48d84
X-B3-TraceId: 72491d0bf36c6cbd
Thanks @ivantopo, it works now. I had duplicated dependencies; I don't know how this is connected.
Ivan Topolnjak
@usagiy what do you mean by duplicate dependencies? maybe having both the bundle and the akka-http dependency?
Can Kamon run with OpenJDK 11?
And can it run with JDK 8?
@ivantopo yes, I did it for experiment

@ivantopo just one comment/suggestion:
for HTTP server tracing you have covered

    .advise(method("bindAndHandle"), classOf[HttpExtBindAndHandleAdvice])

the use case where we used this API:

    def bindAndHandle(
        handler:   Flow[HttpRequest, HttpResponse, Any],
        interface: String, port: Int = DefaultPortForProtocol,
        connectionContext: ConnectionContext = defaultServerHttpContext,
        settings:          ServerSettings    = ServerSettings(system),
        log:               LoggingAdapter    = system.log): Future[ServerBinding]

but in some other places we used

    IncomingConnection.handleWith[Mat](handler: Flow[HttpRequest, HttpResponse, Mat])(implicit fm: Materializer)

handler: Flow[HttpRequest, HttpResponse, Any] was the same in both APIs, so it was trivial to change to bindAndHandle, but it was confusing that one case worked and the other didn't. It seems it would be just as easy for you to cover IncomingConnection.handleWith as well, but maybe I'm wrong.