
Hi guys,
I followed the basic setup for Play Framework. Everything seems to work except that I'm not seeing any traces at all:
(two screenshots attached showing an empty traces view)

I'm using:

// project/plugins.sbt
addSbtPlugin("io.kamon" % "sbt-kanela-runner-play-2.8" % "2.0.6")
// build.sbt dependencies:
"io.kamon" %% "kamon-bundle" % "2.1.0",
"io.kamon" %% "kamon-apm-reporter" % "2.1.0"
// application.conf
kamon {
  environment.service = "myService"
  apm.api-key = "xxxxxxxxxxxxxxxxx"
}

Any idea what could be the reason?
I made a few hundred requests to make sure it's not due to sampling.

1 reply

Some more info on the message above:
In the status page http://localhost:5266/#/ I see:
(two screenshots of the status page attached)

so it looks to me like traces are being sent. Why can't I see them in Kamon APM?

6 replies
Cesar Alvernaz
Hi, I just noticed I'm missing the JVM metrics, even though I have "kamon-system-metrics:2.1.9" and kamon.modules { jvm-metrics.enabled = yes } set
8 replies
what could I possibly be missing?
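For reference, the inline snippet above written out as a full config block. This is a sketch assuming the kamon-system-metrics 2.x module keys; the status page lists the exact module names present in your build.

```hocon
# application.conf (sketch; key names assumed from kamon-system-metrics 2.x)
kamon.modules {
  jvm-metrics.enabled  = yes
  host-metrics.enabled = yes
}
```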
Wolfgang Bauer
Hey! Has anyone tried running Kamon on arm architecture?
Unfortunately the sigar-loader does not include a .so file for arm64/aarch64, and Kamon (1.x) fails with an NPE
4 replies
Cédric Gourlay
Hey there, I opened a discussion https://github.com/kamon-io/Kamon/discussions/924 about using AWS X-Ray for traces, if anyone has hints or ideas :D
I am not an expert on the different formats of Zipkin/Jaeger versus the X-Ray format
Dmitriy Paramoshkin
Hi, I'm trying to set up monitoring for our Akka cluster. One node (UI) runs in Tomcat; the other nodes (compute) are plain Java applications. I run them all with the Kanela javaagent.
I see message "Running with Kanela, the Kamon Instrumentation Agent" for both nodes.
The issue is that two compute nodes can form a cluster, but a UI node and a compute node can't. The UI node shows:
akka.remote.artery.Deserializer:90 - Failed to deserialize message from [unknown] with serializer id [17] and manifest [d]. akka.protobufv3.internal.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
Akka is 2.6.10, Kamon 2.1.9, Kanela agent 1.0.7
Am I missing something obvious?
7 replies
Ooh Croatia. Nice :)
Ben Spencer
hi, I've added Kamon to an http4s app and I'm trying to add trace IDs to my logback logs
it works fine for logging within my service code, but not for http4s's own access logs, even if I ensure that I add Kamon support after the logging middleware
as I understand it, middleware layers in http4s are basically just wrappers around the service, so I don't understand how this could fail to work
3 replies
I've verified that the kamon context is preserved across IO chains within the service, even when they cross thread boundaries
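Since middleware ordering comes up above, here is a toy sketch of why wrapping order matters, using plain functions. These are not the real http4s types (HttpRoutes, Kleisli); only the composition idea is illustrated.

```scala
object MiddlewareSketch {
  // Toy stand-ins: a "service" is a plain function, and middleware wraps it.
  type Service    = String => String
  type Middleware = Service => Service

  val service: Service = req => s"handled:$req"

  // Each layer tags the response so the nesting order is visible.
  val logging: Middleware = inner => req => inner(req) + "|logged"  // access-log layer
  val tracing: Middleware = inner => req => inner(req) + "|traced"  // would open a span around inner

  // tracing(logging(service)) makes tracing the outermost layer, so the
  // logging middleware runs inside the span.
  val app: Service = tracing(logging(service))
}
```

Running `MiddlewareSketch.app("x")` yields `handled:x|logged|traced`, i.e. the logging layer executed inside the tracing layer. If the access logger still sees no trace IDs with that ordering, the context is probably being lost elsewhere, for example across the IO runtime's thread shifts.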
Hi there! I have a beginner question about a plain application. I followed the guide, put Kamon.init() in my main object, and added the dependency to build.sbt. Unfortunately, when I deploy the Docker image to the k8s cluster, I see that the instrumentation is not started: Instrumentation Disabled, Reporters 1 Started, Metrics 49 Metrics. Could you please give me a hint about what I'm missing?
4 replies
Is it normal for Kanela to take >10s to initialize? Is there any way to speed it up?
5 replies
Yaroslav Derman
Hello. Maybe someone can help me: locally all the tests pass, but in CI they fail
Yaroslav Derman
I'm talking about kamon-io/Kamon#926, and I see that the master build failed too
I apologize if this has been asked before.
We use third-party libraries that use Dropwizard; our code, however, uses Kamon. What are the options for exposing the metrics collected by those Dropwizard-based libraries? Is there a way to bridge the metrics collected by Dropwizard into Kamon, so that the Kamon metrics reporting infrastructure can be leveraged?
2 replies
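One common approach is a small polling bridge that copies values out of the Dropwizard registry into Kamon instruments on a schedule. Below is only the shape of that idea: MetricSource and MetricSink are hypothetical stand-ins for com.codahale.metrics.MetricRegistry and Kamon's metric builders, whose real APIs differ.

```scala
object DropwizardBridgeSketch {
  // Hypothetical stand-ins: MetricSource plays the role of a Dropwizard
  // MetricRegistry, MetricSink the role of Kamon gauges. Real APIs differ.
  trait MetricSource { def gauges: Map[String, Double] }
  trait MetricSink   { def update(name: String, value: Double): Unit }

  final class Bridge(source: MetricSource, sink: MetricSink) {
    // Invoke from a scheduled task, e.g. once per reporting interval.
    def syncOnce(): Unit =
      source.gauges.foreach { case (name, value) => sink.update(name, value) }
  }
}
```

The real work in such a bridge is mapping each Dropwizard metric type (counters, meters, histograms, timers) onto the matching Kamon counter/gauge/histogram instrument.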
Hi, I am having issues with the annotation instrumentation; on the status page the 'Annotation Instrumentation' module shows as off. My config is:
implementation 'io.kamon:kamon-bundle_2.13:2.1.10'
implementation 'io.kamon:kamon-zipkin_2.13:2.1.10'
implementation 'io.kamon:kamon-akka_2.13:2.1.10'
implementation 'io.kamon:kamon-akka-http_2.13:2.1.10'
implementation 'io.kamon:kamon-annotation_2.13:2.1.10'
and the Kanela config is modules.annotation.within = ["^com..*"]
2 replies
Hi there, I'm trying to get Akka metrics using Kamon. I am using the following versions for the bundle and Prometheus:
val kamon_bundle = "2.0.5"
val kamon_prometheus = "2.0.1"
with Akka version 2.5.23. We get metrics cleanly when running our executable with java -jar:
val kamonPrometheusReporter = PrometheusReporter()
Kamon.registerModule("prometheus-reporter", kamonPrometheusReporter)
Kamon.init()
However, the moment I bring up the app in Docker (using the Amazon Corretto JRE), I can't see the Akka metrics at my Prometheus server endpoint. I also tried starting it with the Kanela agent, version 1.0.4 (java -javaagent:kanela-agent-1.0.4.jar). Any pointers as to what could have gone wrong?
1 reply
Paolo Fabbro

Hi, I'm trying to instrument a microservice application using Scala Lagom 1.6, but after configuring the dashboard I can't see actor metrics.
The configuration that I used is:

val kamonBundle = "io.kamon" %% "kamon-bundle" % "2.1.0"
val kamonApmReporter = "io.kamon" %% "kamon-apm-reporter" % "2.1.0"

lazy val `dataobject-impl` = (project in file("dataobject-impl"))
  .enablePlugins(LagomScala, JavaAgent)
  .settings(
    libraryDependencies ++= Seq(
    ) ++ kamonDependencies,
    javaAgents += "org.aspectj" % "aspectjweaver" % "1.9.2",
    javaOptions in Universal += "-Dorg.aspectj.tracing.factory=default"
  )

Any advice?

5 replies

hi! I'm using the following Kamon versions and setup:

compile group: 'com.typesafe.akka', name: "akka-actor-typed_2.13", version: "2.6.10"
compile "io.kamon:kamon-bundle_2.13:2.1.4"
compile "io.kamon:kamon-prometheus_2.13:2.1.4"

The server itself runs properly.
But why am I not able to see any actor-related metrics?

9 replies
eyal farago
hi guys, using Akka 2.6.11 + kamon-bundle 2.1.10, I'm seeing repeated ClassCastExceptions involving RoutedActorCell and HasActorMonitor:
An error occurred while trying to apply an advisor: java.lang.ClassCastException: class akka.routing.RoutedActorCell cannot be cast to class kamon.instrumentation.akka.instrumentations.HasActorMonitor (akka.routing.RoutedActorCell and kamon.instrumentation.akka.instrumentations.HasActorMonitor are in unnamed module of loader 'app')
    at kamon.instrumentation.akka.instrumentations.HasActorMonitor$.actorMonitor(ActorInstrumentation.scala:95)
    at kamon.instrumentation.akka.instrumentations.SendMessageAdvice$.onEnter(ActorInstrumentation.scala:136)
    at akka.routing.RoutedActorCell.sendMessage(RoutedActorCell.scala:138)
    at akka.actor.Cell.sendMessage(ActorCell.scala:326)
    at akka.actor.Cell.sendMessage$(ActorCell.scala:325)
    at akka.actor.ActorCell.sendMessage(ActorCell.scala:410)
    at akka.actor.RepointableActorRef.$bang(RepointableActorRef.scala:178)
    at akka.io.SelectionHandler$SelectorBasedManager$$anonfun$workerForCommandHandler$1.applyOrElse(SelectionHandler.scala:118)
    at akka.actor.Actor.aroundReceive(Actor.scala:537)
    at akka.actor.Actor.aroundReceive$(Actor.scala:535)
    at akka.io.SelectionHandler$SelectorBasedManager.aroundReceive(SelectionHandler.scala:101)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:577)
    at akka.actor.ActorCell.invoke(ActorCell.scala:547)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
    at akka.dispatch.Mailbox.run(Mailbox.scala:231)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
is this a kamon bug?
eyal farago
it seems the instrumentation doesn't apply to RoutedActorCell, but since its sendMessage method delegates to super.sendMessage, it ends up invoking the advice code, which attempts the cast:
override def sendMessage(envelope: Envelope): Unit = {
    if (routerConfig.isManagementMessage(envelope.message))
      router.route(envelope.message, envelope.sender)
2 replies
Stela L.
Hey there, I am using Kamon 2.1.9 and I get some weird errors every 0.10 seconds: "Failed to get information to use statvfs. path: /root/.cache/gvfs, Error code: 13". It is producing metrics on the <>:9095 port, but I would like to fix this. I tried installing libfilesys-statvfs-perl (I am using Ubuntu 20), which didn't work. Has anyone seen and solved this?
1 reply
Zvi Mints

Hey! I'm using

 lazy val kamon = Seq(
    "io.kamon" %% "kamon-system-metrics" % "0.6.7",
    "io.kamon" %% "kamon-scala" % "0.6.7",
    "io.kamon" %% "kamon-play-2.6" % "0.6.8",
    "io.kamon" %% "kamon-akka-2.5" % "0.6.8",
    "io.kamon" %% "kamon-datadog" % "0.6.8"
  ).map(_.exclude("org.asynchttpclient", "async-http-client"))

And I'm getting the following error on application startup:

java.io.FileNotFoundException: /opt/docker/native/libsigar-amd64-linux.so (No such file or directory)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    at kamon.sigar.SigarProvisioner.provision(SigarProvisioner.java:172)
    at kamon.system.SystemMetricsExtension.<init>(SystemMetricsExtension.scala:52)
    at kamon.system.SystemMetrics$.createExtension(SystemMetricsExtension.scala:34)
    at kamon.system.SystemMetrics$.createExtension(SystemMetricsExtension.scala:32)
    at akka.actor.ActorSystemImpl.registerExtension(ActorSystem.scala:1006)
    at akka.actor.ExtensionId.apply(Extension.scala:79)
    at akka.actor.ExtensionId.apply$(Extension.scala:78)
    at kamon.system.SystemMetrics$.apply(SystemMetricsExtension.scala:32)
    at akka.actor.ExtensionId.get(Extension.scala:92)
    at akka.actor.ExtensionId.get$(Extension.scala:92)
    at kamon.system.SystemMetrics$.get(SystemMetricsExtension.scala:32)
    at kamon.ModuleLoaderExtension.$anonfun$new$3(ModuleLoader.scala:43)
    at scala.util.Success.$anonfun$map$1(Try.scala:255)
    at scala.util.Success.map(Try.scala:213)
    at kamon.ModuleLoaderExtension.$anonfun$new$2(ModuleLoader.scala:41)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kamon.ModuleLoaderExtension.<init>(ModuleLoader.scala:38)
    at kamon.ModuleLoader$.createExtension(ModuleLoader.scala:27)
    at kamon.ModuleLoader$.createExtension(ModuleLoader.scala:25)
    at akka.actor.ActorSystemImpl.registerExtension(ActorSystem.scala:1006)
    at kamon.Kamon$Instance._start$lzycompute(Kamon.scala:64)
    at kamon.Kamon$Instance._start(Kamon.scala:58)
    at kamon.Kamon$Instance.start(Kamon.scala:74)
    at kamon.Kamon$.start(Kamon.scala:41)
    at kamon.play.di.GuiceModule$KamonLoader.<init>(GuiceModule.scala:36)
    at kamon.play.di.GuiceModule$KamonLoader$$FastClassByGuice$$d6cb6187.newInstance(<generated>)

How can I disable it?

5 replies
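For the 0.6.x series, the module loader could be told not to auto-start a module from application.conf. A sketch from memory; verify the exact key against the 0.6 documentation before relying on it:

```hocon
# application.conf (Kamon 0.6.x sketch; key name recalled, not verified)
kamon.modules {
  kamon-system-metrics.auto-start = no
}
```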
Mark Dufresne
Hey folks, anyone know if kamon-datadog supports Unix domain sockets?
Mark Dufresne
also, has anyone ever run into the Datadog agent (in the same container as the app) refusing connections?
2021-02-07 01:35:26.771 ERROR 1 --- [dogSpanReporter] kamon.module.ModuleRegistry              : Reporter [DatadogSpanReporter] failed to process a spans tick.

java.net.ConnectException: Connection refused (Connection refused)
Nikhil Arora
May I ask whether Kamon can also monitor Apache HttpClient and OkHttpClient if I use it in Lagom?
1 reply
@subrotosanyal hi! Did you find any way to use Kamon in a Tomcat environment?
Channing Walton
Hi, I've just upgraded kamon from 2.1.10 to 2.1.11 and am getting java.lang.NoClassDefFoundError: akka/http/Version$
The dependencies I have are kamon-bundle and kamon-influxdb.
What could I be missing?
4 replies
I'd like to hook Kanela (1.0.7) into Tomcat (8); could you give me a detailed guide?
I don't know what to do with the information below.
1. Patch the Catalina Bootstrapper code and include Kamon.init() there.
2. Include the kamon/akka/scala/config libs in a special bootlib dir inside Tomcat.
3. Configure the CLASSPATH in Tomcat's catalina.sh to include the bootlib dir.
4. Remove the kamon/akka/scala/config libs from my webapp's WEB-INF/lib dir to prevent collisions.
5. Put an application.conf file in the bootlib dir for Kamon configuration.
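Steps 3 and 5 above can be sketched as a setenv.sh fragment. All paths and the agent jar name are placeholder assumptions, not a verified recipe:

```shell
# Hypothetical $CATALINA_BASE/bin/setenv.sh sketch (paths are placeholders)
BOOTLIB="${CATALINA_BASE:-/opt/tomcat}/bootlib"

# Step 3: put the bootlib dir (jars plus application.conf) on the classpath
CLASSPATH="${CLASSPATH:+$CLASSPATH:}$BOOTLIB/*:$BOOTLIB"

# Attach Kanela, and point the Typesafe config loader at the file from step 5
CATALINA_OPTS="${CATALINA_OPTS:-} -javaagent:$BOOTLIB/kanela-agent-1.0.7.jar"
CATALINA_OPTS="$CATALINA_OPTS -Dconfig.file=$BOOTLIB/application.conf"

export CLASSPATH CATALINA_OPTS
```

Tomcat sources setenv.sh from catalina.sh on startup, so this avoids editing catalina.sh itself.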
Giridhar Pathak
hey, to get Kamon set up, do I need both the Kanela agent and the aspectjweaver agent?
I am currently getting random data out of Kamon's Prometheus reporter, such as total actors running == 4881 while the application only spins up 2.
1 reply
this is a simple Java application with a main function that spins up an actor system and 2 behaviors (using Akka Typed 2.6.12).
the metrics look random.
currently using kamon-core, kamon-akka, and kamon-prometheus version 2.1.11, Scala version 2.13
Alexis Hernandez
since an upgrade, I started getting this error, any ideas:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
[Attach Listener] ERROR 2021-02-12 11:41:02 Logger : Unable to start Kanela Agent. Please remove -javaagent from your startup arguments and contact Kanela support.: java.lang.NoClassDefFoundError: Could not initialize class org.apache.logging.log4j.util.PropertiesUtil
    at org.apache.logging.log4j.status.StatusLogger.<clinit>(StatusLogger.java:78)
    at org.apache.logging.log4j.LogManager.<clinit>(LogManager.java:60)
Alexis Hernandez
I have reverted from 2.1.12 down to 2.1.8 (one version at a time); 2.1.8 used to work fine. I wonder if this could be an incompatibility with sbt 1.4.7 or Scala 2.12
5 replies
Alexis Hernandez
apparently, this occurs only in sbt; it works when packaging my Play app for production. Is there a way to disable Kamon when running from sbt? I saw a PR got merged that allows disabling Kamon
Ben Iofel
Is Kamon supposed to be instrumenting its own reporters? I see Zipkin traces of data being sent to Zipkin in an endless loop, even when my server is getting 0 requests. Can I turn that off?
2 replies
Alexis Hernandez
again, I've started getting this issue; it's getting really annoying:
2021-02-12 19:33:22,678 [WARN] from oshi.software.os.linux.LinuxOperatingSystem in Process Metrics - Failed to read process file: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java (deleted)
8 replies
Giridhar Pathak
hey! is it possible to use compile-time weaving for Kamon metrics instead of the agent?

Hi there,
having trouble with the PrometheusPushgatewayReporter. I've set up a Scala sbt project with the following versions:

val kamonApmReporter = "2.1.12"
val kamonBundle = "2.1.12"
val kamonPrometheus = "2.1.12"
val kanelaAgent = "1.0.7"

libraryDependencies ++= Seq(
   "io.kamon" %% "kamon-apm-reporter" % kamonPrometheus,
   "io.kamon" %% "kamon-bundle"       % kamonBundle,
   "io.kamon" %% "kamon-prometheus"   % kamonPrometheus
)

javaAgents += "io.kamon" % "kanela-agent" % kanelaAgent

Config looks like this:

kamon {
    environment {
        tags {
            app = "my-scala-job"
        }
    }
    modules {
        status-page {
            enabled = false
        }
        apm-reporter {
            enabled = false
        }
        host-metrics {
            enabled = false
        }
        prometheus-reporter {
            enabled = false
        }
        pushgateway-reporter {
            # activate pushgateway-reporter
            enabled = true
        }
    }
    prometheus {
        include-environment-tags = yes
        embedded-server.port = 4001

        # Settings relevant to the PrometheusPushgatewayReporter
        pushgateway {
            api-base-url = "http://localhost:9091/metrics"

            api-url = ${kamon.prometheus.pushgateway.api-base-url}"/job/my-scala-job"
        }
    }
}

kanela {
    show-banner = false
}
I have a local Docker container running prom/pushgateway:v1.4.0. It lists the job, but only with the default push_time_seconds and push_failure_time_seconds gauges; no other metrics.

If I do

echo "mymetric 99" | curl --data-binary @- http://localhost:9091/metrics/job/my-push-job

it is displayed in pushgateway's UI, so I don't expect the issue to be on pushgateway's side.

If I activate the "normal" PrometheusReporter, it shows the expected metrics (akka etc.).

Any ideas, where the issue could be?

Thx in advance.

2 replies
Hi, is there a plan to support OpenTelemetry (the tracing side of it)? Conceptually it seems like there isn't much difference from Zipkin/Jaeger, but reality is often more complex, so I want to understand what roadblocks there are. (I haven't looked deeply into OpenTelemetry yet.)
Ben Iofel
Hey everyone. I added the Zipkin reporter to our API proxy, which handles as much as 180 req/sec/instance. After about 5 hours, the CPU got stuck at 200% struggling to garbage collect, with the heap size not dropping, causing significantly increased request latency, until the commit was reverted. We were using the default sampler (adaptive). Does anybody have any thoughts on why this would happen?
4 replies