Alexis Hernandez
again, I started getting this issue, it's getting really annoying:
2021-02-12 19:33:22,678 [WARN] from oshi.software.os.linux.LinuxOperatingSystem in Process Metrics - Failed to read process file: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java (deleted)
8 replies
Giridhar Pathak
hey! is it possible to have compile-time weaving for kamon metrics instead of using the agent?

Hi there,
having trouble with the PrometheusPushgatewayReporter. I've set up a Scala sbt project with the following versions:

val kamonApmReporter = "2.1.12"
val kamonBundle = "2.1.12"
val kamonPrometheus = "2.1.12"
val kanelaAgent = "1.0.7"

libraryDependencies ++= Seq(
   "io.kamon" %% "kamon-apm-reporter" % kamonApmReporter,
   "io.kamon" %% "kamon-bundle"       % kamonBundle,
   "io.kamon" %% "kamon-prometheus"   % kamonPrometheus
)

javaAgents += "io.kamon" % "kanela-agent" % kanelaAgent

Config looks like this:

kamon {
    environment {
        tags {
            app = "my-scala-job"
        }
    }
    modules {
        status-page {
            enabled = false
        }
        apm-reporter {
            enabled = false
        }
        host-metrics {
            enabled = false
        }
        prometheus-reporter {
            enabled = false
        }
        pushgateway-reporter {
            # activate pushgateway-reporter
            enabled = true
        }
    }
    prometheus {
        include-environment-tags = yes
        embedded-server.port = 4001

        # Settings relevant to the PrometheusPushgatewayReporter
        pushgateway {
            api-base-url = "http://localhost:9091/metrics"

            api-url = ${kamon.prometheus.pushgateway.api-base-url}"/job/my-scala-job"
        }
    }
}

kanela {
    show-banner = false
}

I have a local Docker container running prom/pushgateway:v1.4.0. It lists the job, but only with the default push_time_seconds and push_failure_time_seconds gauges. No other metrics.

If I do

echo "mymetric 99" | curl --data-binary @- http://localhost:9091/metrics/job/my-push-job

it is displayed in Pushgateway's UI, so I don't expect the issue to be on Pushgateway's side.

If I activate the "normal" PrometheusReporter it shows the expected metrics, like akka etc.

Any ideas, where the issue could be?

Thx in advance.

2 replies
Hi, is there a plan to support OpenTelemetry (the tracing side of it)? Conceptually it seems like there isn't much difference from Zipkin/Jaeger, but reality is often more complex, so I want to understand what roadblocks there are. (I haven't looked deeply into OpenTelemetry yet.)
Ben Iofel
Hey everyone. I added the zipkin reporter to our API proxy which handles as high as 180 req / sec / instance. After about 5 hours, the CPU got stuck at 200% struggling to garbage collect, with the heap size not dropping, causing significantly increased request latency, until the commit was reverted. We were using the default sampler (adaptive). Does anybody have any thoughts as to why this would happen?
4 replies
reporter deployed at about 19:00, and CPU spikes at about 1:00, reporter removed at 8:00
Has anyone tried running Kamon in a Java application using the Akka framework?
A. Alonso Dominguez
Hi all, probably this has been discussed earlier (in that case, sorry for bringing it back). We are using Kamon with logback context tags in a series of Akka HTTP microservices, and we see that, sometimes, the logs don't get populated with the additional context tags. It usually takes several restarts of a service before the tags and their values show up in the logs. We think it is related to this issue; is anyone aware of what the root cause could be? kamon-io/Kamon#919
Elias Court
Hi all - we're having an issue running the Kamon host-metrics module on the latest adoptopenjdk/openjdk8-openj9 alpine slim image (jdk8u282-b08_openj9-0.24.0-alpine-slim).
If we disable the host metrics module or go back to the previous image (jdk8u275-b01_openj9-0.23.0-alpine-slim) then everything appears to be fine.
The error we're getting is the following:
 _  __                _        ______
| |/ /               | |       \ \ \ \
| ' / __ _ _ __   ___| | __ _   \ \ \ \
|  < / _` | '_ \ / _ \ |/ _` |   ) ) ) )
| . \ (_| | | | |  __/ | (_| |  / / / /
|_|\_\__,_|_| |_|\___|_|\__,_| /_/_/_/
Running with Kanela, the Kamon Instrumentation Agent :: (v1.0.6)
11:17:28.309 [main] INFO  kamon.status.page.StatusPage  - Status page started on -
Unhandled exception
Type=Segmentation error vmState=0x00040000
J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000080
Handler1=00007FC2C2C12DA0 Handler2=00007FC2C24F3020 InaccessibleAddress=0000000000000000
RDI=00007FC2817D85A0 RSI=00007FC2C3F31370 RAX=34F543EFC42A4CC0 RBX=00007FC2780139D0
RCX=00007FC278013AC0 RDX=0000000000000000 R8=00007FC2C406FF60 R9=00007FC27803C418
R10=00007FC2817DC9E0 R11=00007FC2A80F78C9 R12=00007FC2817D8648 R13=00007FC2A8105000
R14=0000000000000001 R15=00007FC2817DCBB0
RIP=00007FC2C3F31980 GS=0000 FS=0000 RSP=00007FC2817D85A0
EFlags=0000000000010206 CS=0033 RBP=0000000000000001 ERR=0000000000000000
TRAPNO=000000000000000D OLDMASK=0000000000000000 CR2=0000000000000000
xmm0 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm1 00ff000000000000 (f: 0.000000, d: 7.063274e-304)
xmm2 65677261742f3074 (f: 1949249664.000000, d: 3.040402e+180)
xmm3 2f30303a30303030 (f: 808464448.000000, d: 2.133265e-81)
Bartłomiej Wierciński

hi, I'm unable to compile the project locally because i'm getting

[error] lmcoursier.internal.shaded.coursier.error.FetchError$DownloadingArtifacts: Error fetching artifacts:
[error] file:/home/b.wiercinski/.m2/repository/io/netty/netty-transport-native-epoll/4.1.50.Final/netty-transport-native-epoll-4.1.50.Final-linux-x86_64.jar: not found: /home/b.wiercinski/.m2/repository/io/netty/netty-transport-native-epoll/4.1.50.Final/netty-transport-native-epoll-4.1.50.Final-linux-x86_64.jar

any hints?

running on fedora 33 with sbt 1.4.7
Bartłomiej Wierciński
ok, I've manually downloaded the missing file from https://repo1.maven.org/maven2/io/netty/netty-transport-native-epoll/4.1.50.Final/netty-transport-native-epoll-4.1.50.Final-linux-x86_64.jar, but it was strange that sbt didn't download the file by itself
Ramakrishna Hande

Hi, I am working with Kamon Zipkin to trace requests. The request involves:

1) a call to the database that returns the result as a Monix Task, say Task[T]
2) using that result to make calls to a different webservice, which returns a Future[HttpResponse]
3) using the result from 2 to make another database call

Before step 1 the trace_id is present, but it gets lost after step 1 and nothing afterwards gets traced.

If I replace 1) with a static list of records instead of a DB call, tracing happens successfully.

"io.kamon" %% "kamon-core"         % "2.1.9",
"io.kamon" %% "kamon-scala-future" % "2.1.9",
"io.kamon" %% "kamon-executors"    % "2.1.9",
"io.kamon" %% "kamon-zipkin"       % "2.1.9",
"io.kamon" %% "kamon-logback"      % "2.1.9"

Is there any known issue with Monix tasks w.r.t. tracing?

Thanks in advance
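A plausible cause (not confirmed for this case) is that the database driver completes the Task on its own thread pool, which Kamon's executor instrumentation never wraps, so the context stored in a thread-local does not travel with the continuation. A minimal stdlib sketch of that failure mode, in plain Java with no Kamon or Monix code; the class and field names here are made up for illustration:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextLossDemo {
    // Stand-in for Kamon's context storage: a plain thread-local.
    static final ThreadLocal<String> traceId = new ThreadLocal<>();

    // The continuation runs on a pool thread that never had the
    // thread-local set, so the trace id is gone there.
    public static String continueOn(ExecutorService pool) {
        try {
            return pool.submit(() -> traceId.get()).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService dbPool = Executors.newSingleThreadExecutor();
        traceId.set("abc123");                  // set on the caller thread
        System.out.println(traceId.get());      // abc123
        System.out.println(continueOn(dbPool)); // null: context lost
        dbPool.shutdown();
    }
}
```

This is why kamon-executors helps only for pools it can see: if the driver hands the result back on an internal, uninstrumented pool, the context never follows.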

Yaroslav Derman
Hello @SimunKaracic @ivantopo what can you say about kamon-io/Kamon#926 ?
4 replies
federico cocco
Hello. May I ask if there is any documentation available for https://mvnrepository.com/artifact/io.kamon/kamon-cats-io_2.13? Thanks!
2 replies
Would it be possible for the build to produce a kamon-bundle sources JAR?
1 reply
I mean currently there is one, but it's mostly empty
With the use of sbt-assembly and shading rules, I'm not sure if it would be easy (or possible), but it would be nice to have!
Hmm, an old unresolved SO question about the same...
Hi Team, I am trying to integrate Kamon into a Lagom microservice. At server startup I see the error below, and I can't see any metrics captured on the configured port. Please help me with this.
7 replies
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
2021-03-15T08:22:21.461Z [ERROR][main] [CORR-ID -
] Init.attachInstrumentation 71 - Failed to attach the instrumentation agent
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at kamon.Init.attachInstrumentation(Init.scala:65)
at kamon.Init.attachInstrumentation$(Init.scala:60)
at kamon.Kamon$.attachInstrumentation(Kamon.scala:19)
at kamon.Init.init(Init.scala:36)
at kamon.Init.init$(Init.scala:35)
at kamon.Kamon$.init(Kamon.scala:19)
at kamon.Kamon.init(Kamon.scala)
at com.retisio.arc.account.impl.module.AccountModule.configure(AccountModule.java:38)
at com.google.inject.AbstractModule.configure(AbstractModule.java:61)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:344)
at com.google.inject.spi.Elements.getElements(Elements.java:103)
at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:173)
at com.google.inject.AbstractModule.configure(AbstractModule.java:61)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:344)
at com.google.inject.spi.Elements.getElements(Elements.java:103)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:137)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:103)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.google.inject.Guice.createInjector(Guice.java:78)
at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:200)
at play.api.inject.guice.GuiceApplicationBuilder.build(GuiceApplicationBuilder.scala:155)
at play.api.inject.guice.GuiceApplicationLoader.load(GuiceApplicationLoader.scala:21)
at play.core.server.ProdServerStart$.start(ProdServerStart.scala:54)
at play.core.server.ProdServerStart$.main(ProdServerStart.scala:30)
at play.core.server.ProdServerStart.main(ProdServerStart.scala)
Caused by: java.lang.IllegalStateException: No compatible attachment provider is available
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.install(ByteBuddyAgent.java:416)
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:248)
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:223)
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:210)
at kamon.bundle.Bundle$.$anonfun$attach$3(Bundle.scala:50)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at kamon.bundle.Bundle$.withInstrumentationClassLoader(Bundle.scala:104)
at kamon.bundle.Bundle$.attach(Bundle.scala:50)
at kamon.bundle.Bundle.attach(Bundle.scala)
... 29 common frames omitted
2021-03-15T08:22:21.817Z [INFO][main] [CORR-ID -
Paweł Kiersznowski

greetings everyone! I'm using Kamon 2.1.3 (kamon-bundle and kamon-datadog dependencies) with Play 2.7.3. I see that Kamon starts up successfully and records the metrics I created, but it doesn't produce any span metrics, even though the Datadog span reporter is turned on. I don't see them on Datadog or on the Kamon status page.

The span metrics stopped being recorded once I migrated from Kamon 1.x to 2; it used to work just fine back then. Is there anything I should add in code besides the configuration in application.conf? thanks!

37 replies
Dmitriy Zakomirnyi
Hello team,
After some research in the documentation I didn't find an answer to whether Kamon is OpenTracing and/or OpenTelemetry compliant. Could you please advise?
2 replies
Declan Neilson
morning all, i just saw this: https://blog.gradle.org/jcenter-shutdown, and noticed that io.kamon's sbt-kanela-runner is only deployed to Bintray (unless I'm missing something). is there any short-term intent to deploy it to Maven Central or similar as well, and if not, is there a migration path documented anywhere?
4 replies
René Vangsgaard
Hello, I just configured a Play 2.8 project with Kamon, following the guide on the website. When running behind NGINX I get this error message in the log file.
[error] a.a.RepointableActorRef - Error in stage [kamon.instrumentation.akka.http.ServerFlowWrapper$$anon$1$$anon$2]: requirement failed: HTTP/1.0 responses must not have a chunked entity
java.lang.IllegalArgumentException: requirement failed: HTTP/1.0 responses must not have a chunked entity
    at scala.Predef$.require(Predef.scala:338)
    at akka.http.scaladsl.model.HttpResponse.<init>(HttpMessage.scala:518)
    at akka.http.scaladsl.model.HttpResponse.copyImpl(HttpMessage.scala:565)
    at akka.http.scaladsl.model.HttpResponse.withEntity(HttpMessage.scala:543)
    at kamon.instrumentation.akka.http.ServerFlowWrapper$$anon$1$$anon$2$$anon$5.onPush(ServerFlowWrapper.scala:164)
    at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541)
    at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:495)
    at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
    at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:625)
    at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:502)
15 replies
René Vangsgaard
I am using Kamon with a Play project. Can I disable tracing of specific URL mappings? For example no need to trace assets
1 reply
René Vangsgaard
Do the instrumenter support using Netty as Play backend?
1 reply
Dmitriy Zakomirnyi
I wonder whether the JMX reporter is still supported? There is no JMX reporter in the main Kamon repository, but there is a dedicated one, kamon-jmx, whose latest release is 0.6.7, dated Jun 2017.
11 replies
A new release of Kamon is here, featuring new Spring MVC and Spring WebClient support!
Try it out and let us know if you have any issues or feedback!
Darren Bishop

hi folks. have some questions that have bugged me for a while…

the context: Monix-Reactive stream processing, where messages are buffered when back-pressure is detected; the source stream is grouped/partitioned (100 groups) and there is a back-pressure buffer per group
I use a range sampler to track the size of these buffers, incrementing by 1 as each event goes in and decrementing by the size of the batch of events that is released downstream, i.e. when back-pressure is relaxed
also note that there are several threads across several instances running this stream

the questions:

  1. in cloudwatch/grafana/xxx, how best to interpret or represent that metric?
  2. how does charting the metric over a 1s vs 1m vs 5m period/range alter the interpretation?
  3. is it vital to have alignment between the tick-interval in the Kamon config and the period/range specified in the dashboard?
  4. if tick-interval is 1m and the dashboard is set to 5m, would a sum aggregation over-report the buffer size, i.e. will it be shown to be 5x bigger than it ever actually is?
30 replies
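On question 4, the arithmetic is worth spelling out: a range sampler reports one value per tick, so a sum over a dashboard window spanning several ticks multiplies a steady value, while max (or avg) preserves it. A small sketch, assuming a hypothetical buffer that sits at a constant 100 entries across five 1m ticks:

```java
import java.util.stream.IntStream;

public class TickAggregation {
    // One reported value per 1m tick; the (assumed) buffer holds a steady
    // 100 entries, so a 5m dashboard window sees five samples of 100.
    static int[] ticksIn5mWindow = {100, 100, 100, 100, 100};

    static int sum(int[] ticks) { return IntStream.of(ticks).sum(); }

    static int max(int[] ticks) { return IntStream.of(ticks).max().orElse(0); }

    public static void main(String[] args) {
        System.out.println(sum(ticksIn5mWindow)); // 500: 5x the real size
        System.out.println(max(ticksIn5mWindow)); // 100: the real size
    }
}
```

So for a gauge-like metric such as a buffer size, sum over a window larger than the tick interval inflates the value by (window / tick-interval); max or avg is usually the honest representation.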
Srepfler Srdan
Hi all, what's the current state of support for Lagom? I'm not that interested in the hot-reload functionality as much as actor processing metrics (persistent actors as well), and ideally tracing on the service endpoints and the API client parts.
9 replies
Hi there
I've got an akka actor that manages a queue. The size of the queue is tracked by a RangeSampler using increment/decrement... This works fine normally.
But when this actor fails and a new actor with a fresh queue is started, I need to reset this RangeSampler back to 0...
There is no reset() on RangeSampler, and I don't know the current value, so I can't do a decrement(currentValue).
Any ideas?
5 replies
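One workaround for the question above (a sketch, not a Kamon API: the class and the commented-out sampler calls are made-up stand-ins) is to mirror the sampler's net count in an AtomicLong maintained next to it, so that a restart handler can decrement by whatever is still outstanding:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ResettableQueueGauge {
    // Shadow of the sampler's net value; the real increments/decrements
    // would also go to the Kamon RangeSampler (shown here as comments).
    private final AtomicLong outstanding = new AtomicLong();

    public void enqueue()           { outstanding.incrementAndGet(); /* sampler.increment() */ }

    public void dequeue(long batch) { outstanding.addAndGet(-batch); /* sampler.decrement(batch) */ }

    // On actor restart: zero the shadow and report how much was still
    // counted, so the caller can issue sampler.decrement(current).
    public long reset() {
        return outstanding.getAndSet(0);
    }

    public long value() { return outstanding.get(); }
}
```

On restart you would call reset() and pass the returned value to the sampler's decrement, bringing it back to zero without knowing the value in advance.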
Sebastien Vermeille
Hi there, I have a project using Spring Boot + akka + kamon. Can we avoid using aspectjweaver for it to work? aspectjweaver causes some issues for us as we try to migrate to Java 15 now.
Any help is appreciated thank you
Sebastien Vermeille
oh, Kanela is there now :D I'm a bit out of date, will try that
50 replies
Any thoughts about the Grafana Cloud integrated logs/metrics/traces stack? Is there any way Kamon can support all of it?
2 replies
Hi Team, can someone help me out with Grafana dashboards (in .json format) for Kamon's Akka actor metrics?
2 replies
Sebastien Vermeille
Hello guys, do you know if Kanela is planned to support Java 16 in the near future?
5 replies
Dmitriy Zakomirnyi
I wonder if anyone succeeded with Kamon and akka-grpc server side?
Sherif Mohamed

Hello guys, sorry for the silly simple question, our team is trying to massively bump Kamon from 1.x to latest, basically replacing:

def aspectjWeaver(version: String = "1.8.2"): ModuleID = "org.aspectj" % "aspectjweaver" % version % "java-agent"
def kamonCore(version: String = "1.1.6"): ModuleID = "io.kamon" %% "kamon-core" % version
def kamonSystemMetrics(version: String = "1.0.1"): ModuleID = "io.kamon" %% "kamon-system-metrics" % version
def kamonAkka25(version: String = "1.1.2"): ModuleID = "io.kamon" %% "kamon-akka-2.5" % version
def kamonAkkaHttp25(version: String = "1.1.0"): ModuleID = "io.kamon" %% "kamon-akka-http-2.5" % version
def kamonAkkaRemote25(version: String = "1.1.0"): ModuleID = "io.kamon" %% "kamon-akka-remote-2.5" % version
def kamonScala(version: String = "1.0.0-RC4"): ModuleID = "io.kamon" %% "kamon-scala" % version
def kamonPrometheus(version: String = "1.1.1"): ModuleID = "io.kamon" %% "kamon-prometheus" % version
def kamonJdbc(version: String = "1.0.2"): ModuleID = "io.kamon" %% "kamon-jdbc" % version

With simply

def kamonBundle(version: String = "2.1.14"): ModuleID = "io.kamon" %% "kamon-bundle" % version
def kamonPrometheus(version: String = "2.1.14"): ModuleID = "io.kamon" %% "kamon-prometheus" % version

From what I saw I have two questions:

  • Is the Kanela agent already present with the Kamon bundle?
  • More importantly, we rely on Akka 2.5.22; does the Kamon bump require a more recent version?

I am trying to estimate the work required for the bump; I can already see a lot of renamings! Thanks in advance.

4 replies
Not sure if anyone has hints, but I am trying to use the kanela 1.0.9 jar via java agent with "io.kamon" %% "kamon-bundle" % "2.1.14" and "io.kamon" %% "kamon-datadog" % "2.1.14". I am getting traces and system metrics, but not any of my actor metrics, like akka.actor.mailbox-size. Akka version 2.5.23. I currently even have the doomsday wildcard enabled: instrumentation { akka { ask-pattern-timeout-warning = lightweight filters { actors { doomsday-wildcard = on track { includes = ["**"] ...
7 replies
Khalid Reid

Hello. I am having problems with my Play Framework (2.8.7) app reporting spans to New Relic. I have it working for my Akka HTTP app, but for some reason the Play app is only showing metrics, not spans. The Kanela agent is set up and I see the log below:

BatchDataSender configured with endpoint https://metric-api.newrelic.com/metric/v1

1 reply
Dmitriy Zakomirnyi
Keep getting /{}/{} operations despite kamon.http-server.default.tracing.operations.mappings and kamon.akka.http.server.tracing.operations.mapping overrides
1 reply
Tobias Eriksson
is there some article about the performance penalty / impact that Kamon and its instrumentation have?
1 reply
so mostly interested in latency, but also the added CPU and memory needed