Ivan Topolnjak
hey there
yeap, it is free
no restrictions at all
basically, if you can find it on our github then it's free :joy:
Thanks Ivan. I was a little confused by this https://kamon.io/apm/pricing/ (Starter: up to 2 instances per service, up to 3 services), but as I understand it, that only applies if we use APM.
Ivan Topolnjak
correct, that pricing is only for APM
ok. thank you.
Vish Ramachandran
@ivantopo A few weeks ago I asked a question about draining metrics to reporters before shutdown. The suggestion was to run Kamon.stopModules() from a shutdown hook to ensure that all metrics are drained to reporters and backends. I did exactly that and waited for the returned future to complete, but every now and then I still see missed samples. There is clear evidence from the logs that the metric was published and the stopModules call finished successfully, yet a sample incremented moments before shutdown was not drained to the reporters. Are there any other calls needed to drain them?
10 replies
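The drain-on-shutdown pattern described above can be sketched like this. Note that `stopModules` below is a local stub standing in for `Kamon.stopModules()`, so this only illustrates the hook wiring, not Kamon's internals:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

// Sketch of draining metrics on shutdown. `stopModules` is a stand-in stub
// here; in a real app it would be Kamon.stopModules(), whose future completes
// once the reporter modules have stopped.
object ShutdownDrain {
  def stopModules(): Future[Unit] = Future.successful(())

  def install(): Unit =
    sys.addShutdownHook {
      // Block the hook until the modules report completion, bounded so a
      // stuck reporter cannot hang JVM shutdown indefinitely.
      Await.result(stopModules(), 10.seconds)
    }
}
```

Even with this wiring, samples recorded between the last tick and the hook firing depend on the reporter flushing a final snapshot, which may explain the occasional misses described above.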
Alexis Hernandez
I have this setting for the module logging slow queries, but it seems it's ignored by Kamon. Any ideas?
kamon {
  instrumentation.jdbc.statements.threshold = 5 seconds
}
4 replies
Challen Herzberg-Brovold
Hello everyone
I am trying to use Kamon with an http4s app to report metrics to New Relic, and I have a couple questions:
1) The documentation says to call Kamon.init() as the very first operation in your code. I am deploying my service as a war, and I get a java.lang.reflect.InvocationTargetException when I call it in the context listener. Where should I be calling it?
2) When I'm using the Kamon new relic reporter, does it use new relic transactions at all? I.e. will my service be able to integrate with New Relic's service map?
6 replies
Thank you!
Challen Herzberg-Brovold
kamon-io/Kamon#611 makes me think not, but wondering if this has been addressed since then.
David Leonhart
Is there some up-to-date documentation on how to configure Kamon 2.1.x for Lagom 1.6.x?
I am currently running into the same problems as described here: https://github.com/kamon-io/Kamon/issues/563#issuecomment-562867888
Kamon works when running from sbt, but it doesn't work with the production configuration.
1 reply
hello, is there nothing like include-environment-tags for the Zipkin reporter?
4 replies
Franco Albornoz
Hey guys, is there a way to enable the kamon-play instrumentation for Play 2.8 without using kamon-bundle, using kamon-core + kamon-play instead?
4 replies
Daniel Leon
Hello! Has anyone needed Kafka metrics so far? I see kamon-kafka hasn't been imported into the main project.

Hi! Is there a way to specify custom bucket limits? For example, I used seconds, Kamon.histogram("tsr-last-trained-point", MeasurementUnit.time.seconds), and I get buckets like this:

tsr_last_trained_point_seconds_bucket{le="0.005",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.01",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.025",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.05",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.075",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.1",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.25",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.5",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="0.75",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="1.0",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="2.5",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="5.0",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="7.5",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="10.0",granularity="HOUR"} 0.0
tsr_last_trained_point_seconds_bucket{le="+Inf",granularity="HOUR"} 50.0
tsr_last_trained_point_seconds_count{granularity="HOUR"} 50.0
tsr_last_trained_point_seconds_sum{granularity="HOUR"} 102740.0

But I want buckets like 3600 (hour), 86400 (day), a week, etc. How can I define a histogram like this?

3 replies
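If I remember the kamon-prometheus reference configuration correctly, bucket boundaries can be overridden per metric. Treat the exact keys below as an assumption to verify against the reference.conf of your kamon-prometheus version:

```hocon
# Sketch, assuming kamon-prometheus's custom-buckets support; check the key
# names against your version's reference.conf before relying on this.
kamon.prometheus.buckets {
  custom {
    # upper bounds in the metric's unit (seconds here): 1h, 1d, 1w
    "tsr-last-trained-point" = [3600, 86400, 604800]
  }
}
```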
Ivan Topolnjak
dear peoplez!
Daniel Leon
@ivantopo any plans to migrate kamon-kafka into the main repository?
19 replies
David Leonhart

I already got Kamon working using the full kamon-bundle, but I can see that it adds a lot of dependencies which I don't need/want.
What's the best way to pick only the things I need but still get the same auto-instrumentation as if I used the full bundle?

Let's say I only want to use: kamon-executors, kamon-scala-future, kamon-cassandra.
I think I could just exclude all other modules via library exclusions, but IMO the other way around, including only what I want, would be better.
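Picking individual modules could be sketched in build.sbt like this; the version number is illustrative and should match your other Kamon artifacts:

```scala
// Sketch: depend on individual Kamon modules instead of kamon-bundle.
// Each module is expected to pull in kamon-core transitively.
libraryDependencies ++= Seq(
  "io.kamon" %% "kamon-core"         % "2.1.0",
  "io.kamon" %% "kamon-executors"    % "2.1.0",
  "io.kamon" %% "kamon-scala-future" % "2.1.0",
  "io.kamon" %% "kamon-cassandra"    % "2.1.0"
)
```

One caveat, as far as I understand it: the automatic agent attachment comes from kamon-bundle itself, so with individual instrumentation modules you may need to start the JVM with the Kanela agent (-javaagent) for the bytecode instrumentation to take effect. Worth verifying for your setup.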

Ivan Topolnjak
in theory, kamon-cassandra should only bring in kamon-core, same with kamon-scala-future
oh, and common
are you seeing some extra dependencies being pulled?
David Leonhart
Oh ok, sorry, my bad. I was looking into the Kamon build.sbt and saw that kamon-bundle dependsOn a lot of modules, and I mistakenly assumed that all those modules would be automatically pulled in if I add kamon-bundle as a dependency to my project.
I just checked the dependency tree of my project and I only see kamon-bundle and kamon-core being pulled in if I just specify kamon-bundle as a dependency. So it looks fine.
I guess I need to explicitly specify all the modules I want as dependencies now. Which is exactly what I want. :-)
Ivan Topolnjak
David Leonhart
Maybe one more question.
What's the benefit of using the Kamon metrics instead of just using the Prometheus Java client metrics directly?
Do you consider the Kamon metrics (Counter, Gauge, etc.) more advanced, or is it just a matter of an additional abstraction over the actual metrics implementation, which is beneficial?
Ivan Topolnjak
from the pure metrics point of view, the main benefit is that Kamon allows you to move from Prometheus to other solutions, or use several solutions at the same time, without having to change your instrumented code
and on top of that, Kamon also gives you traces and context propagation :D
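As an illustration of that point: which backends receive data is decided by classpath and configuration rather than by the instrumented code. The module key names below are illustrative assumptions and should be checked against each reporter's reference.conf:

```hocon
# Sketch: the same Kamon.counter(...)/Kamon.histogram(...) code keeps working;
# enabling or disabling reporters is purely a configuration concern.
# Module key names here are illustrative, not verified.
kamon.modules {
  prometheus-reporter.enabled = yes
  zipkin-reporter.enabled = no
}
```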
David Leonhart
And one more thing about my last question :-) I don't understand how the status page can work if I don't even see the kamon-status-page module being added?
Ivan Topolnjak
you don't need to explicitly start it, if it is in the classpath then Kamon will pick it up

using Kamon to report metrics from a Play application (Play 2.6.6)

libraryDependencies += "io.kamon" %% "kamon-bundle" % "2.1.0"
libraryDependencies += "io.kamon" %% "kamon-prometheus" % "2.1.0"

I create a custom metric that captures the number of requests per customer account (there are a handful of those). I want to get the method called and the HTTP response status for each request. I initialize it:
private val requestsByAccountCounter = Kamon.counter("requests_by_account")

I have an ActionBuilder where I increment the counter

requestsByAccountCounter.withTag("account", accountInHeader(request))
        .withTag("method", s"${uriBaseFromPath(request.path)}")
        .withTag("http_status", r.header.status).increment()

Three metrics are generated:
requests_by_account_total{account="TestAccountId"} 0.0
requests_by_account_total{account="TestAccountId",method="/data/catalog"} 0.0
requests_by_account_total{account="TestAccountId",http_status="200",method="/data/catalog"} 34.0

I know I can increment each tag, but ideally I'd like only the 3rd metric to be generated.
How can I accomplish that?

1 reply
For others who may be as brain dead as me:
 val tagSet = TagSet.builder().add("account", accountFromHeader(request))
          .add("method", s"${contextFromRequestPath(request.path)}")
          .add("http_status", r.header.status).build()
Hi, after upgrading the Kamon libs from 1.x to 2.x I'm getting this on startup:
Failed to read configuration for module [kamon-scala]
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'name'
The service-name is provided in the app configuration and was already there before the upgrade.
I couldn't find anything related to a 'name' config in the migration guide: https://kamon.io/docs/latest/guides/migration/from-1.x-to-2.0/
1 reply
István Gansperger

Hi, I have a very strange issue with Play's WSClient: when .url is called with a URL that contains spaces, the Kamon-instrumented client fails with:

java.net.URISyntaxException: Illegal character in path at index 22: http://localhost/some url with spaces
        at java.base/java.net.URI$Parser.fail(URI.java:2913)
        at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
        at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
        at java.base/java.net.URI$Parser.parse(URI.java:3114)
        at java.base/java.net.URI.<init>(URI.java:600)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.uri$lzycompute(StandaloneAhcWSRequest.scala:61)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.uri(StandaloneAhcWSRequest.scala:54)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$2.path(PlayClientInstrumentation.scala:82)
        at kamon.instrumentation.http.OperationNameSettings.operationName(OperationNameSettings.scala:9)
        at kamon.instrumentation.http.HttpClientInstrumentation$Default.createClientSpan(HttpClientInstrumentation.scala:135)
        at kamon.instrumentation.http.HttpClientInstrumentation$Default.createHandler(HttpClientInstrumentation.scala:105)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$1.apply(PlayClientInstrumentation.scala:43)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$1.apply(PlayClientInstrumentation.scala:40)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.execute(StandaloneAhcWSRequest.scala:219)
        at play.api.libs.ws.ahc.AhcWSRequest.execute(AhcWSRequest.scala:264)
        at play.api.libs.ws.ahc.AhcWSRequest.execute(AhcWSRequest.scala:260)
        at play.api.libs.ws.ahc.AhcWSRequest.get(AhcWSRequest.scala:246)
        ... own code
        at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:430)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)

but if Kamon is not running, it works fine. This seems to be a bug in PlayClientInstrumentation to me. Does Kamon expect the URL to already be encoded? WSClient seems to work fine with non-encoded URLs.

4 replies
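As a workaround sketch (an assumption, not a confirmed fix), percent-encoding the URL before handing it to WSClient keeps java.net.URI, which the stack trace shows the interceptor relying on, happy:

```scala
import java.net.URI

// Workaround sketch: build a percent-encoded URL before passing it to
// WSClient.url(...). The multi-argument URI constructor quotes illegal
// characters such as spaces in the path.
val encoded = new URI("http", "localhost", "/some url with spaces", null).toASCIIString
// encoded == "http://localhost/some%20url%20with%20spaces"
```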
Cédric Gourlay

Hey kamon people,
Thanks for this lib :D
I just tried the lib at 2.1.3 and found some strange behaviour when overriding configuration in application.conf; in particular, the modules are not changed (reconfigured).
I think the init(config) method is not correct; let me know if I understand this correctly:

  • loadModules will register the reconfigure hook, so it needs to be set up first
  diff --git a/core/kamon-core/src/main/scala/kamon/Init.scala b/core/kamon-core/src/main/scala/kamon/Init.scala
index 75b4f224..0ac29bd1 100644
--- a/core/kamon-core/src/main/scala/kamon/Init.scala
+++ b/core/kamon-core/src/main/scala/kamon/Init.scala
@@ -41,8 +41,8 @@ trait Init { self: ModuleLoading with Configuration with CurrentStatus =>
   def init(config: Config): Unit = {
-    self.reconfigure(config)
+    self.reconfigure(config)

did I miss something? is it something known?

Cédric Gourlay
ok so it doesn't work... no idea why my module is not reloaded, maybe I'm missing something
13 replies
Martin Vanek
Good evening gentlemen,
I have to upgrade one project from Kamon 0.5.2 (really) to Kamon 1.1, and later maybe to 2.x (the project is currently stuck on Akka 2.4.2 with spray-can).
We also use https://github.com/kamon-io/kamon-jmx to import some metrics from JMX and then export them together with Kamon's native metrics. The trouble is that kamon-jmx only exists up to version 0.6.7 and is not compatible with 1.0 and later. Is there a way to do what kamon-jmx does in version 1.0 or later? Thanks for any hints.
2 replies
Zvi Mints
Hey, I'm getting [kamon-akka.actor.default-dispatcher-2] [akka://kamon/user/metrics] null
with the following configuration. Does anyone know why?
kamon {
  datadog {
    hostname = datadog
    port = 8125
    application-name = ${?app.name}
  }
  metric {
    tick-interval = 1 seconds
    track-unmatched-entities = yes
  }
}
Daniel Leon
can someone take a look over kamon-io/Kamon#815, please?
@ivantopo, I implemented the solution I suggested to get access to the internal Kafka metrics, by implementing the MetricsReporter listener from kafka-clients.
PS: is there a repository for the Kamon documentation, so I can open a PR there as well with the proposed addition to kamon-kafka?
3 replies
Tim Spence

We’re running apps with kamon in Docker. If we base our apps on openjdk:8-jre-slim then we get the following errors:

java.lang.reflect.InvocationTargetException: null
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at kamon.Init.attachInstrumentation(Init.scala:57)
    at kamon.Init.attachInstrumentation$(Init.scala:52)
    at kamon.Kamon$.attachInstrumentation(Kamon.scala:19)
    at kamon.Init.init(Init.scala:43)
    at kamon.Init.init$(Init.scala:42)
    at kamon.Kamon$.init(Kamon.scala:19)
    at com.itv.vindler.Main$.<clinit>(Main.scala:17)
    at com.itv.vindler.Main.main(Main.scala)
Caused by: java.lang.IllegalStateException: No compatible attachment provider is available
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.install(ByteBuddyAgent.java:416)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:248)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:223)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:210)
    at kamon.bundle.Bundle$.$anonfun$attach$3(Bundle.scala:34)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at kamon.bundle.Bundle$.withInstrumentationClassLoader(Bundle.scala:88)
    at kamon.bundle.Bundle$.attach(Bundle.scala:34)
    at kamon.bundle.Bundle.attach(Bundle.scala)

If we base it on openjdk:8-jdk-slim instead then it's fine. Does anyone have any suggestions? I didn't think the actual runtime would differ between the JRE and the JDK; the JRE just shouldn't have javac etc. on the path.

Ivan Topolnjak
hey @TimWSpence
Tim Spence
Hey @ivantopo :)