Ivan Topolnjak
are you seeing some extra dependencies being pulled?
David Leonhart
Oh ok, sorry, my bad. I was looking into the Kamon build.sbt and saw that kamon-bundle dependsOn a lot of modules, and I mistakenly assumed that all those modules would be pulled in automatically if I add kamon-bundle as a dependency to my project.
I just checked the dependency tree of my project and I only see kamon-bundle and kamon-core being pulled in if I just specify kamon-bundle as a dependency. So it looks fine.
I guess I need to explicitly specify all the modules I want as dependencies now. Which is exactly what I want. :-)
Ivan Topolnjak
David Leonhart
Maybe one more question.
What's the benefit of using the Kamon metrics instead of just using the Prometheus Java client metrics directly?
Do you consider the Kamon metrics (Counter, Gauge, etc.) more advanced, or is it just a matter of an additional abstraction over the actual metrics implementation, which is beneficial?
Ivan Topolnjak
from the pure metrics point of view, the main benefit would be that using Kamon allows you to move from Prometheus to other solutions, or use several solutions at the same time without having to change your instrumented code
and on top of that you add that Kamon also gives traces and context propagation :D
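The portability argument can be sketched in plain Scala. This is a toy stand-in, not Kamon's actual internals or API: instrumented code talks to one small metrics facade, and any number of backend reporters can be attached or swapped without touching that code.

```scala
// Toy sketch of the reporter-facade idea; all names here are illustrative,
// not Kamon's real API.
trait Reporter {
  def report(metric: String, value: Long): Unit
}

object Metrics {
  private var reporters = List.empty[Reporter]
  private var counters  = Map.empty[String, Long].withDefaultValue(0L)

  // Backends (Prometheus, Datadog, ...) would each register one of these.
  def addReporter(r: Reporter): Unit = reporters ::= r

  // Instrumented code only ever calls this, regardless of backend.
  def increment(metric: String): Unit = {
    val v = counters(metric) + 1
    counters += metric -> v
    reporters.foreach(_.report(metric, v))
  }

  def value(metric: String): Long = counters(metric)
}
```

Swapping a console reporter for a hypothetical PrometheusReporter then changes nothing in the code that calls `Metrics.increment`.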
David Leonhart
And one more thing about my last question :-) I don't understand how the status page can work if I don't even see the kamon-status-page module being added?
Ivan Topolnjak
you don't need to explicitly start it, if it is in the classpath then Kamon will pick it up

using kamon to report metrics from a play application (play 2.6.6)

libraryDependencies += "io.kamon" %% "kamon-bundle" % "2.1.0"
libraryDependencies += "io.kamon" %% "kamon-prometheus" % "2.1.0"

I create a custom metric that captures the number of requests per customer account (there are a handful of those). I want to get the method called, and the HTTP response status for each request. I initialize it:
private val requestsByAccountCounter = Kamon.counter("requests_by_account")

I have an ActionBuilder where I increment the counter

requestsByAccountCounter.withTag("account", accountInHeader(request))
        .withTag("method", s"${uriBaseFromPath(request.path)}")
        .withTag("http_status", r.header.status).increment()

3 metrics are generated
requests_by_account_total{account="TestAccountId"} 0.0
requests_by_account_total{account="TestAccountId",method="/data/catalog"} 0.0
requests_by_account_total{account="TestAccountId",http_status="200",method="/data/catalog"} 34.0

I know I can increment each tag, but ideally I'd like only the 3rd metric to be generated.
How can I accomplish that?

1 reply
For others who may be as brain dead as me:
 val tagSet = TagSet.builder().add("account", accountFromHeader(request))
          .add("method", s"${contextFromRequestPath(request.path)}")
          .add("http_status", r.header.status).build()
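The output above suggests that each chained withTag call materializes its own partially-tagged instrument, which is where the first two (zero-valued) series come from. Assuming the Kamon 2.x API (TagSet plus the counter's withTags), building the full tag set first and tagging once should yield only the fully-tagged series; accountFromHeader and contextFromRequestPath are the helpers from the snippet above:

```scala
import kamon.Kamon
import kamon.tag.TagSet

// Build the complete tag set first, then create the tagged instrument once,
// so only one time series per full tag combination is registered.
val tags = TagSet.builder()
  .add("account", accountFromHeader(request))
  .add("method", contextFromRequestPath(request.path))
  .add("http_status", r.header.status.toString) // tag value as a string
  .build()

Kamon.counter("requests_by_account").withTags(tags).increment()
```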
Hi, after upgrading the Kamon libs from 1.x to 2.x I'm getting this on startup:
Failed to read configuration for module [kamon-scala]
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'name'
The service name is provided in the app configuration and was already there before the upgrade.
Couldn't find anything related to config 'name' in the migration guide. https://kamon.io/docs/latest/guides/migration/from-1.x-to-2.0/
1 reply
István Gansperger

Hi, I have a very strange issue with Play's WSClient: when .url is called with a URL that contains spaces, the Kamon-instrumented client fails with:

java.net.URISyntaxException: Illegal character in path at index 22: http://localhost/some url with spaces
        at java.base/java.net.URI$Parser.fail(URI.java:2913)
        at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
        at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
        at java.base/java.net.URI$Parser.parse(URI.java:3114)
        at java.base/java.net.URI.<init>(URI.java:600)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.uri$lzycompute(StandaloneAhcWSRequest.scala:61)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.uri(StandaloneAhcWSRequest.scala:54)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$2.path(PlayClientInstrumentation.scala:82)
        at kamon.instrumentation.http.OperationNameSettings.operationName(OperationNameSettings.scala:9)
        at kamon.instrumentation.http.HttpClientInstrumentation$Default.createClientSpan(HttpClientInstrumentation.scala:135)
        at kamon.instrumentation.http.HttpClientInstrumentation$Default.createHandler(HttpClientInstrumentation.scala:105)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$1.apply(PlayClientInstrumentation.scala:43)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$1.apply(PlayClientInstrumentation.scala:40)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.execute(StandaloneAhcWSRequest.scala:219)
        at play.api.libs.ws.ahc.AhcWSRequest.execute(AhcWSRequest.scala:264)
        at play.api.libs.ws.ahc.AhcWSRequest.execute(AhcWSRequest.scala:260)
        at play.api.libs.ws.ahc.AhcWSRequest.get(AhcWSRequest.scala:246)
        ... own code
        at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:430)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)

but if Kamon is not running, it works fine. Seems to be a bug in PlayClientInstrumentation to me. Does Kamon expect the URL to already be encoded? WSClient seems to work fine with non-encoded URLs.
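The failure is reproducible with the JDK alone: java.net.URI's single-string constructor (the one in the stack trace) requires an already percent-encoded URL, while the multi-argument constructors percent-encode illegal characters for you. A minimal sketch, pure JDK with no Kamon or Play involved:

```scala
import java.net.{URI, URISyntaxException}

object UriSpaces {
  // The multi-argument constructor percent-encodes illegal path characters.
  def encoded(host: String, rawPath: String): String =
    new URI("http", host, rawPath, null).toASCIIString

  def main(args: Array[String]): Unit = {
    // Mirrors the stack trace above: raw spaces are rejected when parsing
    // an already-assembled URL string.
    try {
      new URI("http://localhost/some url with spaces")
      println("parsed")
    } catch {
      case _: URISyntaxException => println("rejected")
    }
    println(encoded("localhost", "/some url with spaces"))
    // prints "rejected", then http://localhost/some%20url%20with%20spaces
  }
}
```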

4 replies
Cédric Gourlay

Hey kamon people,
Thanks for this lib :D
I just tried the lib at 2.1.3 and found some strange behaviour when overriding configuration in application.conf; in particular, the modules are not changed (reconfigured).
I think the init-with-config method is not correct, let me know if I understood correctly:

  • loadModules will register the reconfigure hook, so it needs to be set up before
  diff --git a/core/kamon-core/src/main/scala/kamon/Init.scala b/core/kamon-core/src/main/scala/kamon/Init.scala
index 75b4f224..0ac29bd1 100644
--- a/core/kamon-core/src/main/scala/kamon/Init.scala
+++ b/core/kamon-core/src/main/scala/kamon/Init.scala
@@ -41,8 +41,8 @@ trait Init { self: ModuleLoading with Configuration with CurrentStatus =>
   def init(config: Config): Unit = {
-    self.reconfigure(config)
+    self.reconfigure(config)

did I miss something? is it something known?

Cédric Gourlay
ok so it doesn't work.. no idea why my module is not reloaded, maybe I missed something
13 replies
Martin Vanek
Good evening gentlemen,
I have to upgrade one project from Kamon 0.5.2 (really) to Kamon 1.1, and later maybe to 2.x (the project is currently stuck on Akka 2.4.2 with spray-can).
We also use https://github.com/kamon-io/kamon-jmx to import some metrics from JMX and then export them together with Kamon's native metrics. Trouble is, kamon-jmx only exists up to version 0.6.7 and is not compatible with 1.0 and later. Is there a way to do what kamon-jmx does in version 1.0 or later? Thanks for any hints
2 replies
Zvi Mints
Hey, I'm getting [kamon-akka.actor.default-dispatcher-2] [akka://kamon/user/metrics] null
with the following configuration. Does anyone know why?
kamon {
  datadog {
    hostname = datadog
    port = 8125
    application-name = ${?app.name}
  }
  metric {
    tick-interval = 1 seconds
    track-unmatched-entities = yes
  }
}
Daniel Leon
can someone take a look over kamon-io/Kamon#815, please?
@ivantopo , I implemented the solution I suggested to have access to the internal Kafka metrics by implementing the MetricsReporter listener from kafka-client.
PS: is there a repository for the Kamon documentation, so I can open a PR there as well with the proposed addition to kamon-kafka?
3 replies
Tim Spence

We’re running apps with kamon in Docker. If we base our apps on openjdk:8-jre-slim then we get the following errors:

java.lang.reflect.InvocationTargetException: null
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at kamon.Init.attachInstrumentation(Init.scala:57)
    at kamon.Init.attachInstrumentation$(Init.scala:52)
    at kamon.Kamon$.attachInstrumentation(Kamon.scala:19)
    at kamon.Init.init(Init.scala:43)
    at kamon.Init.init$(Init.scala:42)
    at kamon.Kamon$.init(Kamon.scala:19)
    at com.itv.vindler.Main$.<clinit>(Main.scala:17)
    at com.itv.vindler.Main.main(Main.scala)
Caused by: java.lang.IllegalStateException: No compatible attachment provider is available
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.install(ByteBuddyAgent.java:416)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:248)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:223)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:210)
    at kamon.bundle.Bundle$.$anonfun$attach$3(Bundle.scala:34)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at kamon.bundle.Bundle$.withInstrumentationClassLoader(Bundle.scala:88)
    at kamon.bundle.Bundle$.attach(Bundle.scala:34)
    at kamon.bundle.Bundle.attach(Bundle.scala)

If we base it on openjdk:8-jdk-slim instead then it's fine. Does anyone have any suggestions? I didn't think that the actual runtime would differ between the JRE and the JDK; the JRE just shouldn't have javac etc. on the path.

Ivan Topolnjak
hey @TimWSpence
Tim Spence
Hey @ivantopo :)
Ivan Topolnjak
having a JDK is a requirement only for attaching the agent in runtime, which is exactly what happens when you have the Kamon Bundle and call Kamon.init()
Tim Spence
Thanks. Why is a JDK necessary? And is there a way to do things at compile time instead?
(may be a stupid question)
Ivan Topolnjak
I don't know the details of why the specific parts that we need for that are not shipped in the JRE, but they are not! Those attachment providers are only available on the JDK
not a stupid question!
so, if you want to use the JRE instead of a JDK, then add the -javaagent:/path/to/kanela.jar startup parameter and it should work. If you are using SBT, I recommend the sbt-javaagent plugin. There is a bit of extra info here: https://kamon.io/docs/latest/guides/installation/setting-up-the-agent/
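For the SBT route, the wiring looks roughly like this (the versions shown are illustrative; check the guide above for current ones):

```scala
// project/plugins.sbt
addSbtPlugin("com.lightbend.sbt" % "sbt-javaagent" % "0.1.6")

// build.sbt -- the plugin then passes -javaagent at run/start time,
// so no runtime attach (and no JDK) is needed
enablePlugins(JavaAgent)
javaAgents += "io.kamon" % "kanela-agent" % "1.0.6"
```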
Tim Spence
haha thanks. AFAIK we only use it for metrics, rather than any automatic instrumentation. Is there a way we can initialize it that avoids needing the JDK? I'd prefer not to put that in production if possible
oh :+1:
Ivan Topolnjak
regarding doing all the instrumentation in compile time, I know there is a lot of work going on to support it in ByteBuddy, which is what we use under the hood for instrumentation
Tim Spence
oh nice. Yeah, that would be awesome
Ivan Topolnjak
I guess the GraalVM trend is bringing us there!
I very much would like to switch everything to build time, but that will take some extra time.. we will see!
Tim Spence
Good luck! :)
Tim Spence
btw @ivantopo I discovered a bit of the answer as to why the JDK was necessary. We were using kamon-bundle, which autoloads the Kanela agent, but that requires the JVM attach API, which is only shipped with the JDK and not the JRE. Which is why (as you said) explicitly enabling the agent using -javaagent allows you to use the JRE instead :)
hello all, we've recently updated our stack from Kamon 0.6 to 2.0 (big jump, I know). We use the statsd reporter to ship metrics to an instance of statsd, which forwards them to graphite. We've verified with tcpdump that the stats are being sent from the box, and we've verified that the statsd box is correctly receiving those metrics; they look something like metric.path=87.0|ms. For some reason, ever since we did the update, even though the metrics still appear in tcpdump, around 90% of them are getting lost between statsd and graphite. Given that the only thing we changed that seems to impact this is the Kamon version, we're somewhat stumped. Does anyone have any ideas? thanks in advance
i've tried to grab the udp message that tcpdump is reporting that kamon is sending externally and then i've sent a udp message myself to the same destination with the same content, doing that makes it appear in graphite. i don't understand why the generated packet from kamon-statsd doesn't seem to get picked up
Ivan Topolnjak
hey @decyg!
do you maybe have the tcpdumps of both messages?
and, do you see anything interesting in the drops column of cat /proc/net/udp?
the drops column is "0" for the three entries listed. I do have the dumps, but unfortunately I can't share them as-is due to them containing some internal names/info. Generally, each packet is 900-1000 bytes, starts with the usual headers and then contains one or more lines of Kamon host/JDBC generated stats with our histograms interspersed within. I copied the body of one of the real generated tcpdumps into a file, sent it using cat test.dump > /dev/udp/our.ip.goes.here/8125, and noticed that it was successfully received and processed
thanks by the way, i really appreciate the assistance
Ivan Topolnjak
never mind, we are happy to help!
these UDP issues are also happening with the Datadog agent, and I have been trying to figure out what the issue is