The service name is provided in the app configuration and was already there before the upgrade.
I couldn't find anything related to the config 'name' in the migration guide: https://kamon.io/docs/latest/guides/migration/from-1.x-to-2.0/
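(For reference: in Kamon 2.x the service name is, to my knowledge, read from `kamon.environment.service` rather than the old 1.x-style settings. A minimal sketch, assuming that key; double-check it against your Kamon version:)

```
kamon {
  environment {
    # assumed 2.x location of the service name
    service = "my-service-name"
  }
}
```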
1 reply
István Gansperger

Hi, I have a very strange issue with Play's WSClient: when .url is called with a URL that contains spaces, the Kamon-instrumented client fails with:

java.net.URISyntaxException: Illegal character in path at index 22: http://localhost/some url with spaces
        at java.base/java.net.URI$Parser.fail(URI.java:2913)
        at java.base/java.net.URI$Parser.checkChars(URI.java:3084)
        at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3166)
        at java.base/java.net.URI$Parser.parse(URI.java:3114)
        at java.base/java.net.URI.<init>(URI.java:600)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.uri$lzycompute(StandaloneAhcWSRequest.scala:61)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.uri(StandaloneAhcWSRequest.scala:54)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$2.path(PlayClientInstrumentation.scala:82)
        at kamon.instrumentation.http.OperationNameSettings.operationName(OperationNameSettings.scala:9)
        at kamon.instrumentation.http.HttpClientInstrumentation$Default.createClientSpan(HttpClientInstrumentation.scala:135)
        at kamon.instrumentation.http.HttpClientInstrumentation$Default.createHandler(HttpClientInstrumentation.scala:105)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$1.apply(PlayClientInstrumentation.scala:43)
        at kamon.instrumentation.play.WSClientUrlInterceptor$$anon$1.apply(PlayClientInstrumentation.scala:40)
        at play.api.libs.ws.ahc.StandaloneAhcWSRequest.execute(StandaloneAhcWSRequest.scala:219)
        at play.api.libs.ws.ahc.AhcWSRequest.execute(AhcWSRequest.scala:264)
        at play.api.libs.ws.ahc.AhcWSRequest.execute(AhcWSRequest.scala:260)
        at play.api.libs.ws.ahc.AhcWSRequest.get(AhcWSRequest.scala:246)
        ... own code
        at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:430)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)

but if Kamon is not running, it works fine. This seems to be a bug in PlayClientInstrumentation to me. Does Kamon expect the URL to already be encoded? WSClient itself seems to work fine with non-encoded URLs.
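As a possible workaround while this is open, the path can be percent-encoded before the URL reaches WSClient, so that java.net.URI (which the instrumentation hits internally, per the stack trace above) can parse it. A hedged sketch with a hypothetical helper, not part of Kamon or Play:

```scala
import java.net.URI

object UrlEncoding {
  // The multi-argument URI constructor quotes illegal characters (e.g. spaces)
  // in the path, unlike the single-string constructor that is failing above.
  def encode(scheme: String, host: String, rawPath: String): String =
    new URI(scheme, host, rawPath, null).toASCIIString
}
```

For example, `UrlEncoding.encode("http", "localhost", "/some url with spaces")` yields `http://localhost/some%20url%20with%20spaces`, which both WSClient and the instrumentation should accept.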

4 replies
Cédric Gourlay

Hey kamon people,
Thanks for this lib :D
I just tried the lib in 2.1.3 and found some strange behaviour when overriding configuration in application.conf; in particular, the modules are not changed (reconfigured).
I think the init method that takes a config is not correct; let me know if I understand correctly:

  • loadModules registers the reconfigure listeners, so it needs to run beforehand
  diff --git a/core/kamon-core/src/main/scala/kamon/Init.scala b/core/kamon-core/src/main/scala/kamon/Init.scala
index 75b4f224..0ac29bd1 100644
--- a/core/kamon-core/src/main/scala/kamon/Init.scala
+++ b/core/kamon-core/src/main/scala/kamon/Init.scala
@@ -41,8 +41,8 @@ trait Init { self: ModuleLoading with Configuration with CurrentStatus =>
   def init(config: Config): Unit = {
-    self.reconfigure(config)
+    self.reconfigure(config)

did I miss something? is it something known?
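The ordering concern above can be illustrated with a toy sketch (hypothetical names, not Kamon's real internals): a listener registered after reconfigure has already fired never sees the new configuration.

```scala
import scala.collection.mutable.ListBuffer

// Minimal stand-in for a config-change bus like the one modules subscribe to.
final class ConfigBus {
  private val listeners = ListBuffer.empty[String => Unit]
  def onReconfigure(listener: String => Unit): Unit = listeners += listener
  def reconfigure(config: String): Unit = listeners.foreach(_(config))
}

object InitOrderDemo {
  def run(): (List[String], List[String]) = {
    // Wrong order: reconfigure fires before the "module" subscribes.
    val lost = ListBuffer.empty[String]
    val busA = new ConfigBus
    busA.reconfigure("application.conf overrides")
    busA.onReconfigure(c => lost += c) // module loaded too late, update missed

    // Right order: subscribe first, then reconfigure.
    val seen = ListBuffer.empty[String]
    val busB = new ConfigBus
    busB.onReconfigure(c => seen += c)
    busB.reconfigure("application.conf overrides")

    (lost.toList, seen.toList)
  }
}
```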

Cédric Gourlay
ok, so it doesn't work... no idea why my module is not reloaded, maybe I'm missing something
13 replies
Martin Vanek
Good evening gentlemen,
I have to upgrade one project from kamon 0.5.2 (really) to kamon 1.1, and later maybe to 2.x (the project is currently stuck on akka 2.4.2 with spray-can).
We also use https://github.com/kamon-io/kamon-jmx to import some metrics from JMX and then export them together with Kamon's native metrics. The trouble is that kamon-jmx only exists up to version 0.6.7 and is not compatible with 1.0 and later. Is there a way to do what kamon-jmx does in version 1.0 or later? Thanks for any hints
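The JMX-reading half of what kamon-jmx did can be reproduced with the platform MBeanServer alone; a hedged sketch (feeding the value into a Kamon gauge or histogram is left out, since that API differs between 1.x and 2.x):

```scala
import java.lang.management.ManagementFactory
import javax.management.ObjectName

object JmxPoll {
  // Reads a single attribute from an MBean registered on the platform server.
  // Call this periodically and record the result with your metrics API.
  def readAttribute(objectName: String, attribute: String): Any = {
    val server = ManagementFactory.getPlatformMBeanServer
    server.getAttribute(new ObjectName(objectName), attribute)
  }
}
```

For example, `JmxPoll.readAttribute("java.lang:type=Threading", "ThreadCount")` returns the JVM's current live thread count.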
2 replies
Zvi Mints
Hey, I'm getting [kamon-akka.actor.default-dispatcher-2] [akka://kamon/user/metrics] null
with the following configuration, does anyone know why?
kamon {
  datadog {
    hostname = datadog
    port = 8125
    application-name = ${?app.name}
  }
  metric {
    tick-interval = 1 seconds
    track-unmatched-entities = yes
  }
}
Daniel Leon
can someone take a look at kamon-io/Kamon#815, please?
@ivantopo, I implemented the solution I suggested for getting access to the internal Kafka metrics, by implementing the MetricsReporter listener from kafka-client.
PS: is there a repository for the kamon documentation, so I can open a PR there as well with the proposed addition to kamon-kafka?
3 replies
Tim Spence

We’re running apps with kamon in Docker. If we base our apps on openjdk:8-jre-slim then we get the following errors:

java.lang.reflect.InvocationTargetException: null
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at kamon.Init.attachInstrumentation(Init.scala:57)
    at kamon.Init.attachInstrumentation$(Init.scala:52)
    at kamon.Kamon$.attachInstrumentation(Kamon.scala:19)
    at kamon.Init.init(Init.scala:43)
    at kamon.Init.init$(Init.scala:42)
    at kamon.Kamon$.init(Kamon.scala:19)
    at com.itv.vindler.Main$.<clinit>(Main.scala:17)
    at com.itv.vindler.Main.main(Main.scala)
Caused by: java.lang.IllegalStateException: No compatible attachment provider is available
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.install(ByteBuddyAgent.java:416)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:248)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:223)
    at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:210)
    at kamon.bundle.Bundle$.$anonfun$attach$3(Bundle.scala:34)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at kamon.bundle.Bundle$.withInstrumentationClassLoader(Bundle.scala:88)
    at kamon.bundle.Bundle$.attach(Bundle.scala:34)
    at kamon.bundle.Bundle.attach(Bundle.scala)

If we base it on openjdk:8-jdk-slim instead then it's fine. Does anyone have any suggestions? I didn't think the actual runtime would differ between the JRE and the JDK; the JRE just shouldn't have javac etc. on the path.

Ivan Topolnjak
hey @TimWSpence
Tim Spence
Hey @ivantopo :)
Ivan Topolnjak
having a JDK is a requirement only for attaching the agent at runtime, which is exactly what happens when you have the Kamon Bundle and call Kamon.init()
Tim Spence
Thanks. Why is a JDK necessary? And is there a way to do things at compile time instead?
(may be a stupid question)
Ivan Topolnjak
I don't know the details of why the specific parts that we need for that are not shipped with the JRE, but they are not! Those attachment providers are only available on the JDK
not a stupid question!
so, if you want to use the JRE instead of a JDK, then add the -javaagent:/path/to/kanela.jar startup parameter and it should work. If you are using SBT, I recommend the sbt-javaagent plugin. There is a bit of extra info here: https://kamon.io/docs/latest/guides/installation/setting-up-the-agent/
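For SBT users, the plugin setup usually looks something like the following; the plugin and agent coordinates/versions here are from memory, so double-check them against the docs linked above:

```scala
// project/plugins.sbt
addSbtPlugin("com.lightbend.sbt" % "sbt-javaagent" % "0.1.6")

// build.sbt -- attaches the Kanela agent at JVM start instead of at runtime,
// which avoids the JDK-only attach API entirely
enablePlugins(JavaAgent)
javaAgents += "io.kamon" % "kanela-agent" % "1.0.7" % "runtime"
```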
Tim Spence
haha thanks. AFAIK we only use it for metrics, rather than any automatic instrumentation. Is there a way we can initialize it that avoids needing the JDK? I'd prefer not to put that in production if possible
oh :+1:
Ivan Topolnjak
regarding doing all the instrumentation in compile time, I know there is a lot of work going on to support it in ByteBuddy, which is what we use under the hood for instrumentation
Tim Spence
oh nice. Yeah, that would be awesome
Ivan Topolnjak
I guess the GraalVM trend is bringing us there!
I very much would like to switch everything to build time, but that will take some extra time.. we will see!
Tim Spence
Good luck! :)
Tim Spence
btw @ivantopo I discovered a bit of the answer as to why the JDK was necessary. We were using kamon-bundle, which autoloads the Kanela agent, but this requires the JVM attach API, which is only shipped with the JDK and not the JRE. Which is why (as you said) explicitly enabling the agent using -javaagent allows you to use the JRE instead :)
hello all, we've recently updated our stack from kamon 0.6 to 2.0 (big jump, I know). We use the statsd reporter to ship metrics to an instance of statsd, which forwards them to graphite. We've verified with tcpdump that the stats are being sent from the box, and we've verified that the statsd box is correctly receiving those metrics; they look something like metric.path=87.0|ms. For some reason, ever since the update, even though the metrics still appear in tcpdump, around 90% of them are getting lost between statsd and graphite. Given that the only thing we changed that seems to impact this is the kamon version, we're somewhat stumped. Does anyone have any ideas? Thanks in advance
I've tried to grab the UDP message that tcpdump reports kamon sending externally, and then sent a UDP message myself to the same destination with the same content; doing that makes it appear in graphite. I don't understand why the packet generated by kamon-statsd doesn't seem to get picked up
Ivan Topolnjak
hey @decyg!
do you maybe have the tcpdumps of both messages?
and, do you see anything interesting in the drops column of cat /proc/net/udp?
the drops column is "0" for the three entries listed. I do have the dumps but unfortunately can't share them as-is, because they contain some internal names/info. Generally, each packet is 900-1000 bytes, starts with the usual headers, and then contains one or more lines of kamon host/jdbc generated stats with our histograms interspersed within. I copied the body of one of the real generated tcpdumps into a file and sent it using cat test.dump > /dev/udp/our.ip.goes.here/8125, and noticed that it was successfully received and processed
thanks by the way, I really appreciate the assistance
Ivan Topolnjak
never mind, we are happy to help!
these UDP issues are also happening with the Datadog agent and I have been trying to figure out what is the issue
for a bit more information, we're running m5.large boxes on AWS. The main thing that surprises/confuses me is that sending the packet manually seems to work fine; is there some subtle difference between the packet constructed by kamon and one just piped to /dev/udp as above?
Ivan Topolnjak
one of the hypotheses I had was that UDP packets were being dropped because of how Kamon sends them: all at once. Every minute, when the tick arrives, Kamon sends all the UDP packets in one big wave.
I see. Is there any way to space that out, or to decrease/increase the tick rate?
Ivan Topolnjak
there is no built-in way to space them out, but we could add that! I don't think spacing would change things too much, though; it might even increase the overall traffic
do you see anything that might support that hypothesis above? that UDP packets are indeed being sent, but dropped because there's too many of them at once?
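The "space them out" idea above could be tested with a small sketch like this (not a Kamon feature; just a way to replay payloads as paced datagrams instead of one burst per tick):

```scala
import java.net.{DatagramPacket, DatagramSocket, InetAddress}

object SpacedUdpSender {
  // Sends each StatsD payload as its own UDP datagram, pausing between packets.
  // Returns the number of datagrams handed to the socket.
  def send(payloads: Seq[String], host: String, port: Int, gapMillis: Long): Int = {
    val socket  = new DatagramSocket()
    val address = InetAddress.getByName(host)
    try {
      payloads.foreach { payload =>
        val bytes = payload.getBytes("UTF-8")
        socket.send(new DatagramPacket(bytes, bytes.length, address, port))
        Thread.sleep(gapMillis) // crude pacing between packets
      }
      payloads.length
    } finally socket.close()
  }
}
```

Replaying the captured tcpdump payloads through something like this, with the gap set to 0 vs. a few milliseconds, would show whether burst rate (rather than packet contents) is what statsd is choking on.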
curiously, I've enabled tcpdump on the other side (our statsd box) and tried to confirm that it's receiving the stats, and it seems to be getting the same packets that the source box is sending. For some reason, when it receives a packet that I send manually, it's fine, but a kamon-sourced packet doesn't seem to get processed properly. I can't see anything in the statsd logs that could indicate why the kamon-sourced ones are being ignored
Ivan Topolnjak
and just to be sure, all was fine with 0.6?
Ivan Topolnjak
back then we were using Akka for sending out those messages; it might have introduced a little bit of jitter in the messages because of jumping across a couple of actors before writing to the wire
I'm still somewhat convinced that it's the contents of the packet. I tried spamming the manual packet, which was about ~950 bytes, and each instance was received and processed
Ivan Topolnjak
but if it were something in the packet itself, why would it work when you send the very same packet manually?