ramyareddy
@ramyareddy:matrix.org
[m]
1 reply
Hello everyone. I'm experimenting with Kamon 2.0, akka-http/akka, and integration with Prometheus. A standalone Prometheus instance scraping the metrics endpoint shows the target as down due to this error: expected equal, got "INVALID". I'm not sure why I'm getting it.
Dominik Guggemos
@dguggemos
Hi, I'm trying out Kamon traces in combination with the W3C context propagation. In my very basic example (create a span, propagate it via W3C context and recreate it, create a child span from it) I'm losing the association to the parent span, because when the W3C context is written, the id of the parent span is used instead of the id of the current span itself (see https://github.com/kamon-io/Kamon/blob/master/core/kamon-core/src/main/scala/kamon/trace/SpanPropagation.scala#L110). Is this correct and am I misunderstanding the concept here?
17 replies
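A minimal sketch of round-tripping a span through Kamon's W3C propagation, assuming the default HTTP channel is configured for W3C (kamon.propagation.http.default.entries.incoming.span = "w3c", and the same for outgoing); operation names are illustrative:

import kamon.Kamon
import kamon.context.{Context, HttpPropagation}
import kamon.trace.Span
import scala.collection.mutable

object W3CPropagationSketch extends App {
  Kamon.init()

  val parent  = Kamon.spanBuilder("parent-operation").start()
  val context = Context.of(Span.Key, parent)

  // Write the context into a header map, as an instrumented HTTP client would.
  val headers = mutable.Map.empty[String, String]
  Kamon.defaultHttpPropagation().write(context, new HttpPropagation.HeaderWriter {
    def write(header: String, value: String): Unit = headers += (header -> value)
  })

  // Read it back on the "server side" and build a child span from the result.
  val incoming = Kamon.defaultHttpPropagation().read(new HttpPropagation.HeaderReader {
    def read(header: String): Option[String] = headers.get(header)
    def readAll(): Map[String, String] = headers.toMap
  })
  val child = Kamon.spanBuilder("child-operation").asChildOf(incoming.get(Span.Key)).start()

  child.finish()
  parent.finish()
  Kamon.stopModules()
}

Inspecting headers("traceparent") after the write should show whether the parent-id field carries the current span's id or, as described above, its parent's.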
Krisztian Lachata
@lachatak
Hi, good morning. I'm trying to use kamon-datadog in one of our services in k8s. I configured it to use the agent module and tracer, pointing to our Datadog agent running on all nodes. I can see in the logs that the Kamon Datadog modules are started, however nothing is actually sent to the target. I tried adding a fake endpoint just for logging, to see what is sent, but no data is sent at all. Can you help me understand what the problem might be? No errors, no logs even at TRACE level. Thank you
68 replies
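For reference, a sketch of the agent-mode settings involved, with key names as I recall them from the kamon-datadog reference configuration (worth double-checking against your module version); DD_AGENT_HOST is the usual Datadog env var injected per node in k8s:

kamon.datadog {
  agent {
    # DogStatsD endpoint of the node-local Datadog agent
    hostname = ${?DD_AGENT_HOST}
    port = 8125
  }
}

The Kamon status page module can also confirm whether the Datadog reporters are actually registered and running.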
imRentable
@imRentable

Hi, I recently started using kamon-prometheus and noticed that a counter metric always yielded 0 when queried via the increase or rate function of PromQL. The reason was that the corresponding Kamon counter had only been initialised right before incrementing it, so no initial counter value of 0 was ever exported. I did some research and stumbled across this part of the Prometheus documentation: https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics
It recommends initialising all metrics before using them. I'd like to do this, but it seems very tedious/unrealistic to do it manually by calling every metric with every possible label combination at the start of my application. So I wonder: is there some utility or configuration for kamon-prometheus that initializes all the metrics (or rather series) automatically, so that initial values are exported?

Thx in advance!

5 replies
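A sketch of one way to pre-register the series at startup, assuming the label value sets are finite and enumerable; the metric and tag names here are hypothetical. Creating the tagged instrument and incrementing it by zero should be enough for an initial 0 to be exported:

import kamon.Kamon
import kamon.tag.TagSet

object MetricBootstrap {
  // Hypothetical label values; replace with the real, finite sets.
  private val statuses  = Seq("success", "failure")
  private val endpoints = Seq("/users", "/orders")

  def preRegister(): Unit =
    for (status <- statuses; endpoint <- endpoints)
      Kamon.counter("http.requests")
        .withTags(TagSet.from(Map("status" -> status, "endpoint" -> endpoint)))
        .increment(0L)
}

As far as I know there is no built-in kamon-prometheus switch for this, so the enumeration has to live in application code.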
schrepfler
@schrepfler:matrix.org
[m]
I've noticed this exception on application start when Kamon is instrumenting the Kafka consumer. Is this relevant/critical/known?
competitions-service-54cf8bf698-kb2pv competitions-service [application-akka.kafka.default-dispatcher-19] ERROR 2021-06-22 23:55:05  Logger : Error => org.apache.kafka.clients.consumer.KafkaConsumer with message Cannot locate field named groupId for class org.apache.kafka.clients.consumer.KafkaConsumer. Class loader: jdk.internal.loader.ClassLoaders$AppClassLoader@9e89d68: java.lang.IllegalStateException: Cannot locate field named groupId for class org.apache.kafka.clients.consumer.KafkaConsumer
5 replies
schrepfler
@schrepfler:matrix.org
[m]
When using Kamon with Lagom, where we don't control the topics directly, will Kamon know how to add the metadata on the topic?
1 reply
Igmar Palsenberg
@igmar
Can I somehow find out why Kamon doesn't export certain metrics?
1 reply
I know the timer gets started/stopped, but nothing is exported. Or sometimes it is, sometimes it isn't.
Ben Iofel
@benwaffle
Anybody here have experience using the Datadog Java agent? Seems like we're being forced to switch to it from Kamon because Kamon's metrics count as custom (paid/limited) but Datadog's metrics count as built-in (free) for the same data (e.g. JVM GC count).
4 replies
Nitay Kufert
@nitayk
Hey, trying to upgrade Kamon to 2.2.1 and getting this on services that connect to MySQL:
class com.mysql.jdbc.StatementImpl cannot be cast to class kamon.instrumentation.jdbc.HasDatabaseTags (com.mysql.jdbc.StatementImpl and kamon.instrumentation.jdbc.HasDatabaseTags are in unnamed module of loader 'app')
19 replies
Pankaj
@pankajb23

Hey guys,
we tried kamon-bundle 2.2.0 with a Scala/Guice/Kafka application, with proper tracing enabled in logback, and we also included the Java agent, .enablePlugins(PlayScala, JavaAgent, JavaAppPackaging), in build.sbt,
but our traces/spans sporadically appear/disappear for the application.

[warn][2021-07-01_14:04:07.083] [undefined|undefined] o.a.k.c.NetworkClient

Any pointers on what we might be missing here?

5 replies
shataya
@shataya
Hi, is it possible to exclude certain URLs from the Akka HTTP/Play tracing? We are using Akka cluster bootstrap and there are many, many traces with "/bootstrap/seed-nodes".
1 reply
sfsmicm
@sfsmicm
Hi all! I'm using HikariCP, and in Kamon APM I can see thousands of calls from HikariCP to two operations, "isValid" and "execute"; I assume it's the validation of the JDBC connection prior to a lease. I cannot find a hint in the kamon-jdbc docs about filtering these out. Can somebody drop me a hint or point me to documentation about this?
16 replies
Aditya Maheshwari
@adityamundra
Hi all, I am using Kamon for tracing my akka-http requests. It passes the traceid and works fine when I use the connection pool API, Http().singleRequest(), but it's not passing the traceid for the connection-level streaming API, Http(system).outgoingConnection(). Any idea how to pass traceids with Kamon for the connection-level streaming API?
5 replies
sfsmicm
@sfsmicm
How do you handle instance identification (kamon.environment.service) when using an Akka cluster with e.g. 3 nodes? Do you use an index, or just the same service name for every instance?
5 replies
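One common pattern, sketched under the assumption that the standard kamon.environment settings apply: keep the same service on every node so metrics aggregate per service, and let instance (which defaults to "auto") or a tag tell the nodes apart:

kamon.environment {
  service = "orders-service"   # same logical name on all 3 nodes
  instance = ${?HOSTNAME}      # per-node identity, e.g. the pod name in k8s
  tags {
    node-role = "worker"       # illustrative extra tag
  }
}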
Dima Golomozy
@DimaGolomozy

migrating from kamon 1.0 to 2.1
what happened to the filters? I want to filter out all metrics that don't start with my prefix
in 1.0 I did it with

filter {
  includes = [
    "MyPrefix.*"
  ]
}

is this impossible in 2.1?

3 replies
joel-apollo
@joel-apollo
Anyone have pointers on where to look for Scala Futures with the wrong units when using Kamon with Play and Datadog? We're seeing a ton of errors like the one below, with durations of 3 months or so...
Failed to record value [8784530422484846] on [span.elapsed-time,{operation=aws-s3.putObject,span.kind=internal,parentOperation=/v4/apiendpoint,error=false,component=scala.future}] because the value is outside of the configured range. The recorded value was adjusted to the highest trackable value [3600000000000]. You might need to change your dynamic range configuration for this metric
joel-apollo
@joel-apollo
Never mind, I think I found it
6 replies
Dima Golomozy
@DimaGolomozy
Anyone have an idea why, when I'm using native-packager and adding Kanela in the javaAgents, it looks like all the properties in the conf file are ignored? For example, I've put
kanela.modules.akka.enabled = false
but I still see akka metrics.
3 replies
schrepfler
@schrepfler:matrix.org
[m]
does Kamon work for Spring WebFlux endpoints?
1 reply
and the same question for Reactor Kafka
is it on the roadmap/in flight?
1 reply
Ivan Topolnjak
@ivantopo
@/all hey people! I wanted to give Discord a try and created a server here: https://discord.gg/5JuYsDJ7au :smile: I'm going to be hanging out for a few weeks and see how it turns out. You are welcome to join!
Krisztian Lachata
@lachatak
Hi. I have a question. I enabled the Kamon status page, and I clearly see that there is a metric called jvm.memory.used. I use the influx and datadog reporters (via agent). The JVM metric is available in Influx and I can query it in Grafana, but when I query it in Datadog my service does not appear. Many other metrics work fine, but I do not understand what the issue is with this one. Can somebody help me figure out what is going on here? I'm trying to migrate from influx/grafana to datadog. Thank you
5 replies
Ben Fradet
@BenFradet
Hello, I'm upgrading from "kamon-play-2.6" % "1.1.3" to Kamon 2, so I have a "kamon-bundle" % "2.2.2" dependency as well as an addSbtPlugin("io.kamon" % "sbt-kanela-runner-play-2.8" % "2.0.9") sbt plugin.
However, when launching the app, I run into java.lang.NoSuchMethodError: kamon.Kamon$.withContext. Any ideas how I can fix this?
3 replies
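A hedged note rather than a confirmed fix: kamon.Kamon$.withContext is a Kamon 1.x method, so this NoSuchMethodError usually means a leftover 1.x artifact (such as the old kamon-play-2.6) is still on the classpath calling into the 2.x core. In Kamon 2.x the Play instrumentation ships inside kamon-bundle, so the dependencies would reduce to roughly:

// build.sbt: drop kamon-play-2.6 and any other 1.x Kamon artifacts entirely
libraryDependencies += "io.kamon" %% "kamon-bundle" % "2.2.2"

// project/plugins.sbt
addSbtPlugin("io.kamon" % "sbt-kanela-runner-play-2.8" % "2.0.9")

Running sbt evicted (or inspecting the dependency tree) should confirm whether a 1.x kamon-core is still being pulled in transitively.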
boriska-ta
@boriska-ta
Hi, I am new to Kamon, and I am trying to create nested scopes and/or spans. I need to timestamp and trace the execution of rather complicated requests, which are dispatched through a chain of actors. In particular, I'd like to create a nested scope before sending a message to an actor, and close the scope after the message is processed. Or, I would like to create a nested scope when spawning a child operation (which can spawn nested scopes as well). I am not very clear on what the proper way to do that is.
All examples in the Kamon documentation imply that a span finishes in the same block where it started, and there is very little information about Scope. My understanding is that while I can get currentContext and currentSpan, there is no way to get the current scope, so I'd have to store it in e.g. a DynamicVariable to close it when I need to.
Can you suggest the proper way to create nested spans and nested scopes and properly finish/close them, sometimes in a different module than the one where they started?
Another question: if I need to obtain e.g. the request id of the request which took the longest time, how do I store this request id, via a mark? And how do I see this mark in e.g. Datadog?
6 replies
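A minimal sketch of the store-now-close-later pattern, using Kamon 2.x's Kamon.storeContext, which returns a Storage.Scope that can be closed in a different module than the one that opened it; the names here are illustrative:

import kamon.Kamon
import kamon.context.{Context, Storage}
import kamon.trace.Span

object CrossModuleTracing {
  // Carry the span and its scope alongside the actor message.
  final case class InFlight(span: Span, scope: Storage.Scope)

  def beforeSend(operation: String): InFlight = {
    val span  = Kamon.spanBuilder(operation).start()
    val scope = Kamon.storeContext(Context.of(Span.Key, span))
    InFlight(span, scope)
  }

  def afterProcess(inFlight: InFlight): Unit = {
    inFlight.scope.close()   // restores the previously-current context
    inFlight.span.finish()   // ends timing for this operation
  }
}

Since the current context is thread-local, the InFlight value has to travel with the message (or in a DynamicVariable, as suggested above) rather than being looked up from Kamon.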
Linh Mai
@dl-mai

hi there

kamon.context.codecs.string-keys {
  request-id = "X-Request-ID"
}

seems to be deprecated in Kamon 2. Is there an equivalent?

4 replies
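The Kamon 2.x equivalent appears to be the HTTP propagation tag mappings; a sketch, with the exact key path worth verifying against the current reference.conf:

kamon.propagation.http.default.tags {
  mappings {
    # read/write the X-Request-ID header as the "request-id" context tag
    request-id = "X-Request-ID"
  }
}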
jinghanx
@jinghanx
Hi, new to Kamon, and I'm running into an issue with reporting traces to Datadog once in a while:
2021-07-27 13:42:37 ERROR ModuleRegistry:218 - Reporter [DatadogSpanReporter] failed to process a spans tick. java.io.EOFException: \n not found: limit=0 content=…
  at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:332) ~[okio-jvm-2.8.0.jar:?]
  at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:178) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:106) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:79) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.9.0.jar:?]
  at kamon.okhttp3.instrumentation.KamonTracingInterceptor.intercept(KamonTracingInterceptor.scala:27) ~[kamon-bundle_2.13-2.2.0.jar:2.2.0]
  at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201) ~[okhttp-4.9.0.jar:?]
  at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154) ~[okhttp-4.9.0.jar:?]
  at kamon.datadog.package$HttpClient.$anonfun$doRequest$1(package.scala:57) ~[kamon-datadog_2.13-2.2.0.jar:2.2.0]
  at scala.util.Try$.apply(Try.scala:210) ~[scala-library-2.13.3.jar:?]
  at kamon.datadog.package$HttpClient.doRequest(package.scala:57) ~[kamon-datadog_2.13-2.2.0.jar:2.2.0]
  at kamon.datadog.package$HttpClient.doMethodWithBody(package.scala:65) ~[kamon-datadog_2.13-2.2.0.jar:2.2.0]
  at kamon.datadog.package$HttpClient.doPut(package.scala:86) ~[kamon-datadog_2.13-2.2.0.jar:2.2.0]
  at kamon.datadog.package$HttpClient.doJsonPut(package.scala:96) ~[kamon-datadog_2.13-2.2.0.jar:2.2.0]
  at kamon.datadog.DatadogSpanReporter.reportSpans(DatadogSpanReporter.scala:116) ~[kamon-datadog_2.13-2.2.0.jar:2.2.0]
  at kamon.module.ModuleRegistry.$anonfun$scheduleSpansBatch$1(ModuleRegistry.scala:217) ~[kamon-core_2.13-2.2.0.jar:2.2.0]
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) ~[scala-library-2.13.3.jar:?]
  at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:671) ~[scala-library-2.13.3.jar:?]
  at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:430) [scala-library-2.13.3.jar:?]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
  at java.lang.Thread.run(Thread.java:834) [?:?]
7 replies
Shailesh Patil
@mineme0110
Hi
I am getting this error, any idea why it could be?
Exception in thread "main" java.lang.ClassCastException: class ch.qos.logback.classic.spi.LoggingEvent cannot be cast to class kamon.instrumentation.context.HasContext (ch.qos.logback.classic.spi.LoggingEvent and kamon.instrumentation.context.HasContext are in unnamed module of loader 'app')
I am using "io.kamon" %% "kamon-bundle" % "2.2.2"
I also got this error with the earlier version, 2.1.11
DanielMao
@DanielMao1
Hi, I am new to Kamon. I would like to include this project as our Akka system metric monitor. If I have several nodes, each running with the Kanela agent, what is their communication mechanism? Do they send actor messages to each other?
Bruno Figueiredo Alves
@brunofigalves
Hi all, I'm using Akka 2.6 and Kamon 2.x and I intend to gather metrics from Akka and export them to JMX, however I don't think it's possible. Could you help me with this or suggest some workarounds?
boriska-ta
@boriska-ta
Hi all, I have a question about cardinality for Kamon tags. The docs say to avoid high cardinality for "metric-related tags". Does "metric-related tag" here mean Span#tagMetrics, as opposed to Span#tag? If yes, can I store high-cardinality values in Span#tag without generating a time series for each value combination?
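For reference, a sketch of the distinction being asked about, using the Kamon 2.x Span API (requestId and endpointName are hypothetical values): Span#tag attaches data to the individual span only, while Span#tagMetrics also tags the metrics derived from the span, creating one time series per value combination, which is where the cardinality warning applies:

val span = Kamon.spanBuilder("process-request").start()
span.tag("request-id", requestId)          // span-only: high cardinality is fine here
span.tagMetrics("endpoint", endpointName)  // metric tag: keep cardinality low
span.finish()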
Giridhar Pathak
@gpathak
hey folks, I have an older application built on Play Framework 2.3. Does Kamon still support that?
Or is there an older version of it I can use?
Any direction would be helpful.
Tom Milner
@tmilner
Hey, has anyone experienced missing traces when using Scala ZIO? I am using Kamon 2.1.4 with OpenTelemetry, and the app is a bit weird, but I basically have an Akka HTTP API that runs a ZIO, which in turn calls an Akka HTTP client (I know this is a weird setup). I see the main trace, and spans for things which happen before the ZIO is run, but I do not see the traces for the client call that happened in the ZIO, and it appears the service that was called did not get the trace ID either. I will keep digging, but if anyone has seen something like this or has an idea what it could be, I am all ears.
3 replies
Ivan Topolnjak
@ivantopo

@/all hey folks, this is a reminder that we are migrating to Discord for questions and chat related to Kamon. You can join our Discord here: https://discord.gg/5JuYsDJ7au

Have a great week!

Zvi Mints
@ZviMints

I cannot find any metrics exposed via kamon-prometheus.

application.conf:

kamon.prometheus {
  include-environment-tags = true
  embedded-server {
    hostname = 0.0.0.0
    port = 9404
  }
}

implementation:

import java.util

import kamon.Kamon

class SinkConnector() extends org.apache.kafka.connect.sink.SinkConnector {
  val underlying: AerospikeSinkConnector = new AerospikeSinkConnector()
  override def start(map: util.Map[String, String]): Unit = {
    Kamon.init()
    Kamon.counter("testing-kamon").withoutTags().increment()
    try {
      underlying.start(map)
    }
    catch {
      case ex: Throwable =>
        println(s"Failure on underlying.start($map)")
        Kamon.counter("underlying-start-connector-failure").withTag("config-file",configFile).withTag("message", ex.getMessage).increment()
        throw ex
    }
    finally {
      Kamon.stopModules()
    }
  }
}

dependencies:

  "io.kamon" %% "kamon-prometheus" % "2.2.2" exclude("org.slf4j", "slf4j-api"),
  "io.kamon" %% "kamon-core" % "2.1.0" exclude("org.slf4j", "slf4j-api")

I already have a JMX Exporter which exposes Kafka metrics on 9404, and I tried to make Kamon use this port as well. When I remove the application.conf and use the default port, 9095, I cannot port-forward to that port for some reason.

Am I missing something?

Thanks!
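Two hedged observations on the snippet above, not confirmed diagnoses: the JMX exporter and Kamon's embedded server cannot both bind 9404, so Kamon needs a port of its own; and mixing kamon-core 2.1.0 with kamon-prometheus 2.2.2 is a version skew worth removing, e.g.:

  "io.kamon" %% "kamon-prometheus" % "2.2.2" exclude("org.slf4j", "slf4j-api"),
  "io.kamon" %% "kamon-core" % "2.2.2" exclude("org.slf4j", "slf4j-api")

Also note that the finally block in start() calls Kamon.stopModules() as soon as the connector starts, which by itself would stop the reporters before anything is exported.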

Zvi Mints
@ZviMints
I'm getting 2021-08-31 12:03:52,224 WARN Failed to attach the instrumentation because the Kamon Bundle is not present on the classpath (kamon.Init) [connector-thread-dashboard-connector-profile] when I'm using "io.kamon" %% "kamon-prometheus" % "2.2.2" exclude("org.slf4j", "*"). Any ideas why?
1 reply
Thomas Jaeckle
@thjaeckle

Hi. We are experiencing the following WARN message:

Failed to record value [-401488] on [span.processing-time,{operation=serialize,error=false}] because the value is outside of the configured range. The recorded value was adjusted to the highest trackable value [3600000000000]. You might need to change your dynamic range configuration for this metric

So the recorded value is negative. We use Kamon's SpanBuilder.start(Instant), but the span is later (within sub-milliseconds) finished via Span.finish() (where the underlying Clock is used to determine the nanos of the finish time).
Could it be that this mixing causes negative values to be recorded?

Thomas Jaeckle
@thjaeckle
ah, maybe instead of Instant.now() we should use Kamon.clock().instant(), which provides better precision/performance?
6 replies
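That matches the APIs mentioned above; a sketch of keeping both timestamps on the same time source:

import kamon.Kamon

val span = Kamon.spanBuilder("serialize")
  .start(Kamon.clock().instant())   // explicit start taken from Kamon's clock
// ... do the work ...
span.finish()                       // finish() reads the same Kamon clock

Mixing Instant.now() (a different clock, millisecond-ish precision) at start with Kamon's clock at finish is a plausible source of small negative durations.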
Dana Borinski
@dborinsk
Trying to add Kamon metrics to my Caffeine cache. Can someone please explain the KamonStatsCounter part, or give an example? I see this snippet in the docs, Caffeine.newBuilder().recordStats(() -> new KamonStatsCounter("cache_name")).build();, but I'm not sure I understand what needs to be passed to recordStats. I see it expects a supplier, but this example isn't working, so I'm probably missing something.
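For reference, a Scala version of the docs' Java snippet, assuming the KamonStatsCounter from the Caffeine instrumentation (import path worth double-checking): recordStats takes a Supplier[StatsCounter], which a Scala function literal can satisfy via SAM conversion:

import com.github.benmanes.caffeine.cache.Caffeine
import kamon.instrumentation.caffeine.KamonStatsCounter

val cache = Caffeine.newBuilder()
  .recordStats(() => new KamonStatsCounter("cache_name"))
  .build[String, String]()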
nikhilaroratgo
@nikhilaroratgo
Can Kamon instrument the Play HTTPS connections? We have an issue in our PROD environment where the heap memory increased after we deployed with HTTPS enabled, and Kamon is throwing java.lang.OutOfMemoryError. I suspect that it's because Kamon is not able to handle the HTTPS connections. What is your opinion?
Isaac Povey
@isaacpovey
Anyone know how to initialize Kamon in a Play application loaded by Guice? I followed the Play guide but am getting these errors when trying to load it: Caused by: java.lang.VerifyError: Expecting a stackmap frame at branch target 102. The application loader looks like this:
class CustomApplicationLoader extends GuiceApplicationLoader {
  override protected def builder(context: Context): GuiceApplicationBuilder =
    super
      .builder(context)
      .eagerlyLoaded()
}
PrashantN86
@PrashantN86

I am trying to add tracing support to a Play 2.8 application with Kamon and Jaeger. I followed the instructions here: https://kamon.io/docs/latest/reporters/jaeger/. I can see the startup logs for the Kanela agent as well as the Jaeger reporter, as follows:

[info] Running the application with the Kanela agent

 _  __                _        ______
| |/ /               | |       \ \ \ \
| ' / __ _ _ __   ___| | __ _   \ \ \ \
|  < / _` | '_ \ / _ \ |/ _` |   ) ) ) )
| . \ (_| | | | |  __/ | (_| |  / / / /
|_|\_\__,_|_| |_|\___|_|\__,_| /_/_/_/

==============================
Running with Kanela, the Kamon Instrumentation Agent :: (v1.0.8)

--- (Running the application, auto-reloading is enabled) ---

[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9001

(Server started, use Enter to stop and go back to the console...)


2021-09-23 13:29:11,210 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.i.p.GuiceModule$KamonLoader - Reconfiguring Kamon with Play's Config
2021-09-23 13:29:11,211 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.i.p.GuiceModule$KamonLoader - play.core.server.AkkaHttpServerProvider
2021-09-23 13:29:11,213 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.i.p.GuiceModule$KamonLoader - 10 seconds
2021-09-23 13:29:11,573 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.j.JaegerReporter - Started the Kamon Jaeger reporter

Jaeger is started through a Docker container with the following command:

docker run -d --name jaeger1   -e COLLECTOR_ZIPKIN_HOST_PORT=:9411   -p 5775:5775/udp   -p 6831:6831/udp   -p 6832:6832/udp   -p 5778:5778   -p 16686:16686   -p 14268:14268   -p 14250:14250   -p 9411:9411   jaegertracing/all-in-one:1.25

None of the traces are visible when I access my Play application's APIs. Is there any configuration I am missing here?
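One hedged thing to rule out first: Kamon 2.x does not sample every trace by default, so an empty Jaeger can simply mean the sampler dropped the few test requests. Forcing sampling while testing:

kamon.trace.sampler = "always"

If traces then show up, the issue is sampling rather than the reporter or the Docker setup.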

Tommaso Schiavinotto
@Teudimundo
I'm trying to use kamon-cassandra (v2.2.2, Kanela 1.0.11). I'm interested in metrics, but the only one I find available at runtime is span_processing_time_seconds. Is there something I need to configure in order to get the ones listed in the documentation page?