Jason Pickens
@steinybot
Is that in the docs somewhere?
1 reply
Michael Morris
@micmorris
Hey all, I'm trying to get a simple Kamon setup pushing Zipkin traces on Akka-Http, but I'm running into this error: java.lang.ClassCastException: class akka.dispatch.Envelope cannot be cast to class kamon.instrumentation.context.HasContext
I've thrown so many configs at this, I'm not sure where to start
I've made sure that Kamon is the first thing loaded as far as I can see
18 replies
Jason Pickens
@steinybot
Have you tried adding the agent manually? For a while I thought I was calling Kamon.init early enough but that can be easier said than done. You could put a breakpoint in the ActorSystem initialisation somewhere and in the Kamon initialisation to double check which is hit first.
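A minimal sketch of the ordering Jason describes, assuming a plain entry point you control (the object and actor-system names are hypothetical):

import akka.actor.ActorSystem
import kamon.Kamon

object Main extends App {
  // Initialize Kamon first; with kamon-bundle this also attaches the Kanela agent,
  // so the instrumentation is in place before Akka classes such as
  // akka.dispatch.Envelope are loaded by application code.
  Kamon.init()

  // Create the actor system only after Kamon.init() has returned.
  val system = ActorSystem("my-app")
}

Adding the agent "manually" usually means passing -javaagent:/path/to/kanela-agent.jar to the JVM (or using the sbt-kanela-runner / JavaAgent plugins mentioned elsewhere in this channel) instead of relying on runtime attachment; either way, the goal is the same ordering.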
Michael Morris
@micmorris
Added some details into the reply thread above :)
I might try breakpointing and seeing if they do get called in order, sure. What do you mean by "manually" adding the agent?
Yaroslav Derman
@yarosman
@ivantopo @SimunKaracic Does Kamon require additional configuration for CompletableFuture? It seems to me that it doesn't work correctly with caffeine-async-cache.
3 replies
alexander-branevskiy
@alexander-branevskiy
Probably it's related to this one: kamon-io/Kamon#829
10 replies
Jason Pickens
@steinybot
In local development I am using Jaeger. Traces don’t always appear. I’m not sure if they never do or they just show up late. How can I figure out what the problem might be?
Jason Pickens
@steinybot
Oh I see in the thread above haha. "They're getting sampled, which means that Kamon is deciding whether to trace that operation or not. If it actually traced all your requests, there would be a significant performance penalty. It always trips people up when they try Kamon for the first time." Yep, that tripped me up.
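For local development, one way to take sampling out of the picture is to force the sampler to keep everything. A sketch, assuming you bootstrap Kamon yourself ("always" is one of the built-in sampler names, alongside "never", "random" and "adaptive"; the same setting can equally go in application.conf):

import com.typesafe.config.ConfigFactory
import kamon.Kamon

object LocalDevKamon {
  def start(): Unit = {
    // Sample every trace; fine locally, too expensive under real production traffic.
    val devConfig = ConfigFactory
      .parseString("""kamon.trace.sampler = "always" """)
      .withFallback(ConfigFactory.load())

    Kamon.init(devConfig)
  }
}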
Jason Pickens
@steinybot
It’s strange how it always seems to be the first few traces upon startup that are missing. I would have expected the adaptive sampler to keep these?
2 replies
Michael Morris
@micmorris

Hey all, I'm trying to filter out some health-checks with trace groups, but everything is still getting sent regardless :/

trace {
  sampler = ${?TRACE_SAMPLING}
  random-sampler.probability = ${?TRACE_SAMPLING_RANDOM_PROBABILITY}
  adaptive-sampler {
    groups {
      health-checks {
        operations = ["GET /is_initialized"]
        rules {
          sample = never
        }
      }
      metric-checks {
        operations = ["GET /prometheus"]
        rules {
          sample = never
        }
      }
      my-endpoint {
        operations = ["GET /my-endpoint", "POST /my-endpoint"]
        rules {
          sample = always
        }
      }
    }
  }
}

TRACE_SAMPLING = adaptive

I can't find the logic where the operations are parsed out and related to Akka HTTP endpoints; I bet I could find something if I knew where that lives!
Michael Morris
@micmorris
Found it, the docs were messing with me again...
The operations never have the verb in front: it's not operations = ["GET \/status"], it's just operations = ["\/status"].
2 replies
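Following that finding, the health-check groups from the snippet above would use verb-less operation names. A sketch, assuming the block ultimately lives under kamon.trace and applying it programmatically (putting the same HOCON in application.conf is equivalent):

import com.typesafe.config.ConfigFactory
import kamon.Kamon

object HealthCheckSampling {
  def install(): Unit = {
    // Operation names here are the verb-less path templates, matching what the
    // HTTP server instrumentation reports as the operation name.
    val groups = ConfigFactory.parseString("""
      kamon.trace.adaptive-sampler.groups {
        health-checks {
          operations = ["/is_initialized"]
          rules { sample = never }
        }
        metric-checks {
          operations = ["/prometheus"]
          rules { sample = never }
        }
      }
    """)
    Kamon.reconfigure(groups.withFallback(Kamon.config()))
  }
}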
Jason Pickens
@steinybot
What is the best way to find out why a thread has a context which I don’t think it should?
4 replies
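One low-tech way to investigate (a sketch using Kamon's public accessors; the logger name is hypothetical): log the current context and span wherever the unexpected context shows up, and compare across threads.

import kamon.Kamon
import org.slf4j.LoggerFactory

object ContextDebug {
  private val log = LoggerFactory.getLogger("context-debug")

  // Call this from the code running on the suspicious thread to see which
  // context and span are current there.
  def dumpCurrentContext(): Unit = {
    val ctx  = Kamon.currentContext()
    val span = Kamon.currentSpan()
    log.info(s"thread=${Thread.currentThread().getName} context=$ctx operation=${span.operationName()}")
  }
}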
David Knapp
@Falmarri
For the prometheus-reporter, the default configured
information-buckets = [ 512, 1024, 2048, 4096, 16384, 65536, 524288, 1048576 ] seems really small. The top bucket is only 1 megabyte, so basically all jvm.memory.* histograms get set to +Inf. Is there a reason for these buckets to be set like this? Should the reporter include custom buckets for jvm.* metrics?
2 replies
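A sketch of overriding those defaults, assuming the reporter reads them from kamon.prometheus.buckets as in its bundled reference configuration (the boundary values below are hypothetical, in bytes):

import com.typesafe.config.ConfigFactory
import kamon.Kamon

object PrometheusBuckets {
  def widen(): Unit = {
    // Wider information buckets so jvm.memory.* samples don't all land in +Inf.
    val buckets = ConfigFactory.parseString("""
      kamon.prometheus.buckets.information-buckets = [
        1048576, 16777216, 134217728, 536870912, 1073741824, 4294967296
      ]
    """)
    Kamon.reconfigure(buckets.withFallback(Kamon.config()))
  }
}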
alexander-branevskiy
@alexander-branevskiy
Hello guys! Does Akka instrumentation support actor Stash?
3 replies
Rajat Khandelwal
@prongs

kamon-io/sbt-kanela-runner#18

Can somebody take this up? Due to the Bintray sunset, our project has started to fail to build.

Ben Rice
@Rendrik
Hi, I've added Kamon to our project, and things are going smoothly. However, we have a couple of unruly, noisy processes. One, which is out of my control, makes JDBC calls quite often that I'd like to just ignore.
I've been following the filtering docs here: https://kamon.io/docs/latest/core/utilities/. However, I'm unclear on exactly what string I'm filtering on. Is it possible to be granular enough to filter out spans to a specific DB, or perhaps ones containing an SQL match?
Ben Rice
@Rendrik
I'm looking through the adaptive-sampler as well. If I interpret the operations there to match what I see in APM as the operation name, e.g. '/test', then it may not work for me, as the JDBC operation is simply 'update' or 'query'.
I'm not sure at this point that I can selectively narrow down JDBC sampling.
2 replies
alexander-branevskiy
@alexander-branevskiy
Hello guys! We are trying to integrate Kamon metrics based on timers and have run into problems. Right now we are observing significant performance degradation during load testing. Can you recommend which parameters we could tune to resolve this?
4 replies

We tried tuning these settings:

timer {
  auto-update-interval = 10 seconds
  lowest-discernible-value = 1
  highest-trackable-value = 3600000000000
  significant-value-digits = 2
}

alexander-branevskiy
@alexander-branevskiy
The overall picture got better, but it's still not sufficient for us.
Ben Rice
@Rendrik

I have a standard Play application, and the following seems to have no effect at all (even when I set sampler = never):

kamon.metric.trace {
  sampler = "adaptive"
  adaptive-sampler {
    groups {
      aws-checks {
        operations = [
          "\/api\/test",
          "\/test"
        ]
        rules {
          sample = never
        }
      }
    }
  }
}

Everything goes into APM, and I can't seem to filter things out. The config is certainly being read, at least by APM to fetch the API key. This is inside my Play config, conf/application.conf, and I'm using the standard runner plugin addSbtPlugin("io.kamon" % "sbt-kanela-runner-play-2.8" % "2.0.10") and the bundle dependencies. Any clues as to why these wouldn't take effect?

"io.kamon" %% "kamon-bundle" % "2.1.18",
 "io.kamon" %% "kamon-apm-reporter" % "2.1.18",
2 replies
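One thing worth double-checking, based on Kamon's reference configuration: the sampler settings live under kamon.trace, not kamon.metric.trace, so a block nested under kamon.metric would be silently ignored. A sketch of the same group at the expected path (operations taken from the snippet above; the same HOCON can go straight into application.conf):

import com.typesafe.config.ConfigFactory
import kamon.Kamon

object AwsChecksSampling {
  def install(): Unit = {
    val sampling = ConfigFactory.parseString("""
      kamon.trace {
        sampler = "adaptive"
        adaptive-sampler.groups.aws-checks {
          operations = ["\/api\/test", "\/test"]
          rules { sample = never }
        }
      }
    """)
    Kamon.reconfigure(sampling.withFallback(Kamon.config()))
  }
}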
Pooriya-Shokri
@Pooriya-Shokri
Hi all,
I've enabled Zipkin tracing with the code snippet below in one of my actors.
The actor receives messages from an Akka Streams socket, so I had to create the context and span myself.
While tracing looks fine in development mode, weird traces show up in production. As you can see in the following pictures, there are traces with hundreds of spans, while my app only consists of a few simple actors with no circular message passing. I guess it's a problem with trace-id generation, but I have no evidence of that. Any help appreciated.
1 reply
Attachments: Screenshot from 2021-05-23 08-54-59.png, MicrosoftTeams-image (1).png, MicrosoftTeams-image.png
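Hard to say without the snippet, but one pattern that produces ever-growing traces is starting each message's span as a child of whatever context happens to be current on the calling thread, so successive messages keep chaining onto the same trace. A sketch of isolating each message using Kamon's public context API (the operation name is hypothetical):

import kamon.Kamon
import kamon.context.Context

object SocketMessageTracing {
  // Handle one incoming message with its own root span, detached from whatever
  // context happens to be left on the calling thread.
  def traced[T](handle: => T): T =
    Kamon.runWithContext(Context.Empty) {
      val span = Kamon.spanBuilder("socket-message").start()
      // The second argument asks Kamon to finish the span when the block completes.
      Kamon.runWithSpan(span, true)(handle)
    }
}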
alexander-branevskiy
@alexander-branevskiy
Any updates here? kamon-io/Kamon#829
@SimunKaracic, you said that someone was working on it.
3 replies
Srepfler Srdan
@schrepfler
I'm getting this error on Kanela startup. Do we need to add some dependencies next to Kanela?
Please remove -javaagent from your startup arguments and contact Kanela support.: java.lang.NoClassDefFoundError: redis/clients/jedis/commands/ProtocolCommand
3 replies
schrepfler
@schrepfler:matrix.org
[m]
No worries! Is setting the API key mandatory? The docs all seem to have it, and I'd be happy with status-page + Prometheus + Zipkin/Jaeger.
9 replies
Diego Parra
@dpsoft
Hi all, we just published Kanela 1.0.10 with support for Java 16 and other goodies: https://github.com/kamon-io/kanela/releases/tag/v1.0.10
5 replies
jamespass
@jamespass
Hello all, I am currently removing instruments from Kamon using .remove(), but after some time I am still seeing the metric on my application's metrics endpoint and subsequently in Prometheus. Is there a way to have the metric removed from the metrics endpoint instantly? Thanks
1 reply
Dinesh Narayanan
@ndchandar

Hello,
I am having context propagation issues when switching between Cats Effect IO and Scala Futures with the newer version (2.2.0). I don't see issues with Cats Effect IO plus the default Scala global execution context.

When using Akka dispatchers I seem to be losing the context. I'm sharing a minimal example: https://gist.github.com/ndchandar/0c54f348a72308d3abb1741f311c650c
I'd appreciate your help on this.

1 reply
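While the instrumentation side gets sorted out, a manual workaround is to capture the context on the calling side and restore it inside the body that runs on the Akka dispatcher. A sketch (the helper name and body are placeholders):

import scala.concurrent.{ExecutionContext, Future}
import kamon.Kamon

object ContextCapture {
  // Capture the current Kamon context here, and re-install it on whichever
  // thread the dispatcher runs the body on.
  def futureWithContext[T](body: => T)(implicit ec: ExecutionContext): Future[T] = {
    val ctx = Kamon.currentContext()
    Future {
      Kamon.runWithContext(ctx)(body)
    }
  }
}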
ramyareddy
@ramyareddy:matrix.org
[m]
Hello everyone. I'm experimenting with Kamon 2.0, akka-http/akka, and the Prometheus integration. A standalone Prometheus instance scraping the metrics endpoint shows the target as down due to this error: expected equal, got "INVALID". I'm not sure why I'm getting it.
1 reply
Dominik Guggemos
@dguggemos
Hi, I'm trying out Kamon traces in combination with W3C context propagation. In my very basic example (create a span, propagate it via the W3C context and recreate it, then create a child span from it), I'm losing the association to the parent span, because when the W3C context is written, the id of the parent span is used instead of the id of the current span itself (see https://github.com/kamon-io/Kamon/blob/master/core/kamon-core/src/main/scala/kamon/trace/SpanPropagation.scala#L110). Is this correct and am I misunderstanding the concept here?
17 replies
Krisztian Lachata
@lachatak
Hi, good morning. I'm trying to use kamon-datadog in one of our services in k8s. I configured it to use the agent module, with the tracer pointing to our Datadog agent running on all nodes. I can see in the logs that the Kamon Datadog modules are started, but nothing is actually sent to the target. I tried adding a fake endpoint just for logging, to see what is sent, but no data is sent at all. Can you help me understand what the problem might be? There are no errors and no logs, even at TRACE level. Thank you.
68 replies
imRentable
@imRentable

Hi, I recently started using kamon-prometheus and noticed that a counter metric always yielded 0 when queried via the increase or rate function of PromQL. The reason was that the corresponding Kamon counter was initialised just before being incremented, so no initial counter value of 0 had been exported. I did some research and stumbled across this part of the Prometheus documentation: https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics
It recommends initialising all metrics before using them. I'd like to do this, but it seems very tedious/unrealistic to do it manually by calling every metric with every possible label combination at the start of my application. So I wonder: is there some utility or configuration for kamon-prometheus that initializes all the metrics (or rather series) automatically, so that initial values are exported?

Thx in advance!

5 replies
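I'm not aware of a built-in option that does this, so a manual pre-registration pass at startup is the usual workaround: materialise each counter with every known tag combination once, so a 0 value is exported before the first real increment. A sketch (metric and tag names are hypothetical):

import kamon.Kamon

object MetricWarmup {
  // Touch each series once so the Prometheus endpoint exports it from application
  // start, which keeps increase()/rate() working on the first real increment.
  def preRegister(): Unit = {
    val statuses = Seq("ok", "failed")
    statuses.foreach { status =>
      Kamon.counter("orders.processed").withTag("status", status)
    }
  }
}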
schrepfler
@schrepfler:matrix.org
[m]
I've noticed this exception on application start, when Kamon is instrumenting the Kafka consumer. Is this relevant/critical/known?
competitions-service-54cf8bf698-kb2pv competitions-service [application-akka.kafka.default-dispatcher-19] ERROR 2021-06-22 23:55:05  Logger : Error => org.apache.kafka.clients.consumer.KafkaConsumer with message Cannot locate field named groupId for class org.apache.kafka.clients.consumer.KafkaConsumer. Class loader: jdk.internal.loader.ClassLoaders$AppClassLoader@9e89d68: java.lang.IllegalStateException: Cannot locate field named groupId for class org.apache.kafka.clients.consumer.KafkaConsumer
5 replies
schrepfler
@schrepfler:matrix.org
[m]
When using Kamon with Lagom, since we don't control the topics directly, will Kamon know how to add the metadata on the topic?
1 reply
Igmar Palsenberg
@igmar
Can I somehow find out why Kamon doesn't export certain metrics?
1 reply
I know the timer gets started/stopped, but there's no export. Or sometimes there is, sometimes there isn't.
Ben Iofel
@benwaffle
Anybody here have experience using the Datadog Java agent? Seems like we're being forced to switch to it from Kamon because Kamon's metrics count as custom (paid/limited) but Datadog's metrics count as built-in (free) for the same data (e.g. JVM GC count).
4 replies
Nitay Kufert
@nitayk
Hey, trying to upgrade Kamon to 2.2.1 and getting this on services that are trying to connect to MySQL:
class com.mysql.jdbc.StatementImpl cannot be cast to class kamon.instrumentation.jdbc.HasDatabaseTags (com.mysql.jdbc.StatementImpl and kamon.instrumentation.jdbc.HasDatabaseTags are in unnamed module of loader 'app')
19 replies
Pankaj
@pankajb23

Hey guys,
We tried kamon-bundle 2.2.0 with a Scala/Guice/Kafka application, with tracing properly enabled in Logback, and also included the JavaAgent plugin (.enablePlugins(PlayScala, JavaAgent, JavaAppPackaging)) in build.sbt,
but our traces/spans sporadically appear and disappear for the application.

[warn][2021-07-01_14:04:07.083] [undefined|undefined] o.a.k.c.NetworkClient

Any pointers on what we might be missing here?

5 replies
shataya
@shataya
Hi, is it possible to exclude certain URLs from the Akka HTTP/Play tracing? We are using Akka Cluster Bootstrap and there are many, many traces for "/bootstrap/seed-nodes".
1 reply
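One approach, reusing the adaptive-sampler groups discussed earlier in this channel (a sketch; it assumes the adaptive sampler is enabled and that the operation name matches the verb-less path, as Michael found above):

import com.typesafe.config.ConfigFactory
import kamon.Kamon

object BootstrapNoise {
  def muteSeedNodeTraces(): Unit = {
    val config = ConfigFactory.parseString("""
      kamon.trace {
        sampler = "adaptive"
        adaptive-sampler.groups.cluster-bootstrap {
          operations = ["/bootstrap/seed-nodes"]
          rules { sample = never }
        }
      }
    """)
    Kamon.reconfigure(config.withFallback(Kamon.config()))
  }
}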