Libor Kramoliš
@liborkramolis_twitter
Hi. Is it possible to exclude selected HTTP endpoints from producing spans? For example, I would like to hide the /ready operation because it is called on a regular basis by K8s as a readiness probe. I do not need to track that operation. Thanks.
1 reply
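One way to approach this in Kamon 2.x is the adaptive-sampler pattern another user in this room applies to /healthz: define a group for the operation and never sample it. A sketch (the group name "readiness" is arbitrary, and this assumes kamon.trace.sampler is set to "adaptive"):

```hocon
# Sketch: never sample spans for the /ready readiness-probe operation.
kamon.trace {
  sampler = "adaptive"
  adaptive-sampler {
    groups {
      readiness {
        operations = ["/ready"]
        rules {
          sample = never
        }
      }
    }
  }
}
```

Note that this suppresses sampling of the operation rather than span creation itself, so the span metrics may still be recorded.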
Sean Glover
@seglo

good day. I'm having an issue where Kamon generates strange operation names when an endpoint has more than one verb implementation. I'm using guardrail to generate akka-http routes from an OpenAPI spec, and then using Kamon's akka-http server integration. I couldn't find any issues describing a similar problem, but I thought I would see if anyone here has a pointer.

e.g. if I have a GET /foo and a POST /foo, an operation name of /foo/foo is generated for the first endpoint defined in my OpenAPI spec. The second defined endpoint's operation name seems unaffected.

1 reply
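A minimal version of the setup being described might look like this (a sketch using the plain akka-http routing DSL rather than guardrail-generated code; route bodies are hypothetical):

```scala
// Sketch: two HTTP verbs sharing one path, as in the report above.
import akka.http.scaladsl.server.Directives._

val routes =
  path("foo") {
    get {
      complete("listing foo")   // first endpoint defined in the spec: GET /foo
    } ~
    post {
      complete("created foo")   // second endpoint defined: POST /foo
    }
  }
```

Per the report, Kamon's akka-http server instrumentation names the first endpoint's operation "/foo/foo" instead of "/foo", while the second endpoint is unaffected; the reproducer PR linked below demonstrates it with generated routes.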
Sean Glover
@seglo
i created a reproducer PR: kamon-io/Kamon#1063
Zhenhao Li
@Zhen-hao
hi, I'm new to Kamon. I inherited a codebase at a client of mine which uses Kamon with the datadog-api module. I assume sending metrics data via datadog-api is asynchronous and does not block metric updates, since that's the only sensible way. But it would be nice if people with more Kamon experience could confirm that.
1 reply
Akash Nema
@akash-nema-incontact
Hi Everyone
Can someone help me with the Kamon-Jaeger integration? Jaeger is not reading the Kamon spans I have created in my app. Application details:
Scala version: 2.12.13
Jaeger Version: 2.3.1
play framework version: 2.12
akka: 2.6.10
Akash Nema
@akash-nema-incontact
I'm new to Kamon. I have been following https://kamon.io/docs/latest/reporters/jaeger/
I can see the generated span traces on the Kamon status page, but not in Jaeger.
Shane
@Shailpat3Shane_twitter
I am getting this error after upgrading kamon-bundle from 2.2.1 to 2.3.1:
ch.qos.logback.core.joran.spi.JoranException: Problem parsing XML document. See previously reported errors.
        at ch.qos.logback.core.joran.event.SaxEventRecorder.recordEvents(SaxEventRecorder.java:65)
        at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:151)
        at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:110)
        at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:53)
        at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75)
        at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150)
        at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84)
        at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
        at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
        at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
        at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)

Caused by: org.xml.sax.SAXParseException; systemId: jar:file:/Users/shaileshpatil/workspace/xxxx-jar!/logback.xml; lineNumber: 42; columnNumber: 2; The markup in the document following the root element must be well-formed.
        at java.xml/com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1243)
        at java.xml/com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:635)
        at java.xml/com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:324)
        at ch.qos.logback.core.joran.event.SaxEventRecorder.recordEvents(SaxEventRecorder.java:59)
        ... 18 more
Any help will be appreciated
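The SAXParseException points at line 42 of the logback.xml inside the jar: "markup in the document following the root element" usually means there is content after the closing </configuration> tag, often left behind by a merge or by jar assembly concatenating two logback.xml files. A well-formed file has exactly one root element (appender names and pattern below are illustrative):

```xml
<!-- Sketch: logback.xml must contain a single <configuration> root element. -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
<!-- Any element appearing after the closing tag above triggers
     "The markup in the document following the root element must be well-formed." -->
```

If the file looks fine in the source tree, it is worth inspecting the copy actually packaged into the jar, since that is the one the path in the stack trace points at.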
Aditya Maheshwari
@adityamundra

How to get the current value of a counter? For example:

Kamon
  .counter("requests.status.404")
  .withTag("Reason", "The requested resource could not be found.")
  .increment()

After calling increment, how do I check that the value was incremented?
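Kamon's public Counter API is write-only; for assertions in tests, the kamon-testkit module can read an instrument's current value. A sketch, assuming kamon-testkit 2.x's InstrumentInspection syntax (the mixin and method signature are assumptions, so check them against the testkit sources):

```scala
// Sketch: reading a counter's current value in a test via kamon-testkit.
// Assumes "io.kamon" %% "kamon-testkit" % <your Kamon version> % Test.
import kamon.Kamon
import kamon.testkit.InstrumentInspection

class RequestCounterSpec extends InstrumentInspection.Syntax {
  val counter = Kamon
    .counter("requests.status.404")
    .withTag("Reason", "The requested resource could not be found.")

  counter.increment()
  // value() is provided by the InstrumentInspection syntax (assumed name);
  // it returns the counter's accumulated value.
  assert(counter.value() == 1L)
}
```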

Zvi Mints
@ZviMints

Hey, I'm facing an issue with the following exception:

2021-11-29 17:44:41.710 31 WARN  MacOperatingSystem:365 - Failed syctl call for process arguments (kern.procargs2), process 42643 may not exist. Error code: 22

Dependencies:

  "io.kamon" %% "kamon-core" % "2.4.1",
  "io.kamon" %% "kamon-bundle" % "2.0.6",
  "io.kamon" %% "kamon-prometheus" % "2.0.1",

I'm trying to disable these errors with the following configuration in application.conf:

kamon.modules {
  host-metrics {
    enabled = no
  }
}

But it's not working. Any ideas?
cc: @ivantopo (:pray:)
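Independent of the config, one thing that stands out in the dependency list above is the version skew between the modules; kamon-bundle already contains kamon-core, and mixing 2.0.x and 2.4.x artifacts is a common source of odd behaviour. A sketch of an aligned sbt dependency list (2.4.1 chosen only because it matches the kamon-core version already shown; whether it fixes this particular warning is an assumption):

```scala
// Sketch: keep all Kamon artifacts on one version. The bundle already
// includes kamon-core, so listing kamon-core separately is optional.
val kamonVersion = "2.4.1"

libraryDependencies ++= Seq(
  "io.kamon" %% "kamon-bundle"     % kamonVersion,
  "io.kamon" %% "kamon-prometheus" % kamonVersion
)
```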

Mayank Srivastava
@mayanksriv
Hi @ivantopo!! I have recently added Kamon instrumentation to my Play application and expect to log traceIds, spanIds, context keys, etc. with logback. All of it is built into a docker image. But I faced a very weird issue: the same image, when run in a container, would sometimes work properly and sometimes just skip logging the traceId/spanId/context keys. This was fairly intermittent. On turning on debug mode for Kanela, I saw that both the logback and play-framework modules had order=2 and competed to be loaded. When logback loaded before play, everything worked fine, but if it loaded after play, that is when we noticed the missing logback entries. The way I have worked around it is to explicitly give different orderings to the two modules. But is it expected behaviour to have two modules compete to be loaded?
Ivan Topolnjak
@ivantopo
@/all hey folks, this is a reminder that the official Kamon folks are now giving support on Discord and are not checking this room at all. Other folks from the community might still help here, but if you want to make sure we take a look at your question, please drop it on Discord or GitHub. Thanks!
mofei100
@mofei100
com.mysql.jdbc.PreparedStatement with message Class versions V1_5 or less must use F_NEW frames.. Class loader: jdk.internal.loader.ClassLoaders$AppClassLoader@9e89d68: java.lang.IllegalArgumentException: Class versions V1_5 or less must use F_NEW frames.
Anybody see this problem?
1 reply
Gaël Bréard
@gbrd
Hi, I'm trying to build with GraalVM native-image but it does not work. I can see this is supported here: https://github.com/kamon-io/Kamon/releases/tag/v2.3.0 but I could not find any docs.
Without the option --allow-incomplete-classpath, my build fails. And with this option, at runtime I get: "ERROR kamon.Init - Failed to attach the Kanela agent included in the kamon-bundle"
  • Can/should I use --allow-incomplete-classpath?
  • Should I add the kamon bundle to my dependencies (and thus the classpath)?
Ivan Topolnjak
@ivantopo
Hey @mofei100 and @gbrd, please come over to the Discord server, we are not looking at this chat very often: https://discord.gg/weHgVmJYNY
pknowles-9
@pknowles-9
Hi. How can we use kamon-testkit with Java? Are there any examples or documentation? I'd like to test Kamon metrics. Thank you.
Oleksandr Shevchenko
@o-shevchenko

Hey
I have a problem with response B3 header propagation for Kamon http4s.

    implementation("io.kamon:kamon-core_2.13:2.2.3")
    implementation("io.kamon:kamon-http4s-0.23_2.13:2.2.1")
    implementation("io.kamon:kamon-prometheus_2.13:2.2.3")
    implementation("io.kamon:kamon-jaeger_2.13:2.2.3")

Jaeger export works fine, and I can see that B3 headers are propagated correctly to my http4s service from the previous one, but I don't see B3 headers in the server response. kamon.propagation.http.default.entries.outgoing.span should be b3 by default.

kamon {
  environment {
    service = "service2"
  }

  trace {
    join-remote-parents-with-same-span-id = no
    adaptive-sampler {
      groups {
        metrics {
          operations = ["/metrics"]
          rules {
            sample = never
          }
        }
        healthz {
          operations = ["/healthz"]
          rules {
            sample = never
          }
        }
      }
    }
  }

  prometheus {
    start-embedded-http-server = yes
    embedded-server {
      hostname = "0.0.0.0"
      port = 9095
      metrics-path = "/metrics"
    }
  }

  jaeger {
    host = "jaeger-collector.svc.cluster.local"
    port = 14268
  }
}

I found a couple of issues on GitHub, but it looks like they have already been resolved.
Any thoughts on what could be wrong?

1 reply
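For reference, the default being described can also be set explicitly rather than relied on. A sketch of the relevant propagation block (the outgoing.span key is taken from the message above; the incoming.span key and the "b3" values are assumed to be the defaults):

```hocon
# Sketch: explicitly select B3 propagation for the span context entry,
# covering both incoming request headers and outgoing (response) headers.
kamon.propagation.http.default {
  entries {
    incoming.span = "b3"
    outgoing.span = "b3"
  }
}
```

Setting this explicitly at least rules out another config layer overriding the default.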
Zvi Mints
@ZviMints
Hey, I'm trying to integrate with Prometheus and Akka metrics.
https://stackoverflow.com/questions/71941845/is-it-possible-to-use-both-kamon-and-prometheus
Can someone take a look?
Oliver Schrenk
@oschrenk

I don't understand the difference between a Context Tag and a Context Entry.

Only tags seem to be able to be merged back into the MDC (see https://kamon.io/docs/latest/instrumentation/logback/#copying-context-to-the-mdc), and when I look at https://kamon.io/docs/latest/core/context/, tags are not mentioned.

Are they both the same (but historically different)?

ALVARO VILAPLANA GARCIA
@avilaplana
Hi, I have a Kamon gauge metric named ws_active_connections that carries 3 tags: tag1 with cardinality 417, tag2 with cardinality 25, and tag3 with cardinality 3.
As you can calculate, that metric could have 417 × 25 × 3 = 31,275 time series. For that reason I have some concerns:
  1. memory issues due to the number of time series stored internally (I think it is a TrieMap)
  2. performance issues due to the number of calls made to look up instruments and execute increment/decrement operations
Please, can you give me some advice?
SShivama
@SShivama
Hello Everyone.
I am new to Scala and Akka. I am getting the issue below. Can anyone help out with this?
native/libsigar-universal64-macosx.dylib' (fat file, but missing compatible architecture (have 'unknown,x86_64', need 'arm64e'))
1 reply
Deepika H
@deepika5555

Hey All,
I am using kamon-akka-http to fetch metrics around requests. I am currently not seeing metrics for status code 500; instead the count is incremented for status code 200.
For example: span_processing_time_seconds_count{component="akka.http.server", error="false", http_method="POST", http_status_code="200", instance="localhost:9095", job="Case-actor-metrics", operation="/case/v1/cases", span_kind="server"}

Even though POST /case/v1/cases gave me a 500 http status code, the metrics counter is incremented for the 200 status code on the same route and request method. Is this a bug in the library, or am I missing something here?

Zhenhao Li
@Zhen-hao
hi all, I notice that questions here are rarely answered.
If your question is for work and your company has a budget for paid consulting, I invite you to create a pairing request on https://pairtime.com. If I can't help myself, I will do my best to find the right people to help you.
Damian Albrun
@insdami

Hi guys, I'm struggling to configure Kamon to work alongside Lightbend Cinnamon propagation. One case is HTTP propagation: Cinnamon puts a Cinnamon-MDC header in the HTTP client headers, whose value is base64-encoded. What I'd like to do is accept that optional header and implement a custom HTTP header codec so I can parse it and add it to the context without overriding the Kamon ones, if that makes sense.

I'm looking to roll out the change from Cinnamon to Kamon service by service without losing telemetry. Custom codecs are mentioned in the docs, but that seems outdated since the latest version doesn't have such a trait.

I've tried to find examples in the repo but I'm not having any luck so far.