Bartłomiej Wierciński
@bwiercinski
OK, I've manually downloaded the missing file from https://repo1.maven.org/maven2/io/netty/netty-transport-native-epoll/4.1.50.Final/netty-transport-native-epoll-4.1.50.Final-linux-x86_64.jar, but it was strange that sbt didn't want to download the file by itself.
Ramakrishna Hande
@rhande-mdsol

Hi, I am working with Kamon Zipkin to trace requests. A request involves:

1) a call to the database that returns the result as a Monix Task, say Task[T]
2) using that result to make calls to a different web service, which is of type Future[HttpResponse]
3) using the result from 2 to make another database call

Before step 1 the trace_id is present, but it gets lost after step 1 and nothing after that gets traced.

If I replace 1) with a static list of records instead of a DB call, then tracing happens successfully.

"io.kamon" %% "kamon-core" % “2.1.9”,
"io.kamon" %% "kamon-scala-future" % “2.1.9"
"io.kamon" %% "kamon-executors" % “2.1.9”,
"io.kamon" %% "kamon-zipkin" % “2.1.9"
"io.kamon" %% "kamon-logback" % “2.1.9"

Is there any known issue with Monix tasks w.r.t. tracing?

Thanks in advance
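A likely cause is that the Kamon context is not carried across the thread hop introduced by the Monix scheduler when the DB task runs. Below is a minimal sketch of manually propagating the current context into a Task, assuming Kamon 2.x and monix.eval.Task; the helper name and wiring are illustrative, not Kamon's own API:

    import kamon.Kamon
    import monix.eval.Task

    // Capture the Kamon context on the calling thread (where the trace exists)
    // and restore it on the thread that actually runs the Task, so spans created
    // inside keep the same trace id.
    def withCurrentContext[A](task: Task[A]): Task[A] = {
      val ctx = Kamon.currentContext()
      Task.defer {
        val scope = Kamon.storeContext(ctx)     // restore on the executing thread
        task.guarantee(Task(scope.close()))     // drop it again once the task finishes
      }
    }

    // usage sketch: withCurrentContext(dbCall).flatMap(result => webServiceCall(result))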

Yaroslav Derman
@yarosman
Hello @SimunKaracic @ivantopo, what can you say about kamon-io/Kamon#926?
4 replies
federico cocco
@federico.cocco_gitlab
Hello. May I ask if there is any documentation available for https://mvnrepository.com/artifact/io.kamon/kamon-cats-io_2.13? Thanks!
2 replies
charego
@charego
Would it be possible for the build to produce a kamon-bundle sources JAR?
1 reply
I mean, currently there is one, but it's mostly empty.
kamon-bundle-sources.png
With the use of sbt-assembly and shading rules, I'm not sure if it would be easy (or possible), but it would be nice to have!
charego
@charego
Hmm, an old unresolved SO question about the same...
https://stackoverflow.com/questions/25720448/add-sources-to-sbt-assembly
ramohan
@ramohanraju_twitter
Hi team, I am trying to integrate Kamon into a Lagom microservice. At server startup I am seeing the error below, and I am not able to see the captured metrics on the configured port. Please help me with this.
7 replies
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
2021-03-15T08:22:21.461Z [ERROR][main] [CORR-ID - ] Init.attachInstrumentation 71 - Failed to attach the instrumentation agent
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at kamon.Init.attachInstrumentation(Init.scala:65)
at kamon.Init.attachInstrumentation$(Init.scala:60)
at kamon.Kamon$.attachInstrumentation(Kamon.scala:19)
at kamon.Init.init(Init.scala:36)
at kamon.Init.init$(Init.scala:35)
at kamon.Kamon$.init(Kamon.scala:19)
at kamon.Kamon.init(Kamon.scala)
at com.retisio.arc.account.impl.module.AccountModule.configure(AccountModule.java:38)
at com.google.inject.AbstractModule.configure(AbstractModule.java:61)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:344)
at com.google.inject.spi.Elements.getElements(Elements.java:103)
at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:173)
at com.google.inject.AbstractModule.configure(AbstractModule.java:61)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:344)
at com.google.inject.spi.Elements.getElements(Elements.java:103)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:137)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:103)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.google.inject.Guice.createInjector(Guice.java:78)
at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:200)
at play.api.inject.guice.GuiceApplicationBuilder.build(GuiceApplicationBuilder.scala:155)
at play.api.inject.guice.GuiceApplicationLoader.load(GuiceApplicationLoader.scala:21)
at play.core.server.ProdServerStart$.start(ProdServerStart.scala:54)
at play.core.server.ProdServerStart$.main(ProdServerStart.scala:30)
at play.core.server.ProdServerStart.main(ProdServerStart.scala)
Caused by: java.lang.IllegalStateException: No compatible attachment provider is available
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.install(ByteBuddyAgent.java:416)
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:248)
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:223)
at kamon.lib.net.bytebuddy.agent.ByteBuddyAgent.attach(ByteBuddyAgent.java:210)
at kamon.bundle.Bundle$.$anonfun$attach$3(Bundle.scala:50)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at kamon.bundle.Bundle$.withInstrumentationClassLoader(Bundle.scala:104)
at kamon.bundle.Bundle$.attach(Bundle.scala:50)
at kamon.bundle.Bundle.attach(Bundle.scala)
... 29 common frames omitted
2021-03-15T08:22:21.817Z [INFO][main] [CORR-ID -
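The "No compatible attachment provider is available" failure usually means the JVM cannot self-attach an agent at runtime (for example, when running on a JRE without attach support). A hedged workaround sketch is to load the Kanela agent at JVM startup instead, e.g. via the sbt-javaagent plugin (version numbers are illustrative):

    // build.sbt -- sketch only: start the JVM with the Kanela agent rather than
    // relying on runtime attachment; requires the sbt-javaagent plugin.
    enablePlugins(JavaAgent)

    javaAgents += "io.kamon" % "kanela-agent" % "1.0.9"

Alternatively, the agent can be passed directly with -javaagent:/path/to/kanela-agent.jar on the JVM command line.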
Paweł Kiersznowski
@pk044

Greetings everyone! I'm using Kamon 2.1.3 (kamon-bundle and kamon-datadog dependencies) with Play 2.7.3. I see that Kamon starts up successfully and records the metrics I created myself, but it doesn't report any span metrics, even though the Datadog span reporter is turned on. I don't see them either in Datadog or on the Kamon status page.

The span metrics stopped being recorded once I migrated from Kamon 1.x to 2.x; it used to work just fine back then. Is there anything I should add in code besides the configuration in application.conf? Thanks!

37 replies
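For reference, a minimal sketch of explicit Kamon initialization in a Play application, in case the agent/runner is not starting it automatically (Kamon 2.x; the class is illustrative and would still need to be bound as an eager singleton in a Play module):

    import javax.inject.{Inject, Singleton}
    import kamon.Kamon
    import play.api.inject.ApplicationLifecycle

    // Initialize Kamon once on startup and stop its modules on shutdown.
    @Singleton
    class KamonLoader @Inject()(lifecycle: ApplicationLifecycle) {
      Kamon.init()                                      // reads kamon.* settings from application.conf
      lifecycle.addStopHook(() => Kamon.stopModules())  // returns a Future[Unit]
    }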
Paweł Kiersznowski
@pk044
image.png
SimunKaracic
@SimunKaracic
image.png
Dmitriy Zakomirnyi
@dmi3zkm
Hello team,
After some research in the documentation, I didn't find an answer to whether Kamon is OpenTracing- and/or OpenTelemetry-compliant. Could you please advise on that?
2 replies
Declan Neilson
@decyg
Morning all, I just saw this: https://blog.gradle.org/jcenter-shutdown, and noticed that io.kamon's sbt-kanela-runner is only deployed to Bintray (unless I'm missing something). Is there any short-term intent to deploy this to Maven Central or similar as well, and if not, is there a migration path documented anywhere?
3 replies
René Vangsgaard
@renevangsgaardjp
Hello, I just configured a Play 2.8 project with Kamon, following the guide on the website. When running behind NGINX I get this error message in the log file.
[error] a.a.RepointableActorRef - Error in stage [kamon.instrumentation.akka.http.ServerFlowWrapper$$anon$1$$anon$2]: requirement failed: HTTP/1.0 responses must not have a chunked entity
java.lang.IllegalArgumentException: requirement failed: HTTP/1.0 responses must not have a chunked entity
    at scala.Predef$.require(Predef.scala:338)
    at akka.http.scaladsl.model.HttpResponse.<init>(HttpMessage.scala:518)
    at akka.http.scaladsl.model.HttpResponse.copyImpl(HttpMessage.scala:565)
    at akka.http.scaladsl.model.HttpResponse.withEntity(HttpMessage.scala:543)
    at kamon.instrumentation.akka.http.ServerFlowWrapper$$anon$1$$anon$2$$anon$5.onPush(ServerFlowWrapper.scala:164)
    at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541)
    at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:495)
    at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
    at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:625)
    at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:502)
15 replies
René Vangsgaard
@renevangsgaardjp
I am using Kamon with a Play project. Can I disable tracing of specific URL mappings? For example, there is no need to trace assets.
1 reply
René Vangsgaard
@renevangsgaardjp
Does the instrumentation support using Netty as the Play backend?
1 reply
Dmitriy Zakomirnyi
@dmi3zkm
Hi!
I wonder whether the JMX reporter is still supported? There is no JMX reporter in the main Kamon repository, but there is a dedicated one, kamon-jmx, whose latest release is 0.6.7, dated June 2017.
11 replies
SimunKaracic
@SimunKaracic
A new release of Kamon is here, featuring new Spring MVC and Spring WebClient support!
Try it out and let us know if you have any issues or feedback!
https://kamon.io/docs/latest/instrumentation/spring/spring-mvc/
Darren Bishop
@DarrenBishop

Hi folks, I have some questions that have bugged me for a while…

The context: Monix Reactive stream processing, where messages are buffered when back-pressure is detected; the source stream is grouped/partitioned (100 groups) and there is a back-pressure buffer per group.
I use a range sampler to track the size of these buffers, incrementing by 1 as each event goes in and decrementing by the size of the batch of events released downstream, i.e. when back-pressure is relaxed (see the sketch after this message).
Also note that there are several threads across several instances running this stream.

The questions:

  1. In CloudWatch/Grafana/xxx, how best to interpret or represent that metric?
  2. How does charting the metric over a 1s vs 1m vs 5m period/range alter the interpretation?
  3. Is it vital to have alignment between the tick-interval in the Kamon config and the period/range specified in the dashboard?
  4. If the tick-interval is 1m and the dashboard is set to 5m, would a sum aggregation over-report the buffer size, i.e. will it be shown to be 5x bigger than it ever actually is?
30 replies
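For reference, a minimal sketch of the range-sampler pattern described above, assuming Kamon 2.x (metric and tag names are illustrative):

    import kamon.Kamon

    // One range sampler, tagged per group: incremented as events enter the
    // back-pressure buffer, decremented by the batch size released downstream.
    val bufferSize = Kamon.rangeSampler("stream.backpressure.buffer-size")

    def onEventBuffered(group: String): Unit =
      bufferSize.withTag("group", group).increment()

    def onBatchReleased(group: String, batchSize: Long): Unit =
      bufferSize.withTag("group", group).decrement(batchSize)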
Srepfler Srdan
@schrepfler
Hi all, what's the current state of support for Lagom? I'm not as interested in the hot-reload functionality as in actor processing metrics (persistent actors as well), and ideally tracing on the service endpoints and the API client parts.
9 replies
danischroeter
@danischroeter
Hi there
I have an akka actor that manages a queue. The size of the queue is tracked by a RangeSampler using increment/decrement... This works fine normally.
But when this actor fails and a new queue is created, I need to reset this RangeSampler back to 0 when the new actor with a fresh queue is started...
There is no reset() on RangeSampler, and I do not know the current value, so I cannot do a decrement(currentValue).
Any ideas?
5 replies
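One possible workaround, sketched below, is to mirror the sampler's logical value in the application so it can be decremented back to zero when the actor restarts (Kamon 2.x; class and metric names are illustrative):

    import java.util.concurrent.atomic.AtomicLong
    import kamon.Kamon

    // Keep a shadow of the sampler's current value so a "reset" is just a
    // decrement of whatever is still outstanding.
    class QueueSizeTracker(queueName: String) {
      private val sampler = Kamon.rangeSampler("queue.size").withTag("queue", queueName)
      private val current = new AtomicLong(0)

      def enqueued(): Unit        = { current.incrementAndGet(); sampler.increment() }
      def dequeued(n: Long): Unit = { current.addAndGet(-n); sampler.decrement(n) }
      def reset(): Unit           = sampler.decrement(current.getAndSet(0))
    }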
Sebastien Vermeille
@sebastienvermeille
Hi there, I have a project using Spring Boot + akka + Kamon. Can we avoid using aspectjweaver for it to work? aspectjweaver causes some issues for us as we try to migrate to Java 15 now.
Any help is appreciated, thank you.
Sebastien Vermeille
@sebastienvermeille
Oh, Kanela is there now :D I'm a bit out of date, I'll try that.
50 replies
schrepfler
@schrepfler:matrix.org
[m]
Any thoughts about Grafana Cloud and its integrated logs/metrics/traces stack? Is there any way Kamon supports all of it?
2 replies
ramohan
@ramohanraju_twitter
Hi team, can someone help me out with Grafana dashboards (in .json format) for Kamon's Akka actor metrics?
2 replies
Sebastien Vermeille
@sebastienvermeille
Hello guys, do you know if Java 16 support is planned for Kanela in the near future?
5 replies
Dmitriy Zakomirnyi
@dmi3zkm
I wonder if anyone has succeeded with Kamon and akka-grpc on the server side?
Sherif Mohamed
@sherifkandeel

Hello guys, sorry for the silly, simple question: our team is trying to bump Kamon from 1.x to the latest version across the board, basically replacing:

def aspectjWeaver(version: String = "1.8.2"): ModuleID = "org.aspectj" % "aspectjweaver" % version % "java-agent"
def kamonCore(version: String = "1.1.6"): ModuleID = "io.kamon" %% "kamon-core" % version
def kamonSystemMetrics(version: String = "1.0.1"): ModuleID = "io.kamon" %% "kamon-system-metrics" % version
def kamonAkka25(version: String = "1.1.2"): ModuleID = "io.kamon" %% "kamon-akka-2.5" % version
def kamonAkkaHttp25(version: String = "1.1.0"): ModuleID = "io.kamon" %% "kamon-akka-http-2.5" % version
def kamonAkkaRemote25(version: String = "1.1.0"): ModuleID = "io.kamon" %% "kamon-akka-remote-2.5" % version
def kamonScala(version: String = "1.0.0-RC4"): ModuleID = "io.kamon" %% "kamon-scala" % version
def kamonPrometheus(version: String = "1.1.1"): ModuleID = "io.kamon" %% "kamon-prometheus" % version
def kamonJdbc(version: String = "1.0.2"): ModuleID = "io.kamon" %% "kamon-jdbc" % version

With simply

def kamonBundle(version: String = "2.1.14"): ModuleID = "io.kamon" %% "kamon-bundle" % version
def kamonPrometheus(version: String = "2.1.14"): ModuleID = "io.kamon" %% "kamon-prometheus" % version

From what I saw, I have two questions:

  • Is the Kanela agent already included in the Kamon bundle?
  • More importantly, we rely on akka 2.5.22; does the Kamon bump require a more recent akka dependency?

I am trying to estimate the work required for the bump; I can already see a lot of renamings! Thanks in advance.

4 replies
rmckeown
@rmckeown
Not sure if anyone has some hints, but I am trying to use the kanela 1.0.9 jar via java agent with "io.kamon" %% "kamon-bundle" % "2.1.14" and "io.kamon" %% "kamon-datadog" % "2.1.14". I am getting traces and system metrics, but not any of my actor metrics, like akka.actor.mailbox-size. Akka version is 2.5.23. I currently even have the doomsday wildcard enabled: instrumentation{ akka { ask-pattern-timeout-warning = lightweight filters { actors { doomsday-wildcard = on track { includes = ["**"] ...
7 replies
Khalid Reid
@khalidr

Hello. I am having problems with my Play Framework (2.8.7) app reporting spans to New Relic. I have it working for my Akka HTTP app, but for some reason the Play app is only showing metrics, not spans. The Kanela agent is set up and I see the log below:

BatchDataSender configured with endpoint https://metric-api.newrelic.com/metric/v1

1 reply
Dmitriy Zakomirnyi
@dmi3zkm
I keep getting /{}/{} operations despite the kamon.http-server.default.tracing.operations.mappings and kamon.akka.http.server.tracing.operations.mapping overrides.
1 reply
Tobias Eriksson
@tobiaseriksson
Is there some article about the performance penalty/impact that Kamon and its instrumentation have?
1 reply
I'm mostly interested in latency, but also in the added CPU and memory needed.
David Knapp
@Falmarri
Do you have any thoughts or opinions on https://github.com/open-telemetry/opentelemetry-java-instrumentation? Specifically, why should I keep using Kamon if I have services other than Scala ones in my ecosystem, and does Kamon plan to be an implementation of OpenTelemetry? I don't believe there is currently native support for the W3C tracing headers.
5 replies
jorkzijlstra
@jorkzijlstra

Is there any documentation on the kamon-mongo instrumentation (https://github.com/kamon-io/Kamon/tree/master/instrumentation/kamon-mongo)?

I included it in my application, but nothing is showing up in the status page or metrics. The code is hitting a breakpoint in the MongoClientInstrumentation class, so I think it is executing.

28 replies
Jakub Spręga
@cslysy

Hi Guys,

We are migrating from Kamon '0.6' to '2.1.15'. Everything works fine except for one remaining issue. We noticed that besides our own counters, Kamon started to report its internal counter 'kamon.trace.sampler.decisions'.
Despite setting 'kamon.trace.sampler' to 'never', Kamon still reports it with a value equal to 0. The question is: is it possible to configure Kamon in a way that it will not report this counter?

2 replies
Sherif Mohamed
@sherifkandeel

Hi guys, after upgrading from 1.x to 2.x we seem to be missing two metrics. The first is jvm.gc.promotion, which seems to be empty:

# HELP jvm_gc_promotion_bytes Tracks the distribution of promoted bytes to the old generation regions after a GC
# TYPE jvm_gc_promotion_bytes histogram

And the second, host_context_switches, does not seem to be there at all.

Did I miss something during migration?

I enabled all metrics:

    host-metrics {
      enabled = yes
    }

    process-metrics {
      enabled = yes
    }

    jvm-metrics {
      enabled = yes
    }

    prometheus-reporter {
      enabled = true
    }
4 replies
Henry
@hygt
Hello, Kanela doesn't run on Java 16+ because the embedded version of Byte Buddy is too old. I thought to myself, how hard could bumping a few versions be, let's create a PR... but I quickly realized the codebase isn't super approachable. :sweat_smile:
3 replies
David Knapp
@Falmarri
Is starting the JVM with the Kamon agent different from attaching it at runtime? Are there features that require starting with the agent? Or is it just that anything that happens before it attaches won't get instrumented (which seems obvious)?
5 replies
Sherif Mohamed
@sherifkandeel
How come the kamon-datadog repo is archived? Am I looking in the wrong place? https://github.com/kamon-io/kamon-datadog
1 reply
Sherif Mohamed
@sherifkandeel
Hi guys, kind of a silly question: I would like to start using kamon-datadog for reporting metrics to Datadog APM. I couldn't find a way to add the equivalent of Datadog's environment variables or system properties (e.g. dd.env or dd.version).
4 replies
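For what it's worth, Kamon has its own environment/service settings that reporters can pick up; below is a hedged sketch of setting them programmatically at startup (the service name, tag values, and exactly how the Datadog reporter maps them are illustrative assumptions):

    import com.typesafe.config.ConfigFactory
    import kamon.Kamon

    // Overlay kamon.environment settings on top of the regular configuration
    // before starting Kamon; these play a role similar to dd.env / dd.version.
    val overrides = ConfigFactory.parseString(
      """
        |kamon.environment {
        |  service = "my-service"
        |  tags {
        |    env = "production"
        |    version = "1.2.3"
        |  }
        |}
      """.stripMargin)

    Kamon.init(overrides.withFallback(ConfigFactory.load()))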
Mr. Follower
@MrFollo49718686_twitter

Hi all!

Whilst migrating from Spray to akka-http, we started two servers side by side: one with Spray, and one with akka-http that had some functionality migrated off Spray, both behind nginx, so that it looked like one server. Both the Spray (1.3.3) and akka-http (10.0.15) services used the same actor system.

The problem is that for one of the counters, which sits only in the akka-http router,

    val normals = Kamon.metrics.counter("response.normal")

the count apparently doubled compared to the Datadog stats before the deployment. Kamon is version 0.6.6. All the other stats looked normal. I cannot figure out why that could be the case. Would appreciate any pointers!

5 replies
David Knapp
@Falmarri
Is it possible to filter out spans from being reported? I'm using the New Relic reporter, and it's reporting spans for sending the metrics/spans to New Relic's HTTP API, since the reporter is implemented on okhttp.
7 replies
David Knapp
@Falmarri

Another question: when running my app under Kamon, I'm getting this error:

java.lang.VerifyError: Expecting a stackmap frame at branch target 20 Exception Details: Location: org/sqlite/jdbc3/JDBC3Statement.executeBatch()[I @17: goto Reason: Expected stackmap frame at this location.

and then a bunch of bytecode. It's definitely Kamon-related, since it doesn't happen when running without Kamon. It happens when creating a new org.sqlite.jdbc4.JDBC4Connection object. Is it possible it's an issue with me compiling/running on Java 8?

20 replies
moznion
@moznion_twitter

Hello folks.

I'm looking for a way to measure the response time of WSClient requests with Kamon.
I posted a question about this on Stack Overflow; does anybody know about this?

2 replies
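A possible approach, sketched under the assumption of Kamon 2.x and Play's WSClient (names are illustrative), is to wrap each request in a manually managed span so its duration is recorded:

    import kamon.Kamon
    import play.api.libs.ws.{WSClient, WSResponse}
    import scala.concurrent.{ExecutionContext, Future}
    import scala.util.{Failure, Success}

    // Start a span before the request and finish it when the response (or
    // failure) arrives; the span duration is the response time.
    def tracedGet(ws: WSClient, url: String)(implicit ec: ExecutionContext): Future[WSResponse] = {
      val span = Kamon.spanBuilder("ws-client-request").tag("http.url", url).start()
      ws.url(url).get().andThen {
        case Success(_) => span.finish()
        case Failure(e) => span.fail(e).finish()
      }
    }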
Ben Plommer
@bplommer
Is Scala 3 support on the roadmap? I opened a PR for the core modules (kamon-io/Kamon#1002) which was actually pretty straightforward, but the CI seems to have issues with sbt 1.5.0
1 reply
David Knapp
@Falmarri
I filed an issue for the incorrect bytecode I mentioned above, kamon-io/Kamon#1008; let me know if you have any thoughts, because I'm getting pushback from my team on adding -noverify to the process due to potential security concerns. Otherwise I have to disable my sqlite instrumentation.