Rajat Khandelwal
@prongs
And does sampling imply that if I try with just one websocket session, it might not even come up in jaeger? I need to try multiple sessions?
Ivan Topolnjak
@ivantopo
no, the Kamon instrumentation will automatically propagate context across actors and futures
regarding sampling, yes
the first Span of the chain takes a sampling decision and then all related spans just follow the same sampling decision
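(Editor's note: for anyone debugging locally, whether a single session shows up in Jaeger depends entirely on that first sampling decision. A minimal configuration sketch, assuming Kamon's `kamon.trace.sampler` setting; forcing sampling like this is only sensible while debugging:)

```hocon
kamon.trace {
  # "always" records every trace; the default sampler ("adaptive" in
  # Kamon 2.x, "random" in 1.x) may skip a single test session entirely.
  sampler = "always"
}
```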
Rajat Khandelwal
@prongs

Ah. Then logs are a better indicator than jaeger UI, and I might not have an updated picture of what's working and what's not.

So I'll put logging in the whole chain, that way I'll know if/when/where the trace id gets dropped.

Ivan Topolnjak
@ivantopo
yeap
that's the way to go
Rajat Khandelwal
@prongs

Thanks @ivantopo I'm able to verify trace propagation in logs across async boundaries. Not seeing the small individual traces (the ones for outbound http calls or db calls) in jaeger now, as they are now part of another parent -- the WebSocket one. Not seeing the WebSocket trace in jaeger either, but that's because of sampling.

This is what I did in a nutshell (so as to help others):

  • Created a context in the WebSocket actor class -- instance level
  • Created a span out of this context to represent the whole WebSocket session
  • In message handling, created a child span from the session-level span, put proper tags in it and forwarded messages to worker actors.
  • After the worker actor, context propagation works out of the box.
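(Editor's note: the steps above can be sketched roughly as follows; names are illustrative, not from the original message, and this assumes Kamon 2.x APIs:)

```scala
import akka.actor.{Actor, ActorRef}
import kamon.Kamon
import kamon.trace.Span

class WebSocketActor(worker: ActorRef) extends Actor {
  // 1. Instance-level span representing the whole WebSocket session.
  private val sessionSpan: Span =
    Kamon.spanBuilder("websocket-session").start()

  def receive: Receive = {
    case msg =>
      // 2. A child span per message, tagged with message details.
      val msgSpan = Kamon.spanBuilder("websocket-message")
        .asChildOf(sessionSpan)
        .tag("message.type", msg.getClass.getSimpleName)
        .start()
      // 3. Forward to the worker inside the span's context; from here on,
      //    Kamon's instrumentation propagates the context automatically.
      Kamon.runWithSpan(msgSpan, finishSpan = true) { worker ! msg }
  }

  // 4. The session span stays open for the lifetime of the socket.
  override def postStop(): Unit = sessionSpan.finish()
}
```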
Nihat Hosgur
@nhosgur
Hi @ivantopo, I wish to use a Kamon reporter with New Relic. You guys used to have New Relic documentation for 0.6.x, yet I don't see a reporter for 2.x
10 replies
Yaroslav Derman
@yarosman
Hello. Does anyone have problems with kamon, docker, alpine, ash script, and play integration, where metrics contain the wrong gc collector and generation, e.g.
jvm_gc_seconds_bucket{collector="scavenge",le="+Inf",component="jvm",generation="unknown"} when using G1?
Alexis Hernandez
@AlexITC
@prongs would you share an example? How do you create the child span? Possibly asChildOf is the only way; I was looking for a way to create a span from the parent
1 reply
Alexis Hernandez
@AlexITC
@ivantopo I understand I should be able to get a view like this one from Zipkin on the APM dashboard, but I have no idea how, can you give me some insights please? https://zipkin.io/public/img/web-screenshot.png
abhihub
@abhihub
Is it possible to be assigned to an issue @ivantopo? This is an issue that NR can fix: kamon-io/Kamon#789. I wanted to be assigned to it so I can track and prioritize it.
Ivan Topolnjak
@ivantopo
@abhihub it seems like we will need to invite you guys to the team and then we can assign
1 reply
Rajat Khandelwal
@prongs
@ivantopo in my websocket use-case, the websocket actor sends one message to a worker actor and the worker actor then sends multiple replies to the WebSocket actor. I'm seeing that context is intact until the first reply; it breaks in the subsequent replies
Rajat Khandelwal
@prongs

I think it's fine in 1:1 request-response cases, but when 1 request has multiple replies -- like a stream -- then you need to resort to manual context propagation.

Nevertheless, for me, for now, it's fine even without that. I'm creating new spans for new requests from the UI, from a parent span. The child spans might close too fast -- giving incorrect information -- but the parent span is there for the whole life of the socket.

Rajat Khandelwal
@prongs
Actually, found the case where context propagation breaks: my actor sends periodic tracking messages to itself. This is where it ends up breaking; context propagation doesn't happen. e.g.
context.system.scheduler.scheduleOnce(trackingInterval, self, Track)
Ivan Topolnjak
@ivantopo
oh man
I had this conversation before
I know
Ivan Topolnjak
@ivantopo
I don't know where I wrote it down, but I remember having this conversation several times and realizing that we have to instrument all the scheduleOnce calls to keep the same context
4 replies
Rajat Khandelwal
@prongs
any band-aid fix I can do? Other than having the "websocketcontext" propagate down everywhere? Problem is, it's not quite readable when some message handlers use Kamon.runWithContext and some don't.
Rajat Khandelwal
@prongs
ended up doing this
  implicit class KamonScheduler(scheduler: akka.actor.Scheduler) {
    final def scheduleOnceWithKamon(delay: FiniteDuration, receiver: ActorRef, message: Any)(
      implicit
      executor: ExecutionContext,
      sender:   ActorRef = Actor.noSender
    ): Cancellable = {
      // Capture the context at scheduling time and restore it when the
      // scheduled message is actually sent.
      val ctx = Kamon.currentContext()
      scheduler.scheduleOnce(delay, new Runnable {
        override def run(): Unit = Kamon.runWithContext(ctx) { receiver ! message }
      })
    }
  }
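(Editor's note: with the implicit class above in scope, the call site from the earlier example only changes in the method name. A hypothetical call site, assuming an implicit ExecutionContext is available:)

```scala
// Mirrors the scheduleOnce call quoted earlier, now context-preserving:
context.system.scheduler.scheduleOnceWithKamon(trackingInterval, self, Track)
```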
Franco Albornoz
@dannashirn
Hey everybody, I'm trying to migrate an existing library that uses kamon 1.x to 2.x and am almost done, but I'm struggling with migrating some custom http clients which used the Kamon.withContextKey method, which I believe should now be replaced with preStartHooks, but I can't seem to find any documentation about how to use those. Could you maybe point me in the right direction?
Rajat Khandelwal
@prongs
Hey, is there a config to disable kamon for test cases? I don't need to instrument tests.
3 replies
Rajat Khandelwal
@prongs

image.png

Incorrect "invalid parent span id". Weird behaviour

Rajat Khandelwal
@prongs

Another weird behaviour I see is incorrect ordering of spans in the UI. I do a future call and in the onComplete I send a message to an actor. The jaeger UI is showing them in reverse order: it shows the message passing was adjusted by (-x seconds), leading to an incorrect order in the UI. And if I add x to that time, it's actually correct -- as in, after the future span completion. Due to the adjustment it's giving incorrect behaviour.

AFAIK, UI has no way of disabling adjustment.

Ivan Topolnjak
@ivantopo
hey @prongs, that issue with Jaeger.. is it just about the order in which the Spans are shown in the UI, or are the parent-child relationships wrong?
Alexey Kiselev
@alexeykiselev

Hello, Kamon devs!
I'm trying to understand how Kamon generates host tag. In local application configuration file I see:

kamon {
  enable = yes
  environment.host = "1"

In base application configuration file:

kamon {
  # Set to "yes", if you want to report metrics
  enable = no

  environment {
    service = "xxx"

So, in InfluxDB for Histogram metrics made by Kamon I see:

host=1
instance=xxx@1
service=xxx

But if an org.influxdb.Point was written directly using the org.influxdb.InfluxDB driver, the metric also contains a host tag equal to the hostname of the machine. By the way, the application also reports JVM metrics using kamon-system-metrics. Is it possible that Kamon adds its host tag to metrics it didn't produce?

Arjun Karnwal
@arjunkarnwal

Hello, Kamon devs

I am having some issues using tapir with the akka-http backend together with kamon. I observe a problem with resolving operation names in the span metrics, and wonder if there's some workaround? cc @matwojcik I see you have faced this issue before. Did you find a solution?

Ivan Topolnjak
@ivantopo
hey @alexeykiselev, what Kamon versions are you using there? Some of those names sound like they're from previous Kamon versions! Regarding the host tag, ideally you should leave kamon.environment.host set to auto and only change it if you really need to. For example, in our own deployments we use an environment variable with the actual name of the host, since we are running everything in containers. There are a few settings in the InfluxDB reporter to decide whether you want to set the host tag or not
@arjunkarnwal what exactly is the problem you are seeing?
Arjun Karnwal
@arjunkarnwal
@ivantopo when I use tapir with the akka-http backend together with kamon, I observe span metrics like span_processing_time_seconds_count{operation="http.server.request"}, whereas if I don't use Tapir I get span_processing_time_seconds_count{operation="api/v1/mycustomerAPIPath"}. I don't know why, but when I use tapir the operation attribute gets overridden, i.e. instead of having the api path, it contains "http.server.request".
5 replies
Alexey Kiselev
@alexeykiselev

hey @alexeykiselev, what Kamon versions are you using there?

It's 2.1.0, but the configuration may be from older versions.
We set host to an internal node ID to show Kamon histograms in Grafana the same way we show metrics created directly. In the latter case we just add a node tag with the same ID.
I wonder: in a project where Kamon and kamon-system-metrics are used, is it possible that direct calls to org.influxdb.InfluxDB are somehow polluted with tags that Kamon sets?

2 replies
Red Benabas
@red-benabas
Hi Kamon devs! We're upgrading our SBT native packager project from Kamon 1.X to Kamon 2.X, following the steps in the guide https://kamon.io/docs/latest/guides/migration/from-1.x-to-2.0/. We've added the Kanela plugin as well as the Kanela agent. When I point the project to a zipkin server running on localhost we can see the traces, but not on a remote zipkin server. Has anyone come across this?
Also, is there a way to enable DEBUG level logging in kamon.zipkin?
7 replies
Alexis Hernandez
@AlexITC
Has anyone else integrated kamon on a play server that uses grpc clients generated by scalapb? I'm experiencing some weird issues where some grpc calls get invoked twice for no reason; my belief is that kamon may be the cause (I'm still experimenting)
7 replies
Yaroslav Derman
@yarosman
Hello. Has anyone tried to use kamon with zio?
Ilya
@squadgazzz
Hi! Is it possible to check connection status to metrics db with Kamon?
4 replies
Jakub Kozłowski
@kubukoz
Hi, how could I have Kamon-reported spans include the tags from the context?
10 replies
if that's not possible, I can see I can call .tag on the span directly, but I can't read it that way - or can I?
Khal!l
@redkhalil
Hi kamon aficionados, I would like to get your opinion on this subject. I've gotten feedback from a colleague that kamon has issues with its byte code generation, that it could damage your artifact -- in short, that it's not reliable. Of course he heard this from colleagues who are not around anymore, so I can't verify it. Has anybody had such an experience before?
5 replies
Daniel van der Ende
@danielvdende
Hi, I'm trying to integrate Kamon with an Akka HTTP app. The docs seem to suggest that it should work out of the box (e.g. add dependency to build.sbt, run Kamon.init()). I can see JVM and other system metrics, but 0 Akka HTTP metrics. Any idea what the problem here could be? Thanks!
34 replies
jogi3778
@jogi3778_twitter

Hi,

we use the Kanela agent 1.0.5 for our Scala application. Our application runs with openjdk-14.01 in a docker container.
When the kanela agent is initialized, it partially stops.

_  __                _        ______
| |/ /               | |       \ \ \ \
| ' / __ _ _ __   ___| | __ _   \ \ \ \
|  < / _` | '_ \ / _ \ |/ _` |   ) ) ) )
| . \ (_| | | | |  __/ | (_| |  / / / /
|_|\_\__,_|_| |_|\___|_|\__,_| /_/_/_/

==============================
Running with Kanela, the Kamon Instrumentation Agent :: (v1.0.5)
[main] INFO 2020-06-29 12:45:02  Logger : The Module: Executor Service Capture on Submit Instrumentation is disabled
[main] INFO 2020-06-29 12:45:02  Logger : The Module: Akka Remote Instrumentation is disabled
[main] INFO 2020-06-29 12:45:02  Logger : Loading Akka Instrumentation
[main] INFO 2020-06-29 12:45:02  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.EnvelopeInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.SystemMessageInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.RouterInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.ActorInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.ActorLoggingInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.AskPatternInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.EventStreamInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.ActorRefInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.akka_25.DispatcherInstrumentation
[main] INFO 2020-06-29 12:45:03  Logger :  ==> Loading kamon.instrumentation.akka.instrumentations.akka_26.DispatcherInstrumentation
[main] INFO 2020-06-29 12:45:04  Logger : Loading Executor Service Instrumentation
[main] INFO 2020-06-29 12:45:04  Logger :  ==> Loading kamon.instrumentation.executor.ExecutorTaskInstrumentation

Without docker we don't have this problem. With openjdk-13 it also works in docker. Is anyone familiar with this behaviour?

9 replies
Yaroslav Derman
@yarosman

Hello. Does anyone try to use kamon with zio ?

@ivantopo Have you tried to run kamon with zio? Because I don't get context propagation and correct span creation :(

Ivan Topolnjak
@ivantopo
no, didn't try it myself
didn't even try ZIO yet
jmendesky
@jmendesky
Hi, is there any interest to allow Kamon's Context to be serialised in HTTP according to the W3C Correlation Context proposal? https://w3c.github.io/correlation-context/
I could see this as a big plus for interoperability in a mixed-tech architecture. It's not standardised yet, but once finalised, would you consider it / would you be open to a PR, or does something speak against it in general?
I could see it implemented similarly to the Trace Context codecs, so that as a user you could choose how to serialise it.
ATM the only difference between how Kamon serialises context and the W3C proposal is that key-value pairs are separated with ; in Kamon and with , in the proposal (apart from the actual name of the header, which is configurable already)
2 replies
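(Editor's note: for illustration, the separator difference looks like this on the wire. The tag names and values are made up, Kamon's header name is configurable, and the W3C draft has since evolved into the baggage header:)

```text
context-tags: userID=1234;plan=free          <- Kamon's default serialization
Correlation-Context: userID=1234,plan=free   <- W3C Correlation Context draft
```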
Franco Albornoz
@dannashirn
Hey guys, so I had a shared library with Kamon 1.x that used the SpanCustomizer, which I am now trying to replace with a PreStartHook. The thing is, the customizer used to have a constructor with two parameters which were used to customize the spans. PreStartHook, however, needs a parameterless constructor, and I'm not sure how I'd be able to customize the spans dynamically with it.
jmendesky
@jmendesky
Hi all, is there a preferred way to propagate trace contexts and context tags via GRPC? I can see this issue has been around for a while kamon-io/Kamon#616 (assuming this is for akka-grpc) and there is also this repo: https://github.com/nezasa/kamon-akka-grpc. Is there a plan for the future?
Ilya
@squadgazzz

Hello, Kamon devs. Here's my config https://controlc.com/cf392b75. I can't send anything to influxdb with Kamon because of this error:

ERROR kamon.influxdb.InfluxDBReporter - Metrics POST to InfluxDB failed with status code [404], response body: {"error":"database not found: \"mydb\""}

I can't understand where it found mydb; I have db0 in my configs.