Ivan Topolnjak
@ivantopo
:joy: thanks!
Arsene
@Tochemey
@ihostage quick one: is your PR able to help instrument a Lagom-based application? We had some issues when doing instrumentation in Lagom using Kamon.
Sergey Morgunov
@ihostage
@Tochemey It's just the first step — implementation of the Lagom Circuit Breakers Metrics SPI.
Still, it doesn't fix the problem with using Kamon in Lagom DEV mode. I will try to fix that in the future when I find time for it :smile:
Arsene
@Tochemey
@ihostage :smiley:
Ivan Topolnjak
@ivantopo
the issue of lagom in dev mode is way more complicated because in that case there are (potentially) several different services running on the same JVM
together with SBT and all its stuff
:/
currently Kanela applies all instrumentation on the entire JVM and yeah, we have some filters to prevent touching some classloaders
but that is not enough to account for the fact that some of the services might have Kamon on their classpaths and some others not
plus, Kamon itself doesn't allow for several services on the same JVM
in fact, there has never been a need for it, only for Lagom in dev mode
Sergey Morgunov
@ihostage
Yep :smile:
Ali Ustek
@austek
Hi @ivantopo, for the SQS queue I’ve opted to manually create a span and pass it along with the objects that I send downstream, doing something like this:
private val deserializeAndProcessStage = Flow[SpanContext[SqsQueueMessage]]
    .via(killSwitch.flow)
    .throttle(throttleTotal, 1.second, throttleTotal, ThrottleMode.shaping)
    .mapAsync(parallelism) {
      case SpanContext(span, SqsQueueMessage(q, message)) =>
        Kamon.runWithSpan(span) {
          val parsedMsg = ArchonPayload.parse(message)
          logger.debugWithData(s"Message parsed", Map("message" -> parsedMsg.toString))
          parsedMsg match {
            case Right(mdsolUri) =>
              processMessage(mdsolUri.resourceUri)
                .map(Right(_))
                .recover {
                  case NonFatal(th) =>
                    Left(th)
                }
                .map(res => SpanContext(span, SqsQueueMessageResult(q, message, res)))

            case Left(parsingError) =>
              logger.errorWithData(s"An Error has occurred while ", parsingError.th, Map.empty)
              Future.successful(SpanContext(span, SqsQueueMessageResult(q, message, Right(parsingError))))
          }
        }
    }
and as for the span that’s been waiting for 17 days: it happens on my local machine on a fresh start of the app, connecting to a locally running OpenZipkin Docker container
Zvi Mints
@ZviMints

Hey all, I have a problem with Kamon and Datadog.
This is my configuration:

# Monitoring
kamon {
  datadog {
    flush-interval = 10 seconds
    hostname = datadog
    port = 8125
    application-name = ${?app.name}
    time-units = ms
    memory-units = mb

    subscriptions {
          system-metric = [ ]
          http-server = [ ]

    }
  }
  metric {
    tick-interval = 10 seconds
    track-unmatched-entities = no
    filters.trace.includes = []
  }
}

I connected to some Amazon queues and read messages from there; for each message that I insert into the database I do:

Kamon.metrics.counter(s"message_pulled", Map("queueName" -> queueName)).increment()

BUT while the database shows 100 messages at 12:20, for example, the Datadog logs only show 3. How is that possible?
Thanks!

Paul Bernet
@pbernet
Can I expose data via JMX with the new "kamon-bundle" "2.1.0"?
Corey Caplan
@coreycaplan3
Hey all, is there a way to fix the "parent is missing" warning in Play Framework operations? It pretty much ruins tracing operations because you only get a fraction of the view into the HTTP request.
Oto Brglez
@otobrglez

Hey guys! Long time no see,... :)

I'm trying to use Kamon (kamon-bundle, kamon-akka, kamon-prometheus, kamon-status-page and kamon-apm-reporter), version "2.1.0", on top of Scala 2.13 with Java 11 (openjdk version "11.0.2" 2019-01-15, OpenJDK Runtime Environment 18.9 (build 11.0.2+9), OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)). When I add the Kanela agent (1.0.5), even with very little bootup code in my Scala "App" class, I get the following error. Any ideas? Is this "my problem" or a Kamon / Kamon bundle / APM issue?

OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended

 _  __                _        ______
| |/ /               | |       \ \ \ \
| ' / __ _ _ __   ___| | __ _   \ \ \ \
|  < / _` | '_ \ / _ \ |/ _` |   ) ) ) )
| . \ (_| | | | |  __/ | (_| |  / / / /
|_|\_\__,_|_| |_|\___|_|\__,_| /_/_/_/

==============================
Running with Kanela, the Kamon Instrumentation Agent :: (v1.0.5)

12:24:48.341 [main] INFO  kamon.status.page.StatusPage - Status page started on http://0.0.0.0:5266/
12:24:48.734 [main] INFO  kamon.apm - Starting the Kamon APM Reporter. Your service will be displayed as [FeederApp] at https://apm.kamon.io/
12:24:49.544 [main] INFO  kamon.prometheus.PrometheusReporter - Started the embedded HTTP server on http://0.0.0.0:9095
12:24:49.841 [main] INFO  i.c.k.s.KafkaAvroSerializerConfig - KafkaAvroSerializerConfig values: 
  bearer.auth.token = [hidden]
  proxy.port = -1
  schema.reflection = false
  auto.register.schemas = true
  max.schemas.per.subject = 1000
  basic.auth.credentials.source = URL
  value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
  schema.registry.url = [http://localhost:8081]
  basic.auth.user.info = [hidden]
  proxy.host = 
  schema.registry.basic.auth.user.info = [hidden]
  bearer.auth.credentials.source = STATIC_TOKEN
  key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy

java.lang.ClassCastException: class scala.util.Success cannot be cast to class kamon.instrumentation.context.HasContext (scala.util.Success and kamon.instrumentation.context.HasContext are in unnamed module of loader 'app')
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.$anonfun$exit$1(FutureChainingInstrumentation.scala:134)
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.$anonfun$exit$1$adapted(FutureChainingInstrumentation.scala:134)
  at scala.Option.foreach(Option.scala:437)
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.exit(FutureChainingInstrumentation.scala:134)
  at scala.concurrent.Future$.<clinit>(Future.scala:515)
  at kamon.instrumentation.system.process.ProcessMetricsCollector$MetricsCollectionTask.schedule(ProcessMetricsCollector.scala:61)
  at kamon.instrumentation.system.process.ProcessMetricsCollector$$anon$1.run(ProcessMetricsCollector.scala:40)
  at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
  at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
  at java.base/java.lang.Thread.run(Thread.java:832)
[ERROR] [05/13/2020 12:24:50.987] [FeederApp-akka.actor.internal-dispatcher-6] [akka://FeederApp/system/IO-TCP/selectors] null
akka.actor.ActorInitializationException: akka://FeederApp/system/IO-TCP/selectors/$a: exception during creation
Hm,... might be that I've put Kamon.init() after ActorSystem() 🤔 If I swap them around I get this warning,...
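The ordering Oto suspects matches Kamon's documented setup: call Kamon.init() before the ActorSystem is created, so the instrumentation is already attached. A minimal sketch (the app and system names are illustrative):

```scala
import akka.actor.ActorSystem
import kamon.Kamon

object FeederApp extends App {
  // Initialize Kamon (attach instrumentation, start reporters) before
  // the ActorSystem exists, so its dispatchers get instrumented.
  Kamon.init()
  val system = ActorSystem("FeederApp")
}
```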
Oto Brglez
@otobrglez
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by kamon.instrumentation.executor.ExecutorInstrumentation$ (file:/Users/otobrglez/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/io/kamon/kamon-bundle_2.13/2.1.0/kamon-bundle_2.13-2.1.0.jar) to field java.util.concurrent.Executors$DelegatedExecutorService.e
WARNING: Please consider reporting this to the maintainers of kamon.instrumentation.executor.ExecutorInstrumentation$
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Oto Brglez
@otobrglez
What does this WARNING mean? Should I be worried? :)
Barnabás Oláh
@stsatlantis

Hello all,
I'm using kamon-http4s on the client side, but I don't see the metrics propagated for any of the endpoints. However, I get all the other metrics.
How I'm using it:

import kamon.http4s.middleware.client.KamonSupport
BlazeClientBuilder[F](ec).resource.map(KamonSupport(_)).use{ client =>
  //application code is here
}

When I'm debugging I can see that the spans are created, but there is this do-not-sample value. Is that the problem? And how can I change it to make it sample? Sorry for the silly question.
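The do-not-sample value is the sampling decision carried by the span. A possible knob, assuming the default sampler is making those decisions, is Kamon's trace sampler setting; a minimal config sketch:

```
# Sample every trace instead of leaving the decision to the default sampler.
# (Other documented values include "random", "never" and "adaptive".)
kamon.trace.sampler = always
```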

Oto Brglez
@otobrglez
Guys, I have another question. This is a very strange one.
Error:(60, 26) value increment is not a member of kamon.metric.Metric.Counter
Kamon.counter("x").increment()
What am I missing here,... I have import kamon.Kamon at the top of my app; and I have kamon-bundle, kamon-core,... and Kamon.init works...
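The error above is consistent with the Kamon 2.x metrics API, where Kamon.counter(...) returns a Metric.Counter that must first be refined (with tags, or explicitly without them) into an incrementable instrument; a minimal sketch, assuming Kamon 2.x:

```scala
import kamon.Kamon

// Metric.Counter has no increment(); refining it yields the instrument.
Kamon.counter("x").withoutTags().increment()

// Or refine with tags, e.g. a queue name:
Kamon.counter("message_pulled").withTag("queueName", "my-queue").increment()
```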
moriyasror
@moriyasror
Hi all, which version is recommended for production? Is v2.1.0 stable enough? Which versions do you use?
Diego Parra
@dpsoft
Hi @/all, we want to share two awesome grafana + prometheus dashboards created by @cspinetta , enjoy!
Jakub Kozłowski
@kubukoz
when would I use an entry vs a tag?
usagiy
@usagiy
Hi,

I would like to disable the following headers on my requests

X-B3-Sampled: 1
X-B3-SpanId: c45fa42fbb6e01db
X-B3-TraceId: 19f91af53ab25fb2

I have following settings in my conf file:

kamon.instrumentation.akka.http.client.propagation.enabled = no
kamon.instrumentation.akka.http.server.propagation.enabled = no

But this doesn't have any effect.
How do I disable the B3 headers?

I am using kamon-bundle 2.1.0

Thanks

John Watson
@jkwatson
Is the source available for these demo services somewhere? https://apm.kamon.io/demo/demo/services?from=1589819640&to=1589825040&minutes=90
Ivan Topolnjak
@ivantopo
good evening folks :wave:
Ivan Topolnjak
@ivantopo
@jkwatson the actual version that is running up there is in a private repo but last year I extracted them for a workshop: https://github.com/ivantopo/beescala-workshop/tree/master/automatic
@usagiy those settings look right to me.. have you tried maybe using kamon.instrumentation.http-client.default.propagation.enabled?
Ivan Topolnjak
@ivantopo
@kubukoz I would say, if a String/Long/Boolean is enough for your use case then go with tags. Kamon automatically knows how to propagate tags across HTTP and binary channels. It is all handled out of the box. Entries allow you to put any type you want in the context, but out of the box it will only be propagated in the same process. You will need to create an entry reader/writer if you want any custom entry to be propagated through HTTP and binary channels.
I wish we could somehow unify all of that.. ultimately they are just key/value pairs
and most of the time people just want to use the tags.. only we bother with the entries for the Span
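Ivan's tag/entry distinction can be sketched against the Kamon 2.x Context API; UserInfo below is a hypothetical application type used only for illustration:

```scala
import kamon.context.Context
import kamon.tag.TagSet

// Hypothetical application type, only to illustrate a typed entry.
case class UserInfo(id: String)

// Tags are String/Long/Boolean pairs; Kamon propagates them across HTTP
// and binary channels out of the box.
val tagged = Context.of(TagSet.of("tenant", "acme"))

// Entries carry arbitrary types under a typed key, but stay in-process
// unless a custom entry reader/writer is registered for propagation.
val userKey = Context.key[UserInfo]("user-info", UserInfo("anonymous"))
val both    = tagged.withEntry(userKey, UserInfo("user-42"))
```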
@moriyasror we are using 2.1.0 in production, I would recommend you do the same! We usually test new versions in our staging servers before we publish
Ivan Topolnjak
@ivantopo
we don't use all the available instrumentation (for example, we don't use Mongo) but usually it works fine
^^ I know that "usually" doesn't sound very reassuring but I can say we didn't have any problems so far :)
Jakub Kozłowski
@kubukoz
okay @ivantopo, I see. Thanks!
Srepfler Srdan
@schrepfler
oh, Kamon on Lagom would be super cool, subscribing to kamon-io/Kamon#760
Sergey Morgunov
@ihostage
@schrepfler :+1: But we need to wait until @ivantopo takes this module into the Kamon family :joy:
Srepfler Srdan
@schrepfler
well, it's obligatory now :D
usagiy
@usagiy

@ivantopo unfortunately it doesn't help
this is my configuration

kamon.instrumentation.akka {
      ask-pattern-timeout-warning = heavyweight

      http {
        server {
          propagation {
            enabled = no
            channel = default
          }
          metrics {
            enabled = yes
          }
          tracing {
            enabled = yes
            span-metrics = on
            response-headers {
              trace-id = "X-Correlation-ID"
            }
          }
        }


        client {
          propagation {
            enabled = no
          }
          metrics {
            enabled = yes
          }
          tracing {
            enabled = yes

          }
        }
      }
}

kamon.instrumentation.http-client.default.propagation.enabled = no

but I still have headers

X-B3-Sampled: 1
X-B3-SpanId: 88d1ac6d6ac48d84
X-B3-TraceId: 72491d0bf36c6cbd
usagiy
@usagiy
Thanks @ivantopo, it works now. I had duplicated dependencies; I don't know how that is connected.
Ivan Topolnjak
@ivantopo
@usagiy what do you mean by duplicate dependencies? maybe having both the bundle and the akka-http dependency?
VarunVats9
@VarunVats9
Can Kamon run with OpenJDK 11?
And can it run with JDK 8?
usagiy
@usagiy
@ivantopo yes, I did that as an experiment
usagiy
@usagiy

@ivantopo just one comment/suggestion:
for HTTP server tracing you have covered

 onType("akka.http.scaladsl.HttpExt")
    .advise(method("bindAndHandle"), classOf[HttpExtBindAndHandleAdvice])

which covers the case where we used the API

bindAndHandle(
    handler:   Flow[HttpRequest, HttpResponse, Any],
    interface: String, port: Int = DefaultPortForProtocol,
    connectionContext: ConnectionContext = defaultServerHttpContext,
    settings:          ServerSettings    = ServerSettings(system),
    log:               LoggingAdapter    = system.log)

but in some other places we used

IncomingConnection.handleWith[Mat](handler: Flow[HttpRequest, HttpResponse, Mat])(implicit fm: Materializer)

The handler: Flow[HttpRequest, HttpResponse, Any] was the same in both APIs, so it was trivial to switch to bindAndHandle, but it was confusing that one case worked and the other didn't. It seems it would be similar effort for you to cover IncomingConnection.handleWith as well, but maybe I'm wrong.