Ivan Topolnjak
@ivantopo
I will most likely fix it later today
John Watson
@jkwatson
ok, will do! thanks!
Ivan Topolnjak
@ivantopo
:thumbsup:
John Watson
@jkwatson
Ivan Topolnjak
@ivantopo
thanks!
moriyasror
@moriyasror
@ivantopo I run my service locally via IntelliJ and send several HTTP requests via Postman.
I see the 'http.server.connection.lifetime' metric only once, even though I see other metrics like http.server.requests.
moriyasror
@moriyasror
Hi, I managed to run Kamon from IntelliJ, but failed to do it on a server because of a missing JDK.
My question is: why must I have a JDK? In production we only have a JRE; the JDK is only for development.
Is there a way to run Kamon using a JRE only?
Ivan Topolnjak
@ivantopo
yes, you will have to use the -javaagent way of setting up the agent instead of Kamon.init()
what kind of application were you trying to instrument?
moriyasror
@moriyasror
@ivantopo I instrument akka-http.
can you explain more about the -javaagent option?
Ivan Topolnjak
@ivantopo
sure.. so, I'm guessing that what you did right now was to add the kamon-bundle and call Kamon.init() when the application starts, correct?
moriyasror
@moriyasror
@ivantopo Generally yes. I also changed the configuration to get only akka-http metrics, and wrapped the Kamon.init() to run based on configuration (to be able to disable it in case of issues).
My code is written in Scala. I saw some plugins which can add the agent; I will try to use them.
If I remove Kamon.init() from my code, will I still be able to control whether Kamon runs and disable it?
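
(For reference, gating Kamon.init() on a configuration flag can be as small as the sketch below; the monitoring.kamon-enabled key is made up for illustration and is not part of Kamon.)

import com.typesafe.config.ConfigFactory
import kamon.Kamon

object Main extends App {
  private val config = ConfigFactory.load()

  // Hypothetical flag: start Kamon only when explicitly enabled,
  // so it can be switched off without a code change.
  if (config.getBoolean("monitoring.kamon-enabled"))
    Kamon.init()

  // ... start the akka-http server, etc. ...
}
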
Ivan Topolnjak
@ivantopo
you will still need to have the Kamon.init call in your code because that's what starts all the reporters, status page and so on
the best way to add the -javaagent option would be using the sbt-javaagent plugin, we mention it at the end of this page: https://kamon.io/docs/latest/guides/installation/setting-up-the-agent/
when you start your application with the agent, by the time Kamon.init is executed the agent is already there, so there is no need to attach anything at runtime and no need to have a JDK
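
(A minimal sketch of the sbt-javaagent setup mentioned above; the versions shown are illustrative, check the linked page for current ones.)

// project/plugins.sbt
addSbtPlugin("com.lightbend.sbt" % "sbt-javaagent" % "0.1.6")

// build.sbt: let sbt start the JVM with the Kanela agent attached,
// so Kamon.init() no longer needs a JDK to attach it at runtime.
enablePlugins(JavaAgent)
javaAgents += "io.kamon" % "kanela-agent" % "1.0.5"

// Outside sbt, the equivalent is passing the agent to the JVM directly:
//   java -javaagent:/path/to/kanela-agent.jar -jar your-service.jar
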
moriyasror
@moriyasror
@ivantopo thanks! I managed to run it via the plugin.
Guillaume Noireaux
@gnoireaux

Hello,

What is the recommended way to propagate context manually?
I'd like to follow a process through multiple services (Play Scala). They exchange messages via rabbitmq.
Would it be as simple as using AMQP headers the same way HTTP headers are used? How to do that?

Can I use something like https://github.com/opentracing-contrib/java-rabbitmq-client with Kamon and then Jaeger export?

Thanks for your patience!

5 replies
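
(Not from the thread, but as a rough sketch of the "AMQP headers like HTTP headers" idea, assuming Kamon 2.x's kamon.context.HttpPropagation API: the codec reads and writes plain String-to-String header maps, so RabbitMQ message properties can carry the context. The wiring below is illustrative.)

import kamon.Kamon
import kamon.context.HttpPropagation

import scala.collection.mutable

// Publishing side: serialize the current context into a header map
// and attach it to the outgoing AMQP message properties.
def contextHeaders(): Map[String, String] = {
  val headers = mutable.Map.empty[String, String]
  Kamon.defaultHttpPropagation().write(Kamon.currentContext(), new HttpPropagation.HeaderWriter {
    override def write(header: String, value: String): Unit = headers += (header -> value)
  })
  headers.toMap
}

// Consuming side: rebuild the context from the received headers and
// run the message handling inside it.
def runWithIncomingContext[T](headers: Map[String, String])(handle: => T): T = {
  val incoming = Kamon.defaultHttpPropagation().read(new HttpPropagation.HeaderReader {
    override def read(header: String): Option[String] = headers.get(header)
    override def readAll(): Map[String, String] = headers
  })
  Kamon.runWithContext(incoming)(handle)
}
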
armelouche
@armelouche
Hello (and thank you for your work, first of all). Are there any basic Grafana dashboards available for Kamon 2 like there were for Kamon 1 (JVM, etc.)?
Ivan Topolnjak
@ivantopo
@armelouche we don't have any "official" dashboards but there are a few created by folks around the globe on the Grafana site: https://grafana.com/grafana/dashboards?direction=asc&orderBy=name&search=kamon
1 reply
Sergey Morgunov
@ihostage
@ivantopo Please, any reaction to kamon-io/Kamon#760 :pray:
Tell me please, what do I need to do for this PR to be merged? :joy:
Ivan Topolnjak
@ivantopo
hey man
:sob:
Sergey Morgunov
@ihostage
:wave:
Ivan Topolnjak
@ivantopo
things have been really crazy on the other side of the job
and man, every single time I promise to check something with a time in mind I fail; the best intentions don't always align with what the world has planned for me :joy:
Sergey Morgunov
@ihostage

things have been really crazy on the other side of the job

Me too :joy: But I believe that sooner or later we can do that together :joy:

Ivan Topolnjak
@ivantopo
yeap, I really appreciate your persistence man, it is really motivating to see folks who care!
Sergey Morgunov
@ihostage
:+1: :wink:
From time to time I will ping you, and I will not fall behind. :joy:
Ivan Topolnjak
@ivantopo
:joy: thanks!
Arsene
@Tochemey
@ihostage quick one: is your PR able to help instrument a Lagom-based application? We had some issues when doing instrumentation in Lagom using Kamon.
Sergey Morgunov
@ihostage
@Tochemey It's just the first step: an implementation of the Lagom Circuit Breakers Metrics SPI.
Still, it doesn't fix the problem with using Kamon in Lagom dev mode. I will try to fix that in the future when I find time for it :smile:
Arsene
@Tochemey
@ihostage :smiley:
Ivan Topolnjak
@ivantopo
the issue of lagom in dev mode is way more complicated because in that case there are (potentially) several different services running on the same JVM
together with SBT and all its stuff
:/
currently Kanela applies all instrumentation to the entire JVM and yeah, we have some filters to prevent touching some classloaders
but that is not enough to account for the fact that some of the services might have Kamon on their classpaths and others not
plus, Kamon itself doesn't allow for several services on the same JVM
in fact, there has never been a need for it, only for Lagom in dev mode
Sergey Morgunov
@ihostage
Yep :smile:
Ghost
@ghost~5cc594f1d73408ce4fbee018
Hi @ivantopo, for the SQS queue, I’ve opted to manually create a span, pass it within the objects that I pass downstream, and do something as follows:
private val deserializeAndProcessStage = Flow[SpanContext[SqsQueueMessage]]
    .via(killSwitch.flow)
    .throttle(throttleTotal, 1.second, throttleTotal, ThrottleMode.shaping)
    .mapAsync(parallelism) {
      case SpanContext(span, SqsQueueMessage(q, message)) =>
        Kamon.runWithSpan(span) {
          val parsedMsg = ArchonPayload.parse(message)
          logger.debugWithData(s"Message parsed", Map("message" -> parsedMsg.toString))
          parsedMsg match {
            case Right(mdsolUri) =>
              processMessage(mdsolUri.resourceUri)
                .map(Right(_))
                .recover {
                  case NonFatal(th) =>
                    Left(th)
                }
                .map(res => SpanContext(span, SqsQueueMessageResult(q, message, res)))

            case Left(parsingError) =>
              logger.errorWithData(s"An Error has occurred while ", parsingError.th, Map.empty)
              Future.successful(SpanContext(span, SqsQueueMessageResult(q, message, Right(parsingError))))
          }
        }
    }
and as for the span that’s been waiting for 17 days: it happens on my local machine, with a fresh start of the app connecting to a locally run OpenZipkin Docker container
Zvi Mints
@ZviMints

Hey all, I have a problem with Kamon and Datadog.
This is my configuration:

# Monitoring
kamon {
  datadog {
    flush-interval = 10 seconds
    hostname = datadog
    port = 8125
    application-name = ${?app.name}
    time-units = ms
    memory-units = mb

    subscriptions {
      system-metric = [ ]
      http-server = [ ]
    }
  }
  metric {
    tick-interval = 10 seconds
    track-unmatched-entities = no
    filters.trace.includes = []
  }
}

I connected to some Amazon queues and read messages from there. For each message that I insert into the database, I do:

Kamon.metrics.counter(s"message_pulled", Map("queueName" -> queueName)).increment()

BUT I see that I have 100 messages in the database at 12:20, for example, while in the Datadog logs I only have 3. How is that possible?
Thanks!

Paul Bernet
@pbernet
Can I expose data via JMX with the new "kamon-bundle" "2.1.0"?
Corey Caplan
@coreycaplan3
Hey all, is there a way to fix the "parent is missing" warning in Play Framework operations? It pretty much ruins tracing operations because you only get a fraction of the view into the HTTP request.
Oto Brglez
@otobrglez

Hey guys! Long time no see,... :)

I'm trying to use Kamon (kamon-bundle, kamon-akka, kamon-prometheus, kamon-status-page and kamon-apm-reporter), version "2.1.0", on top of Scala 2.13 with Java 11 (openjdk version "11.0.2" 2019-01-15, OpenJDK Runtime Environment 18.9 (build 11.0.2+9), OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)). When I add the Kanela agent (1.0.5) with very little bootup code in my Scala "App" class, I get the following error. Any ideas? Is this "my problem" or a Kamon / Kamon bundle / APM issue?

OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended

 _  __                _        ______
| |/ /               | |       \ \ \ \
| ' / __ _ _ __   ___| | __ _   \ \ \ \
|  < / _` | '_ \ / _ \ |/ _` |   ) ) ) )
| . \ (_| | | | |  __/ | (_| |  / / / /
|_|\_\__,_|_| |_|\___|_|\__,_| /_/_/_/

==============================
Running with Kanela, the Kamon Instrumentation Agent :: (v1.0.5)

12:24:48.341 [main] INFO  kamon.status.page.StatusPage - Status page started on http://0.0.0.0:5266/
12:24:48.734 [main] INFO  kamon.apm - Starting the Kamon APM Reporter. Your service will be displayed as [FeederApp] at https://apm.kamon.io/
12:24:49.544 [main] INFO  kamon.prometheus.PrometheusReporter - Started the embedded HTTP server on http://0.0.0.0:9095
12:24:49.841 [main] INFO  i.c.k.s.KafkaAvroSerializerConfig - KafkaAvroSerializerConfig values: 
  bearer.auth.token = [hidden]
  proxy.port = -1
  schema.reflection = false
  auto.register.schemas = true
  max.schemas.per.subject = 1000
  basic.auth.credentials.source = URL
  value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
  schema.registry.url = [http://localhost:8081]
  basic.auth.user.info = [hidden]
  proxy.host = 
  schema.registry.basic.auth.user.info = [hidden]
  bearer.auth.credentials.source = STATIC_TOKEN
  key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy

java.lang.ClassCastException: class scala.util.Success cannot be cast to class kamon.instrumentation.context.HasContext (scala.util.Success and kamon.instrumentation.context.HasContext are in unnamed module of loader 'app')
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.$anonfun$exit$1(FutureChainingInstrumentation.scala:134)
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.$anonfun$exit$1$adapted(FutureChainingInstrumentation.scala:134)
  at scala.Option.foreach(Option.scala:437)
  at kamon.instrumentation.futures.scala.CleanContextFromSeedFuture$.exit(FutureChainingInstrumentation.scala:134)
  at scala.concurrent.Future$.<clinit>(Future.scala:515)
  at kamon.instrumentation.system.process.ProcessMetricsCollector$MetricsCollectionTask.schedule(ProcessMetricsCollector.scala:61)
  at kamon.instrumentation.system.process.ProcessMetricsCollector$$anon$1.run(ProcessMetricsCollector.scala:40)
  at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
  at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
  at java.base/java.lang.Thread.run(Thread.java:832)
[ERROR] [05/13/2020 12:24:50.987] [FeederApp-akka.actor.internal-dispatcher-6] [akka://FeederApp/system/IO-TCP/selectors] null
akka.actor.ActorInitializationException: akka://FeederApp/system/IO-TCP/selectors/$a: exception during creation
2 replies
Hm,... might be that I've put Kamon.init() after ActorSystem() 🤔 If I swap that around I get this warning,...
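
(That ordering matters: Kamon.init() should run before the actor system is created, so Kanela has instrumented Akka's and Scala's internals by the time they load. A minimal sketch, with the FeederApp name taken from the log above and the rest of the wiring omitted:)

import akka.actor.ActorSystem
import kamon.Kamon

object FeederApp extends App {
  // Initialize Kamon (and let Kanela attach and instrument) first...
  Kamon.init()

  // ...then create the actor system and the rest of the application wiring.
  val system: ActorSystem = ActorSystem("FeederApp")
}
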
Oto Brglez
@otobrglez
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by kamon.instrumentation.executor.ExecutorInstrumentation$ (file:/Users/otobrglez/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/io/kamon/kamon-bundle_2.13/2.1.0/kamon-bundle_2.13-2.1.0.jar) to field java.util.concurrent.Executors$DelegatedExecutorService.e
WARNING: Please consider reporting this to the maintainers of kamon.instrumentation.executor.ExecutorInstrumentation$
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Oto Brglez
@otobrglez
What does this WARNING mean? Should I be worried? :)