Jacob Wang
@jatcwang
You can do a universal:packageBin and unpack the resulting zip to confirm that the agent jar is packaged and used in the start script
Red Benabas
@red-benabas
Hey all, has anyone had experience trying to use Kamon instrumentation with AWS Java Lambda functions? In particular, I'm interested in the feasibility of attaching the kanela instrumentation agent in the context of AWS Lambdas
Maxim
@sankemax
Hi, is there an example of using Kamon context (version 1.x) with akka-http routes? I can't find it anywhere. Thanks :)
extractRequestContext.flatMap { requestContext =>
  val traceHeaderOp = requestContext.request.headers.find(_.name().toLowerCase == "x-salt-trace").map(_.value())
  Kamon.withContext(Context(TrafficTracing.ContextKey, traceHeaderOp)) {
    trace("checking")
    mapRouteResult { result =>
      trace(s"[$serviceName]: request processing finished")
      log.info(s"[$serviceName]: request processing finished - ${result.toString}")
      result
    }
  }
}
At the first trace call, which reports events to Datadog, the context is preserved, but inside the inner scope the context is lost.
Richard Grossman
@richiesgr_twitter
I'm not an expert on Kamon 1.x, but shouldn't Kamon handle this for you? You shouldn't need to inspect the request headers yourself; at least that's how it works in Kamon 2.
Maxim
@sankemax
This is technical debt in our org. Do you think it affects the problem I've mentioned?
Arsene
@Tochemey
Hello kamonito folks. Has anyone been able to get traces from a Lagom-based application? I am having trouble getting traces.
Szymon Kownacki
@simonnineone
Hello there. Got a quick question since I can't find it in the docs - maybe you could point me to the right place. What are the units for "jvm.gc" and "process.hiccups" measurements?
Looking at the values in Influx, the units seem to differ (by a lot).
> select time,max from "process.hiccups" order by time desc limit 3;
name: process.hiccups
time                max
----                ---
1599474360000000000 25296896
1599474300000000000 10223616
1599474240000000000 35127296
> select time,max from "jvm.gc" order by time desc limit 3;
name: jvm.gc
time                max
----                ---
1599474360000000000 8
1599474300000000000 7
1599474240000000000 7
key-eugene
@key-eugene
Hello! Can someone take a look at kamon-io/Kamon#845? This fix would be really helpful :)
Maxim
@sankemax
Hi, I would like to know whether Kamon is thread-safe when used with async functions. Will it be a problem?
Maxim
@sankemax
Another question: how do I work with Kamon when using akka streams and async functions? Should I create the context and pass it manually, or can I just create a scope? If I create a scope, how do I close it?
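Not an authoritative answer, but the "create a scope / close it" question can be sketched with a plain-Scala stand-in. Everything below (`ContextStorage`, `store`, `runWith`) is invented for illustration; the Kamon 2.x calls I recall are `Kamon.storeContext(ctx)`, which returns a `Scope`, `scope.close()`, and the wrapper `Kamon.runWithContext(ctx) { ... }` — verify against the docs before relying on the names.

```scala
import java.util.concurrent.atomic.AtomicReference

// Stand-in for a context store, to illustrate the scope pattern.
final class ContextStorage[A](empty: A) {
  private val current = new AtomicReference[A](empty)
  def get: A = current.get

  // Storing a context returns a "scope"; closing it restores the previous context.
  def store(ctx: A): AutoCloseable = {
    val previous = current.getAndSet(ctx)
    () => current.set(previous)
  }

  // The safe pattern: always close the scope in a finally block, so the
  // context cannot leak onto the next task scheduled on the same thread.
  def runWith[B](ctx: A)(body: => B): B = {
    val scope = store(ctx)
    try body finally scope.close()
  }
}

val storage = new ContextStorage("empty")
val seen = storage.runWith("trace-123") { storage.get }
```

The key point is closing the scope in a `finally` so a context never leaks onto the next task that reuses the thread — exactly the hazard with async functions and stream stages.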
alexander-branevskiy
@alexander-branevskiy
hello, did anyone try to use Logback tracing across several execution contexts?
it seems that it doesn't work... (btw I have a Play app + actors + DB interaction)
alexander-branevskiy
@alexander-branevskiy

my code has the following structure

logger.info("log1")
val f: Future[SomeData] = for {
  res <- fetchSomeDataFromDb() // this op will be performed by a different EC
  _    = logger.info("log2")
} yield res

The "log1" trace id is OK: it is unique for each request.
The "log2" trace id is the same for all requests.

alexander-branevskiy
@alexander-branevskiy
ok, I think I have found the issue; the thread above can be skipped
alexander-branevskiy
@alexander-branevskiy
is there any workaround for kamon-io/Kamon#829 ?
Ivan Topolnjak
@ivantopo

Hello @/all :wave:

Today I would like to ask for a tiny bit of help from you: we are collecting Kamon testimonials to include on our website and, if you are here on Gitter, there is a good chance that you are having fun with Kamon! Would you like to share your story?

Take 30 seconds to answer these questions and I'll personally get back to you to confirm the logo and/or set up a short interview to hear your story. Thanks a lot, this means a lot to us!

Jakub Kozłowski
@kubukoz
hey, I'm trying to use the newrelic module for the first time...
oh hold on. Maybe I don't actually have a question...
ok, I do
2020-09-15 21:22:00.043 [DEBUG] c.n.t.metrics.MetricBatchSender [New Relic Metric Reporter] [-] Sending a metric batch (number of metrics: 195) to the New Relic metric ingest endpoint
2020-09-15 21:22:00.043 [DEBUG] c.n.t.m.json.MetricBatchMarshaller [New Relic Metric Reporter] [-] Generating json for metric batch.
2020-09-15 21:22:00.751 [DEBUG] c.n.t.transport.BatchDataSender [New Relic Metric Reporter] [-] Response from New Relic ingest API: code: 403, body: {}
2020-09-15 21:22:00.752 [WARN] c.n.t.transport.BatchDataSender [New Relic Metric Reporter] [-] Response from New Relic ingest API. Discarding batch recommended.: code: 403, body: {}
2020-09-15 21:22:00.754 [ERROR] kamon.module.ModuleRegistry [New Relic Metric Reporter] [-] Reporter [New Relic Metric Reporter] failed to process a metrics tick.
com.newrelic.telemetry.exceptions.DiscardBatchException: The New Relic API failed to process this request and it should not be retried.
    at com.newrelic.telemetry.transport.BatchDataSender.sendPayload(BatchDataSender.java:134)
    at com.newrelic.telemetry.transport.BatchDataSender.send(BatchDataSender.java:81)
    at com.newrelic.telemetry.metrics.MetricBatchSender.sendBatch(MetricBatchSender.java:67)
    at kamon.newrelic.metrics.NewRelicMetricsReporter.reportPeriodSnapshot(NewRelicMetricsReporter.scala:54)
    at kamon.module.ModuleRegistry.$anonfun$scheduleMetricsTick$1(ModuleRegistry.scala:213)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
    at scala.util.Success.$anonfun$map$1(Try.scala:255)
    at scala.util.Success.map(Try.scala:213)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
getting this when I keep the app running for a while, having made some requests too
I have kamon.newrelic.nr-insights-insert-key set to a value that used to work with the zipkin-compatible API (using the kamon-zipkin module)
and I'm using the suggested EU region endpoints
Jakub Kozłowski
@kubukoz
any idea about what might be wrong in the setup? The agent is on the classpath too, saw the kanela logo on startup
Ivan Topolnjak
@ivantopo
hey @kubukoz, probably @jkwatson can help you with that :)
kr
@xtrntr
hi, I am running the Kamon bundle with Kanela successfully, but I only see the host metrics modules on the status page; I don't see the akka http modules. The docs say they should be turned on by default.
Christopher Mead
@testlabauto
hi, I have Kamon working in my app but I have a question about initializing a custom metric. I call Kamon.init() at the very beginning of my App and call Kamon.counter("xx.yy.zz").withoutTags().increment() in a Future. If I want to initialize xx.yy.zz to 0 in my App, do I need to use a Gauge instead? If so, is it necessary to use kamon-scala-future to pass this gauge to the Future? In the docs, the context entries seem simpler than a gauge, so I had some doubts about whether that would work or whether I was even looking at the problem correctly.
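Not an official answer, but the "initialize to 0" part may not need a Gauge at all. In a registry-based metrics API, merely looking an instrument up registers it, and it reports 0 until the first increment; the stand-in registry below (all names — `Registry`, `counter`, `value` — are invented for this sketch) shows the shape. My understanding is that Kamon behaves the same way: calling `Kamon.counter("xx.yy.zz").withoutTags()` at startup registers the counter, and the Future can later resolve the same instrument by name rather than having it passed through the context — but verify that against the Kamon docs.

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.LongAdder

// Stand-in metric registry: lookups register the instrument, which then
// reports 0 until it is incremented, so no Gauge is needed to "start at 0".
object Registry {
  private val counters = new ConcurrentHashMap[String, LongAdder]()
  def counter(name: String): LongAdder =
    counters.computeIfAbsent(name, _ => new LongAdder)
  def value(name: String): Long = counter(name).sum()
}

Registry.counter("xx.yy.zz")             // registered at app start, reports 0
val before = Registry.value("xx.yy.zz")
Registry.counter("xx.yy.zz").increment() // later, e.g. inside a Future
```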
Bastien Semene
@Sabbasth

Hi all,

I'm using kamon-prometheus and I'm having trouble getting labels applied. This configuration does not produce any labels:

kamon {
  reporters = ["kamon.prometheus.PrometheusReporter"]
  prometheus {
    include-environment-tags = yes
    environment{
      service = "matcher"
      env = "dev"
    }
  }
}

Curl on the exporter:

$ curl localhost:9095
# TYPE doc_count_total counter
doc_count_total 19901.0
# TYPE batch_count_total counter
batch_count_total 778.0
# TYPE assigned_partitions gauge
assigned_partitions 11.0
# TYPE handle_batch_seconds histogram
handle_batch_seconds_bucket{le="0.005"} 0.0
handle_batch_seconds_bucket{le="0.01"} 0.0
handle_batch_seconds_bucket{le="0.025"} 0.0
handle_batch_seconds_bucket{le="0.05"} 0.0
handle_batch_seconds_bucket{le="0.075"} 0.0
handle_batch_seconds_bucket{le="0.1"} 0.0
handle_batch_seconds_bucket{le="0.25"} 24.0
handle_batch_seconds_bucket{le="0.5"} 659.0
handle_batch_seconds_bucket{le="0.75"} 742.0
handle_batch_seconds_bucket{le="1.0"} 754.0
Christopher Mead
@testlabauto
Hi, there no longer seems to be a refine() method. How can I add multiple tags to a Counter?
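For what it's worth, a stand-in sketch of the style that replaced refine(): instead of refining an existing instrument, you tag the lookup itself and each tag combination resolves to its own counter. The `TaggedCounter` type below is invented for illustration; the Kamon 2.x calls I recall are `Kamon.counter("name").withTag("k", "v").withTag(...).increment()` and `withTags(TagSet...)`, but verify the exact API against the current docs.

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.LongAdder

// Stand-in for a metric whose tagged instruments live in a shared registry:
// withTag is immutable (returns a new handle), and each distinct tag set
// resolves to its own underlying counter.
final case class TaggedCounter(
    name: String,
    tags: Map[String, String],
    registry: ConcurrentHashMap[(String, Map[String, String]), LongAdder]) {

  def withTag(k: String, v: String): TaggedCounter = copy(tags = tags + (k -> v))

  def increment(): Unit =
    registry.computeIfAbsent((name, tags), _ => new LongAdder).increment()

  def value: Long =
    registry.computeIfAbsent((name, tags), _ => new LongAdder).sum()
}

val reg = new ConcurrentHashMap[(String, Map[String, String]), LongAdder]()
val c = TaggedCounter("requests", Map.empty, reg)
c.withTag("method", "GET").withTag("status", "200").increment()
```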
Peter Nerg
@pnerg
Since the upgrade to Kamon 2.x I've struggled to find a way to echo a custom HTTP header. I can receive and propagate custom headers but what if I want to add a header to the response?
Yannic Klem
@Yannic92
Hi, we're migrating to Kamon 2.x and it seems like the metric "jvm_class_loading" is no longer available. I'm aware that you've changed some naming schemes, but I couldn't find it under a similar name. Was it removed completely?
moriyasror
@moriyasror
Hi, I see that there was JMX support in version 1.x; what about the current version, 2.x?
Isak Rabin
@sgirabin

Hi All,

I am using kamon-prometheus (version 2.1.3) to send metrics to the Prometheus Pushgateway.
The issue is that the message is always empty. Here is my configuration:

prometheus {
    start-embedded-http-server = no

    include-environment-tags = yes

    pushgateway {
      api-url = "http://stg-xxxx"

      connect-timeout = 5 seconds
      read-timeout = 5 seconds
      write-timeout = 5 seconds
    }

    metric {
      tick-interval = 1 seconds
    }

  }

This is the error message:

[2020-09-23 16:41:00,128] ERROR Failed to send metrics to Prometheus Pushgateway (kamon.prometheus.PrometheusPushgatewayReporter:44)
java.lang.Exception: Failed to POST metrics to Prometheus Pushgateway with status code [405], Body: [Method Not Allowed
]
    at kamon.prometheus.HttpClient.doMethodWithBody(HttpClient.scala:40)
    at kamon.prometheus.HttpClient.doPost(HttpClient.scala:25)
    at kamon.prometheus.PrometheusPushgatewayReporter.reportPeriodSnapshot(PrometheusPushgatewayReporter.scala:43)
    at kamon.module.ModuleRegistry$$anon$1.$anonfun$run$2(ModuleRegistry.scala:176)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
    at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:654)
    at scala.util.Success.$anonfun$map$1(Try.scala:251)
    at scala.util.Success.map(Try.scala:209)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
    at scala.concurrent.impl.CallbackRunnable.run$$$capture(Promise.scala:60)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Declan
@d-g-n
hello all, we're currently using the span.processing-time metric, with the tags associated with the path, port, method etc., to report on how long a given endpoint takes. Is that the correct metric to target? The values coming through seem quite low: on the status page (port 5266) the unit is noted as "nanoseconds", but for an average response we're getting values like 7241728.0, which, if it's nanoseconds, is only ~7 milliseconds. Is this correct? Should I be looking at a different metric for reporting how long a given akka-http path takes? We are currently using kamon-bundle 2.1.6 and a custom reporter, but the stat value is unchanged.
Declan
@d-g-n
that said, the maximum values available for the ranges are fairly reasonable, so maybe I'm just overthinking it
Declan
@d-g-n
Hello again, I was just wondering if there's any strong pushback against extending the statsd reporter to also be able to represent Kamon tags as tags in this format: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-custom-metrics-statsd.html. The CloudWatch agent processes them into dimensions; would this be something that's accepted as a PR?
Nikhil Arora
@nikhilaroratgo_gitlab
Hello everyone, we are successfully using Kamon for metrics reporting. Now we want to set up distributed tracing. Is there a step-by-step guide for how to do that?
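Not a substitute for the official setup guides, but a minimal sketch of what a tracing setup tends to look like, assembled from snippets shared elsewhere in this channel (the Zipkin reporter and the version numbers are assumptions; substitute whichever trace reporter and backend you actually use):

```
// build.sbt — the bundle ships the instrumentation, a reporter ships the traces
libraryDependencies += "io.kamon" %% "kamon-bundle" % "2.1.0"
libraryDependencies += "io.kamon" %% "kamon-zipkin" % "2.1.0"

// application.conf — point the reporter at the trace backend
kamon.zipkin {
  host = "localhost"
  port = 9411
}
```

Beyond that, the pieces usually needed are calling `Kamon.init()` first thing in the app and running with the Kanela agent attached (e.g. via the sbt-kanela-runner plugins others have shared here).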
Peter Nerg
@pnerg
Added a PR a week ago; any comments would be appreciated... even better if it could be accepted/merged... :)
kamon-io/Kamon#858
TjarkoG
@TjarkoG
Hi there,
We are currently trying to write some metrics and aren't really satisfied with our solution.
We have some Kafka consumers which themselves have a mutable "metrics" map that returns the current values when called.
To write those values to Kamon, we currently have an actor that periodically reads the map and writes the values into a Kamon Gauge.
Is there a way to "register" a Gauge whose value is defined by a function?
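For what it's worth, I'm not aware of a register-a-callback Gauge in Kamon 2.x either, but the actor can be replaced by a small scheduler that samples the function and pushes into the gauge. This is a hedged sketch: `pollInto` is invented, and `sink` stands in for something like `Kamon.gauge("kafka.consumer.lag").withoutTags().update(_)` (the gauge name and the exact `update` signature are assumptions — check the docs).

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicLong

// Periodically sample a function and push the value into a sink; closing the
// returned handle stops the polling. With Kamon, sink would be the Gauge.
def pollInto(sample: () => Double, sink: Double => Unit,
             periodMillis: Long): AutoCloseable = {
  val exec = Executors.newSingleThreadScheduledExecutor()
  exec.scheduleAtFixedRate(() => sink(sample()), 0, periodMillis, TimeUnit.MILLISECONDS)
  () => { exec.shutdownNow(); () }
}

val lastSeen = new AtomicLong(0)
val handle = pollInto(() => 42.0, v => lastSeen.set(v.toLong), 10)
Thread.sleep(100) // let at least one sample through
handle.close()
```

The design point: the Kafka consumers' mutable "metrics" map is the `sample` function, so no actor mailbox or message protocol is needed just to copy a number.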
Nikhil Arora
@nikhilaroratgo_gitlab
Hello guys, how do I disable spans at a global level? I am currently disabling them individually like this. Is there an easier way?
instrumentation.akka.http {
  server {
    propagation.enabled = false
    tracing.enabled = false
  }
}

instrumentation.play.http {
  server {
    propagation.enabled = false
    tracing.enabled = false
  }
  client {
    propagation.enabled = false
    tracing.enabled = false
  }
}

instrumentation.http-server.default {
  propagation.enabled = false
  tracing.enabled = false
}

instrumentation.http-client.default {
  propagation.enabled = false
  tracing.enabled = false
}
Nikhil Arora
@nikhilaroratgo_gitlab
and how can I disable JDBC spans, as mentioned here: https://kamon.io/docs/latest/instrumentation/jdbc/statement-tracing/? There is no property to disable the spans.
kr
@xtrntr
how can i ignore akka http for specific endpoints?
Marian Diaconu
@neboduus

Hi all, I am trying out this technology, but the Zipkin server does not receive any data from my Scala Play app.

I have done the configuration for the Scala Play Framework:

// build.sbt
lazy val root = (project in file("."))
  .enablePlugins(PlayScala, JavaAgent)

libraryDependencies += "io.kamon" %% "kamon-bundle" % "2.1.0"

// plugins.sbt
 addSbtPlugin("io.kamon" % "sbt-kanela-runner-play-2.7" % "2.0.6")

// app.conf
kamon.zipkin {
  # Hostname and port where the Zipkin Server is running
  #
  host = "localhost"
  port = 9411

  # Decides whether to use HTTP or HTTPS when connecting to Zipkin
  protocol = "http"
}

And run a local zipkin server:

docker run -d -p 9411:9411 openzipkin/zipkin

But whenever I go to localhost:9411 I do not see any traces.

Marian Diaconu
@neboduus

I solved this by implementing manual instrumentation. The automatic instrumentation does not work... at least not for a simple Controller#action that does nothing.

kr
@xtrntr
I have akka instrumentation enabled, but spans are not showing up even with this config setup:
kamon.instrumentation.akka.filters {
  actors.trace {
    includes = [
...
    ]
  }
}

kamon.instrumentation.akka.filters {
  actors.start-trace {
    includes = [
...
    ]
  }
}

kamon.trace.sampler = "always"
i do see spans from akka.http and jdbc though