@/all hey folks, this is a reminder that we are migrating to Discord for questions and chat related to Kamon. You can join our Discord here: https://discord.gg/5JuYsDJ7au
Have a great week!
I cannot find any metrics exposed via kamon-prometheus.
application.conf:
kamon.prometheus {
  include-environment-tags = true
  embedded-server {
    hostname = 0.0.0.0
    port = 9404
  }
}
implementation:
class SinkConnector() extends org.apache.kafka.connect.sink.SinkConnector {
  val underlying: AerospikeSinkConnector = new AerospikeSinkConnector()

  override def start(map: util.Map[String, String]): Unit = {
    Kamon.init()
    Kamon.counter("testing-kamon").withoutTags().increment()
    try {
      underlying.start(map)
    } catch {
      case ex: Throwable =>
        println(s"Failure on underlying.start($map)")
        Kamon.counter("underlying-start-connector-failure")
          .withTag("config-file", configFile)
          .withTag("message", ex.getMessage)
          .increment()
        throw ex
    } finally {
      Kamon.stopModules()
    }
  }
}
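One thing that stands out in the snippet above: the `finally` block runs after every successful `start(...)` too, so `Kamon.stopModules()` shuts the reporters down right after the connector starts. A minimal sketch of keeping Kamon alive for the connector's lifetime instead (an assumption, not a confirmed diagnosis; the other required `SinkConnector` methods such as `taskClass`, `taskConfigs`, `config` and `version` are omitted, and `AerospikeSinkConnector` is from the snippet above):

```scala
import java.util
import kamon.Kamon
import org.apache.kafka.connect.sink.SinkConnector

class InstrumentedSinkConnector extends SinkConnector {
  val underlying: AerospikeSinkConnector = new AerospikeSinkConnector()

  override def start(map: util.Map[String, String]): Unit = {
    Kamon.init()
    Kamon.counter("testing-kamon").withoutTags().increment()
    // No `finally { Kamon.stopModules() }` here: that would tear down the
    // embedded Prometheus server immediately after a successful start.
    underlying.start(map)
  }

  override def stop(): Unit = {
    underlying.stop()
    Kamon.stopModules() // stop reporters only when the connector itself stops
  }
}
```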
dependencies:
"io.kamon" %% "kamon-prometheus" % "2.2.2" exclude("org.slf4j", "slf4j-api"),
"io.kamon" %% "kamon-core" % "2.1.0" exclude("org.slf4j", "slf4j-api")
I already have the JMX Exporter exposing Kafka metrics on 9404, and I tried to make Kamon use that port as well. When I remove the application.conf and use the default port 9095, I cannot port-forward to it for some reason.
Am I missing something?
Thanks!
2021-08-31 12:03:52,224 WARN Failed to attach the instrumentation because the Kamon Bundle is not present on the classpath (kamon.Init) [connector-thread-dashboard-connector-profile]
This shows up when I'm using "io.kamon" %% "kamon-prometheus" % "2.2.2" exclude("org.slf4j", "*"). Any ideas why?
Hi. We are experiencing the following WARN message:
Failed to record value [-401488] on [span.processing-time,{operation=serialize,error=false}] because the value is outside of the configured range. The recorded value was adjusted to the highest trackable value [3600000000000]. You might need to change your dynamic range configuration for this metric
So the recorded value is negative. We use Kamon's SpanBuilder.start(Instant), but the span is later (within sub-milliseconds) finished via Span.finish(), where the underlying Clock is used to determine the nanos of the finish time.
Could it be that this mixing causes negative values to be recorded?
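To illustrate the suspicion above (a Kamon-independent sketch, not Kamon's actual clock code): a wall-clock `Instant` and a monotonic nano counter have unrelated origins, so subtracting readings taken from two different sources can come out negative even when the finish genuinely happens after the start:

```scala
// Hypothetical illustration of mixing time sources; the constants are made up,
// only the arithmetic matters.
object ClockMixingSketch extends App {
  val startFromWallClock   = 1000000L // nanos read from clock A at "start"
  val finishFromOtherClock = 999000L  // nanos read from clock B, slightly behind A

  // The finish happened after the start in real time, yet the difference is
  // negative because the two readings share no common origin.
  val elapsed = finishFromOtherClock - startFromWallClock
  println(elapsed) // -1000
}
```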
Caffeine.newBuilder().recordStats(() -> new KamonStatsCounter("cache_name")).build();
but I'm not sure what needs to be passed to recordStats. I see it expects a Supplier, but this example isn't working, so I'm probably missing something.
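If the snippet above is being called from Scala, one guess (assuming `KamonStatsCounter` is your own `StatsCounter` implementation): `recordStats` takes a `Supplier<? extends StatsCounter>`, and the Java lambda syntax shown won't compile from Scala, so an explicit `Supplier` may be needed:

```scala
import java.util.function.Supplier
import com.github.benmanes.caffeine.cache.Caffeine
import com.github.benmanes.caffeine.cache.stats.StatsCounter

// Assumption: KamonStatsCounter is your own StatsCounter implementation.
val cache = Caffeine.newBuilder()
  .recordStats(new Supplier[StatsCounter] {
    override def get(): StatsCounter = new KamonStatsCounter("cache_name")
  })
  .build[String, String]()
```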
Caused by: java.lang.VerifyError: Expecting a stackmap frame at branch target 102
The application loader looks like this:
class CustomApplicationLoader extends GuiceApplicationLoader {
  override protected def builder(context: Context): GuiceApplicationBuilder =
    super
      .builder(context)
      .eagerlyLoaded()
}
I am trying to add tracing support to a Play 2.8 application with Kamon and Jaeger. I followed the [instructions here](https://kamon.io/docs/latest/reporters/jaeger/). I am able to see the startup logs for the Kanela agent as well as the Jaeger reporter, as follows:
[info] Running the application with the Kanela agent
_ __ _ ______
| |/ / | | \ \ \ \
| ' / __ _ _ __ ___| | __ _ \ \ \ \
| < / _` | '_ \ / _ \ |/ _` | ) ) ) )
| . \ (_| | | | | __/ | (_| | / / / /
|_|\_\__,_|_| |_|\___|_|\__,_| /_/_/_/
==============================
Running with Kanela, the Kamon Instrumentation Agent :: (v1.0.8)
--- (Running the application, auto-reloading is enabled) ---
[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9001
(Server started, use Enter to stop and go back to the console...)
2021-09-23 13:29:11,210 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.i.p.GuiceModule$KamonLoader - Reconfiguring Kamon with Play's Config
2021-09-23 13:29:11,211 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.i.p.GuiceModule$KamonLoader - play.core.server.AkkaHttpServerProvider
2021-09-23 13:29:11,213 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.i.p.GuiceModule$KamonLoader - 10 seconds
2021-09-23 13:29:11,573 [info] [play-dev-mode-akka.actor.default-dispatcher-11] k.j.JaegerReporter - Started the Kamon Jaeger reporter
Jaeger is started through a docker container with following command:
docker run -d --name jaeger1 -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 14250:14250 -p 9411:9411 jaegertracing/all-in-one:1.25
None of the traces are visible when I access my Play application's APIs. Is there any configuration I am missing here?
kamon {
  prometheus {
    embedded-server {
      hostname = 0.0.0.0
      port = 9095
    }
    buckets {
      time-buckets = [
        0.25,
        0.5,
        0.75,
        1,
        2.5
      ]
      information-buckets = [
        1024,
        2048,
        4096
      ]
    }
  }
  instrumentation {
    play {
      server.metrics.enabled = no
      http.server.tracing.enabled = no
      http.client.tracing.enabled = no
    }
  }
  modules.host-metrics.enabled = no
  modules.process-metrics.enabled = no
  modules.status-page.enabled = no
  trace {
    sampler = "never"
    span-metrics = off
    span-metric-tags {
      upstream-service = no
      parent-operation = no
    }
  }
}
kanela.modules {
  akka {
    enabled = no
  }
  akka-remote {
    enabled = no
  }
  akka-remote-sharding {
    enabled = no
  }
}
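One observation about the configuration above, if it belongs to the same application as the Jaeger question (an assumption on my part): with `sampler = "never"` and the Play tracing instrumentation disabled, no spans would be sampled or exported, so an empty Jaeger UI is the expected outcome. A minimal fragment to re-enable tracing while testing:

```hocon
kamon {
  trace {
    sampler = "always" # sample every trace while testing
  }
  instrumentation.play {
    http.server.tracing.enabled = yes
  }
}
```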
Good day. I'm having an issue where Kamon generates strange operation names when an endpoint has more than one verb implementation. I'm using guardrail to generate akka-http routes from an OpenAPI spec, together with Kamon's akka-http server integration. I couldn't find any issues describing a similar problem, but I thought I would see if anyone here has a pointer.
For example, if I have a GET /foo and a POST /foo, an operation name of /foo/foo is generated for the first endpoint defined in my OpenAPI spec. The second defined endpoint's operation name seems unaffected.
datadog-api module. I assume sending metrics data via datadog-api is async and does not block metric updates, since that's the only sensible way. But it would be nice if people with more Kamon experience could confirm that.
ch.qos.logback.core.joran.spi.JoranException: Problem parsing XML document. See previously reported errors.
at ch.qos.logback.core.joran.event.SaxEventRecorder.recordEvents(SaxEventRecorder.java:65)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:151)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:110)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:53)
at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75)
at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150)
at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
Caused by: org.xml.sax.SAXParseException; systemId: jar:file:/Users/shaileshpatil/workspace/xxxx-jar!/logback.xml; lineNumber: 42; columnNumber: 2; The markup in the document following the root element must be well-formed.
at java.xml/com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1243)
at java.xml/com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:635)
at java.xml/com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:324)
at ch.qos.logback.core.joran.event.SaxEventRecorder.recordEvents(SaxEventRecorder.java:59)
... 18 more
Hey, I'm facing an issue with the following exception:
2021-11-29 17:44:41.710 31 WARN MacOperatingSystem:365 - Failed syctl call for process arguments (kern.procargs2), process 42643 may not exist. Error code: 22
Dependencies:
"io.kamon" %% "kamon-core" % "2.4.1",
"io.kamon" %% "kamon-bundle" % "2.0.6",
"io.kamon" %% "kamon-prometheus" % "2.0.1",
I'm trying to disable these errors with the following configuration in application.conf:
kamon.modules {
  host-metrics {
    enabled = no
  }
}
But it's not working, any ideas?
cc: @ivantopo (:pray:)
logback and play-framework modules had order=2 and competed to be loaded. When logback loaded before Play, everything worked fine, but if it loaded after Play, we noticed missing logback entries. I have worked around this by explicitly giving the two modules different orderings. But is it expected behaviour to have two modules compete to be loaded?
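The workaround described above can be sketched as explicit `order` values in the Kanela module configuration (the module keys below are assumed to match the names mentioned; check the actual keys in your reference.conf):

```hocon
kanela.modules {
  logback {
    order = 1 # load logback instrumentation first
  }
  play-framework {
    order = 2
  }
}
```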
Hey
I have a problem with B3 response header propagation for Kamon http4s.
implementation("io.kamon:kamon-core_2.13:2.2.3")
implementation("io.kamon:kamon-http4s-0.23_2.13:2.2.1")
implementation("io.kamon:kamon-prometheus_2.13:2.2.3")
implementation("io.kamon:kamon-jaeger_2.13:2.2.3")
Jaeger export works fine, and I can see that B3 headers are propagated correctly to my http4s service from the previous one, but I don't see B3 headers in the server response. kamon.propagation.http.default.entries.outgoing.span should be b3 by default.
kamon {
  environment {
    service = "service2"
  }
  trace {
    join-remote-parents-with-same-span-id = no
    adaptive-sampler {
      groups {
        metrics {
          operations = ["/metrics"]
          rules {
            sample = never
          }
        }
        healthz {
          operations = ["/healthz"]
          rules {
            sample = never
          }
        }
      }
    }
  }
  prometheus {
    start-embedded-http-server = yes
    embedded-server {
      hostname = "0.0.0.0"
      port = 9095
      metrics-path = "/metrics"
    }
  }
  jaeger {
    host = "jaeger-collector.svc.cluster.local"
    port = 14268
  }
}
I found a couple of related issues on GitHub, but it looks like they have already been resolved.
Any thoughts on what could be wrong?
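For reference, the propagation setting mentioned above can also be stated explicitly in application.conf (this only restates what should already be the default, which can help rule out an override elsewhere):

```hocon
kamon.propagation.http.default {
  entries {
    outgoing {
      span = "b3" # write B3 headers on outgoing messages
    }
  }
}
```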
I don't understand the difference between a Context Tag and a Context Entry.
Only tags seem to be able to be merged back into the MDC (see https://kamon.io/docs/latest/instrumentation/logback/#copying-context-to-the-mdc), and when I look at https://kamon.io/docs/latest/core/context/, tags are not mentioned.
Are they both the same (but historically different)?
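A rough sketch of the distinction as I understand it (hedged; verify against the Context docs linked above): tags are plain string key/value pairs that can cross process boundaries and be copied into the MDC, while entries are typed values addressed by a `Context.Key` and meant for in-process use:

```scala
import kamon.Kamon
import kamon.context.Context
import kamon.tag.{Lookups, TagSet}

// A Context *tag*: plain String pairs, propagatable and MDC-copyable.
val tagged = Context.of(TagSet.of("user-id", "1234"))

// A Context *entry*: a typed value addressed by a Context.Key; not
// propagated unless a codec is configured for it.
final case class TenantInfo(name: String)
val TenantKey = Context.key[TenantInfo]("tenant", TenantInfo("unknown"))

Kamon.runWithContext(tagged.withEntry(TenantKey, TenantInfo("acme"))) {
  val tenant = Kamon.currentContext().get(TenantKey)              // typed entry
  val userId = Kamon.currentContext().getTag(Lookups.plain("user-id")) // tag
}
```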
ws_active_connections that contains 3 tags: tag1 with cardinality 417, tag2 with cardinality 25, and tag3 with cardinality 3.
- memory issues, due to the amount of time series stored internally (I think it is a TrieMap)
- performance and cardinality, due to the amount of calls made to look up and execute increment/decrement operations
Hey All,
I am using kamon-akka-http to get metrics about requests. I am currently not seeing metrics for status code 500; instead the count is incremented for status code 200.
For example: span_processing_time_seconds_count{component="akka.http.server", error="false", http_method="POST", http_status_code="200", instance="localhost:9095", job="Case-actor-metrics", operation="/case/v1/cases", span_kind="server"}
Even though POST /case/v1/cases returned a 500 status code, the metrics counter is incremented for status 200 on the same route and request method. Is this a bug in the library, or am I missing something here?
Hi guys, I'm struggling to configure Kamon to work alongside Lightbend Cinnamon propagation. One case is HTTP propagation: Cinnamon puts a Cinnamon-MDC header in the HTTP client headers, whose value is base64-encoded. What I'd like to do is accept that optional header and implement a custom HTTP header codec, so I can parse it and add it to the context without overriding the Kamon ones, if that makes sense.
I'm looking to roll out the change from Cinnamon to Kamon service by service without losing telemetry. Custom codecs are mentioned in the docs, but that seems outdated since the latest version doesn't have such a trait.
I've tried to find examples in the repo, but I'm not having any luck so far.