2021-03-08T08:33:05.129Z info service/service.go:411 Starting OpenTelemetry Collector... {"Version": "v0.20.0", "GitHash": "a10a1a7a", "NumCPU": 2}
2021-03-08T08:33:05.136Z info service/service.go:592 Using memory ballast {"MiBs": 204}
2021-03-08T08:33:05.136Z info service/service.go:255 Setting up own telemetry...
2021-03-08T08:33:05.219Z info service/telemetry.go:102 Serving Prometheus metrics {"address": "0.0.0.0:8888", "level": 0, "service.instance.id": "d3dd5ed8-a0d0-44df-b17a-185a4001420f"}
2021-03-08T08:33:05.220Z info service/service.go:292 Loading configuration...
2021-03-08T08:33:05.222Z info service/service.go:303 Applying configuration...
2021-03-08T08:33:05.222Z info service/service.go:324 Starting extensions...
2021-03-08T08:33:05.222Z info builder/extensions_builder.go:53 Extension is starting... {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check"}
2021-03-08T08:33:05.222Z info healthcheckextension/healthcheckextension.go:40 Starting health_check extension {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "config": {"TypeVal":"health_check","NameVal":"health_check","Port":13133}}
2021-03-08T08:33:05.222Z info builder/extensions_builder.go:59 Extension started. {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check"}
2021-03-08T08:33:05.223Z info builder/exporters_builder.go:306 Exporter is enabled. {"component_kind": "exporter", "exporter": "logging"}
2021-03-08T08:33:05.223Z info service/service.go:339 Starting exporters...
2021-03-08T08:33:05.223Z info builder/exporters_builder.go:92 Exporter is starting... {"component_kind": "exporter", "component_type": "logging", "component_name": "logging"}
2021-03-08T08:33:05.223Z info builder/exporters_builder.go:97 Exporter started. {"component_kind": "exporter", "component_type": "logging", "component_name": "logging"}
Error: cannot setup pipelines: cannot build pipelines: error creating processor "memory_limiter" in pipeline "metrics": checkInterval must be greater than zero
2021/03/08 08:33:05 application run finished with error: cannot setup pipelines: cannot build pipelines: error creating processor "memory_limiter" in pipeline "metrics": checkInterval must be greater than zero
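For context on that error: the memory_limiter processor refuses to start when check_interval is unset or zero. A minimal sketch of a working block (the limit values below are placeholders, tune them for your deployment):
processors:
  memory_limiter:
    check_interval: 1s       # must be greater than zero, which is what the error is about
    limit_mib: 400           # placeholder hard limit
    spike_limit_mib: 100     # placeholder spike allowance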
Can you help me understand this? If I install the OpenTelemetry Collector, do I also need a Jaeger agent running, or is it enough for the application's instrumentation (I use the Jaeger libraries) to send directly to the OpenTelemetry Collector?
And another question: if my application only sends Jaeger traces, I can disable the Prometheus and Zipkin receivers and leave only Jaeger, right?
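For the second question, a minimal sketch with only the Jaeger receiver enabled (the logging exporter is just an illustrative stand-in for whatever exporter you actually use):
receivers:
  jaeger:
    protocols:
      grpc:
      thrift_http:
exporters:
  logging:
service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging]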
Hi, I’m a newbie in OpenTelemetry. Could I get some examples for the k8s processor?
https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/k8sprocessor
I mapped
- from: resource_attribute
  name: k8s.pod.ip
with the extracted Istio attributes, but I couldn’t see the other mapped k8s attributes.
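For reference, a hedged sketch of how that pod_association block usually sits in the processor config (this assumes a contrib build where the processor's config key is k8s_tagger; the receiver and exporter names in the pipeline are illustrative):
processors:
  k8s_tagger:
    passthrough: false
    pod_association:
      - from: resource_attribute
        name: k8s.pod.ip
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8s_tagger]
      exporters: [logging]
Note that the processor can only add the other k8s attributes if it can reach the Kubernetes API and resolve the pod from that IP, so missing RBAC is a common reason the extra attributes never show up.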
Hi. I'm new to OpenTelemetry. Trying to use https://github.com/garthk/opentelemetry_honeycomb#opentelemetryhoneycomb with an Elixir app on a Kubernetes cluster, using the following config in config.exs:
config :opentelemetry,
  processors: [
    otel_batch_processor: %{
      exporter:
        {OpenTelemetry.Honeycomb.Exporter,
         write_key: System.get_env("HONEYCOMB_WRITEKEY"), dataset: "api-telemetry"}
    }
  ]
Should I also be setting an :api_endpoint option? (seen in https://hexdocs.pm/opentelemetry_honeycomb/OpenTelemetry.Honeycomb.Config.html#t:config_opt/0). Thanks!
OTL_COLLECTOR: |
  extensions:
    health_check:
  receivers:
    prometheus:
      config:
        scrape_configs:
          - job_name: 'xxxxxxxx'
            scrape_interval: 5s
            static_configs:
              - targets: ["localhost:9103"]
    otlp:
      protocols:
        grpc:
  processors:
    batch:
    spanmetrics:
      metrics_exporter: prometheus
  exporters:
    jaeger_thrift:
      url: "${JAEGER_ENDPOINT}"
    prometheus:
      endpoint: "0.0.0.0:9102"
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [spanmetrics, batch]
        exporters: [jaeger_thrift]
      metrics:
        receivers: [otlp]
        processors: [batch]
        exporters: [prometheus]
    extensions: [health_check]
Hi. Could somebody advise on how to set up “trace ID aware load balancing”? I’m trying to set up an OTel Collector cluster that samples successful traces but never samples out a trace with at least one erroneous span.
I see this note in the tail sampling processor repo: “Technically, trace ID aware load balancing could be used to support multiple collector instances, but this configuration has not been tested.”
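One common pattern is a two-tier setup: a front tier of collectors that only route by trace ID using the loadbalancing exporter, and a back tier that runs the tail_sampling processor. A minimal, hedged sketch of the front tier (hostnames and ports are placeholders):
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  loadbalancing:
    protocol:
      otlp:
        timeout: 1s
    resolver:
      static:
        hostnames:
          - sampling-collector-0:4317
          - sampling-collector-1:4317
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
Each back-tier collector then applies tail_sampling (e.g. a status_code policy to keep erroneous traces plus a probabilistic policy for the rest); because all spans of a given trace ID land on the same back-tier instance, the sampling decision sees the whole trace.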
I keep reading that it is recommended to run the collector as an agent on the host/container, and that we can additionally run a collector as a gateway service. Does that mean running the collector as an agent is a must, while running it as a gateway service is optional?
Design doc. IMO, if you already have old collector agents in your system, like Jaeger agents, it is probably better to run the collector as a standalone service, so you don’t need to change every application.
If you are introducing OpenTelemetry for the first time, running the collector as an agent may be better, since each application can report data to the nearest collector.
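To make the two options concrete, a hedged sketch of an agent-mode collector that simply forwards everything to a central gateway collector over OTLP (the gateway hostname is a placeholder, and depending on your collector version the insecure flag may instead live under a tls block):
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp:
    endpoint: otel-gateway:4317   # placeholder address of the gateway collector
    insecure: true                # assumes plaintext traffic inside the cluster
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]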
Hey all, looking for some advice on how to structure spans that we have implemented at the transport layer as opposed to application layer. We have TCP load balancer in our edge network that manages incoming connections/certs. We recently implemented 3 different spans: one for the connection establishment (a parent span), another for fetching the cert (child), and finally one for how long to proxy to the corresponding backend service (child).
Given the connection establishment span is at the TCP layer, this makes it difficult to extract context from a client side span since we don't have access to HTTP headers (at least in that moment). Ideally, our client side spans would be the parent of the connection establishment span.
I'm wondering if this is possible, or if our spans should be set up differently. There aren't many examples online of running distributed traces below the application layer.
Hi all, I'm trying to dynamically turn traces on/off at the OpenTelemetry agent level. I used -Dotel.traces.sampler:
java -javaagent:opentelemetry-javaagent-all.jar \
-Dotel.traces.exporter=otlp \
-Dotel.exporter.otlp.endpoint=http://localhost:4317 \
-Dotel.otlp.span.timeout=4000 \
-Dotel.resource.attributes=service.name=pet-clinic \
-Dotel.traces.sampler="always_off" \
-jar target/spring-petclinic-2.4.5.jar
But the problem is that when I need to turn traces on (-Dotel.traces.sampler="always_on"), I have to stop the process and rerun the command with the changed property, which means restarting my application too. Is there any alternative way to turn traces on/off without restarting the application?
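One workaround, if you already run a collector in front of the agent: leave the agent's sampler at always_on and do the on/off switching in the collector, so only the collector needs a config change and restart/reload. A hedged sketch using the probabilistic_sampler processor (the percentage is the knob; 0 effectively turns traces off, 100 turns them on; the logging exporter is an illustrative stand-in):
receivers:
  otlp:
    protocols:
      grpc:
processors:
  probabilistic_sampler:
    sampling_percentage: 100   # set to 0 to drop all traces at the collector
exporters:
  logging:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler]
      exporters: [logging]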
Hello team, I'm wondering how to make use of a Link in a Grafana query.
The current description of Link says:
A Span may be linked to zero or more other Spans (defined by SpanContext) that are causally related. Links can point to Spans inside a single Trace or across different Traces. Links can be used to represent batched operations where a Span was initiated by multiple initiating Spans, each representing a single incoming item being processed in the batch.
Once we attach a Link to a span, how do we make use of it in a Grafana query?
Suppose I attach a Link on span A (in trace A) pointing to span B (in trace B); can I get the result of trace B when querying trace A in Grafana?
Hello team, we have an issue with our logzio exporter: users are getting errors when using it with the opentelemetry-collector-contrib image version >= 0.24:
otel-agent | Error: cannot build pipelines: cannot build builtExporters: error creating logzio exporter: mkdir /tmp: permission denied
otel-agent | 2021/06/07 10:20:51 application run finished with error: cannot build pipelines: cannot build builtExporters: error creating logzio exporter: mkdir /tmp: permission denied
The exporter works fine with version <= 0.23.
Does anyone know what could cause these errors from version 0.24 on, and what can we do to solve it?
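The mkdir /tmp: permission denied message suggests the newer image is running the collector as a non-root user that cannot create files under /tmp. A hedged docker-compose sketch of one possible workaround (service name, image tag, and config path are illustrative; mounting a tmpfs gives the process a writable /tmp without running as root):
version: "3.7"
services:
  otel-agent:
    image: otel/opentelemetry-collector-contrib:0.24.0   # illustrative tag
    command: ["--config=/etc/otel/config.yaml"]
    volumes:
      - ./otel-config.yaml:/etc/otel/config.yaml
    tmpfs:
      - /tmp    # tmpfs mounts default to mode 1777, so a non-root user can write here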
Hey guys, when I submitted PR #1973 and ran the workflows, something went wrong with test-coverage:
{'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
How should I solve this problem?