    Endre Karlson
    @ekarlso

    @jpkrohling Have you seen this before with the OTEL Collector Operator?

    {"level":"error","ts":1605103614.469881,"logger":"controllers.OpenTelemetryCollector","msg":"failed to reconcile daemon sets","error":"failed to reconcile the expected daemon sets: failed to apply changes: DaemonSet.apps \"main-collector\" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{\"app.kubernetes.io/component\":\"opentelemetry-collector\", \"app.kubernetes.io/instance\":\"opentelemetry-system.main\", \"app.kubernetes.io/managed-by\":\"opentelemetry-operator\", \"app.kubernetes.io/name\":\"main-collector\", \"app.kubernetes.io/part-of\":\"opentelemetry\", \"fluxcd.io/sync-gc-mark\":\"sha256.S_o66xL9t1DMr3tS8jPOC8WO8DnOZ3mX1Rm3ZtvJS9M\"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\ngithub.com/open-telemetry/opentelemetry-operator/controllers.(*OpenTelemetryCollectorReconciler).RunTasks\n\t/workspace/controllers/opentelemetrycollector_controller.go:145\ngithub.com/open-telemetry/opentelemetry-operator/controllers.(*OpenTelemetryCollectorReconciler).Reconcile\n\t/workspace/controllers/opentelemetrycollector_controller.go:134\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.19.3/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.19.3/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.19.3/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.19.3/pkg/util/wait/wait.go:90"}

    seems to happen after a while
    Juraci Paixão Kröhling
    @jpkrohling
    no, I haven't -- what's this "fluxcd.io" label ?
    in any case, the operator is probably doing the wrong thing , could you open an issue ?
    Endre Karlson
    @ekarlso
    Sure
    @jpkrohling I wonder if it's the Operator taking labels from the CR instance directly which causes a mutation ?
    Juraci Paixão Kröhling
    @jpkrohling
    what does your CR look like? the operator shouldn't take the labels from the CR itself, but from inside the spec
    Endre Karlson
    @ekarlso
    It seems that's what's happening:
    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      annotations:
        fluxcd.io/sync-checksum: 40380fdc15e6654f514b948ee93ffef45226959a
      creationTimestamp: "2020-11-10T11:46:15Z"
      generation: 6
      labels:
        app.kubernetes.io/managed-by: opentelemetry-operator
        fluxcd.io/sync-gc-mark: sha256.S_o66xL9t1DMr3tS8jPOC8WO8DnOZ3mX1Rm3ZtvJS9M
      name: main
      namespace: opentelemetry-system
      resourceVersion: "87672233"
      selfLink: /apis/opentelemetry.io/v1alpha1/namespaces/opentelemetry-system/opentelemetrycollectors/main
      uid: 953f0d0c-ce61-43ed-a863-7f4ca13b6780
    The DaemonSet doesn't have any flux label on it, so is there some update logic going wrong?
    Juraci Paixão Kröhling
    @jpkrohling
    what's fluxcd ? is it possible that it is adding this label automatically, even though the operator isn't setting it ?
    this would explain the message: the operator reconciles a set of labels without this "fluxcd" label, so, when it tries to update to the desired state, it fails
    Endre Karlson
    @ekarlso
    @jpkrohling Weave's GitOps operator
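    One possible workaround here (an assumption, not something suggested in the thread; only if deleting the workload is acceptable): spec.selector on a DaemonSet is immutable, so once the selector the operator wants to apply differs from the one on the existing DaemonSet, the update is rejected. Deleting the generated DaemonSet lets the operator recreate it with the selector it currently reconciles:

    kubectl -n opentelemetry-system delete daemonset main-collector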
    Andrew Hsu
    @andrewhsu

    i forgot this channel was associated with the https://github.com/open-telemetry/opentelemetry-collector repo, so i’ve neglected to post relevant info here related to the collector. instead, i’ve been posting my collector comments to the https://gitter.im/open-telemetry/opentelemetry-specification gitter channel over the past few days. (just issue triage logistics comments)

    any chance this channel can be renamed to open-telemetry/opentelemetry-collector?

    3 replies
    Derrick Burns
    @derrickburns

    I am trying to configure the opentelemetry-collector to support Prometheus. I am getting this from my client:

    2020/11/12 00:16:26 rpc error: code = Unimplemented desc = unknown service opentelemetry.proto.collector.metrics.v1.MetricsService

    Here is my configuration:

    apiVersion: v1
    data:
      otel-collector-config: |-
        "exporters":
          "jaeger":
            "endpoint": "jaeger-collector.observability:14250"
            "insecure": true
          "prometheus":
            "endpoint": ":8889"
            "namespace": "monitoring"
        "extensions":
          "health_check": {}
          "zpages": {}
        "processors":
          "batch": null
          "memory_limiter":
            "ballast_size_mib": 683
            "check_interval": "5s"
            "limit_mib": 1500
            "spike_limit_mib": 512
          "queued_retry": null
        "receivers":
          "jaeger":
            "protocols":
              "grpc": null
              "thrift_http": null
          "opencensus": null
          "otlp":
            "protocols":
              "grpc": null
              "http": null
        "service":
          "extensions":
          - "health_check"
          - "zpages"
          "pipelines":
            "metrics":
              "exporters":
              - "prometheus"
              "receivers":
              - "opencensus"
            "traces":
              "exporters":
              - "jaeger"
              "processors":
              - "memory_limiter"
              - "batch"
              - "queued_retry"
              "receivers":
              - "otlp"
              - "jaeger"
              - "opencensus"
    kind: ConfigMap
    metadata:
      labels:
        app: otel-collector-conf
      name: otel-collector-conf
      namespace: observability
    3 replies
    Ideas?
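    One likely cause, assuming the client is sending metrics over OTLP: the metrics pipeline above only lists "opencensus" as a receiver, so the collector's OTLP gRPC server registers only the trace service and never opentelemetry.proto.collector.metrics.v1.MetricsService. A minimal sketch of the relevant part of the config with otlp added to the metrics pipeline:

    "service":
      "pipelines":
        "metrics":
          "exporters":
          - "prometheus"
          "receivers":
          - "otlp"
          - "opencensus"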
    Eric Mustin
    @ericmustin
    Hello friends, is there any processor that allows users to add attributes to a span based on span events? If I understand things correctly, some vendor exporters drop span events, so I was hoping to be able to append details of span events as span attributes within a processor
    2 replies
    sanjaygopinath89
    @sanjaygopinath89
    Hi team, trying to understand the routing processor: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/master/processor/routingprocessor. The name that we need to give in from_attribute, is that the span attribute name?
    Ritick Gautam
    @riticksingh

    hi all, I am trying to convert proto files to PHP for the OTLP/gRPC exporter using

    protoc --proto_path=opentelemetry-proto/ --php_out=proto $(find opentelemetry-proto/opentelemetry -iname "*.proto")

    Is it also required to have --grpc_out= ?
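    If the exporter needs generated gRPC client stubs (service classes) in addition to the message classes, then yes, a --grpc_out together with the gRPC PHP plugin is typically needed; a sketch, assuming grpc_php_plugin has been built and the path below is adjusted to where it lives:

    protoc --proto_path=opentelemetry-proto/ \
      --php_out=proto \
      --grpc_out=proto \
      --plugin=protoc-gen-grpc=/usr/local/bin/grpc_php_plugin \
      $(find opentelemetry-proto/opentelemetry -iname "*.proto")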

    Endre Karlson
    @ekarlso
    Can I do trace filtering based on path or something in OTEL ?
    3 replies
    Andrey Afoninsky
    @afoninsky
    hello all
    I'm trying to deliver traces into two systems (jaeger / grafana tempo) in order to play with them in parallel, so I use opentelemetry-collector as a receiver
    when I send traces directly to the collector I see them in its logs, but the downstream services don't have them (if I send directly to jaeger everything is ok)
    could you point me to how I can debug this and see where it fails? config and commands I use: https://gist.github.com/afoninsky/7496c4bb89cc461aa33ffeb64318a634
    thank you
    7 replies
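    For comparison with the gist, a minimal sketch of a traces pipeline fanning out to both backends (the Tempo endpoint, port, and exporter name here are assumptions; Tempo can accept OTLP over gRPC):

    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      jaeger:
        endpoint: jaeger-collector:14250
        insecure: true
      otlp/tempo:
        endpoint: tempo:55680
        insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [jaeger, otlp/tempo]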
    Divya Sharma
    @Div95

    Hi all, I am trying to export traces to Splunk using Splunk HEC exporter and Zipkin receiver for OpenTelemetry Collector.
    Problem: I am not able to see the traces in Splunk. However, logging exporter does show the logs for the traces.
    Could you please help me out with this? Thank you.

    Following is the content of otel-collector-config.yaml file:

    3 replies
    Screen Shot 2020-11-17 at 9.53.18 AM.png
    harini0597
    @harini0597
    Hi everyone. Can I please know the format of OpenTelemetry collector URI? Where can I look for this?
    2 replies
    Tigran Najaryan
    @tigrannajaryan
    I am thinking of cancelling the Collector SIG meeting tomorrow because of Kubecon.
    Rayhan Hossain (Mukla.C)
    @hossain-rayhan
    Hi everyone, I cannot build the opentelemetry-collector-contrib. Something is wrong with the DataDog dependency. Is it a known issue? I just pulled the latest code.
    make otelcontribcol
    GO111MODULE=on CGO_ENABLED=0 go build -o ./bin/otelcontribcol_darwin_amd64 \
            -ldflags "-X github.com/open-telemetry/opentelemetry-collector-contrib/internal/version.GitHash=c01d3b00  -X go.opentelemetry.io/collector/internal/version.BuildType=release" ./cmd/otelcontribcol
    go: finding module for package github.com/DataDog/datadog-agent/pkg/collector/corechecks/cluster
    go: found github.com/DataDog/datadog-agent/pkg/collector/corechecks/cluster in github.com/DataDog/datadog-agent v0.0.0-20201117210934-a3cba9a8cfd2
    go: github.com/DataDog/datadog-agent@v0.0.0-20201117210934-a3cba9a8cfd2 requires
        github.com/benesch/cgosymbolizer@v0.0.0: reading github.com/benesch/cgosymbolizer/go.mod at revision v0.0.0: unknown revision v0.0.0
    make: *** [otelcontribcol] Error 1
    2 replies
    Tigran Najaryan
    @tigrannajaryan
    @here a lot of people are unable to attend today. Unless I see objections I will cancel Collector SIG meeting today.
    10 replies
    @/all ^^^
    Rayhan Hossain (Mukla.C)
    @hossain-rayhan
    Hi @bogdandrutu @tigrannajaryan, can we get this merged please? A small bug fix & readme update. Code was reviewed and approved.
    open-telemetry/opentelemetry-collector-contrib#1626
    Tigran Najaryan
    @tigrannajaryan
    /all SIG meeting is cancelled today due to Kubecon.
    Endre Karlson
    @ekarlso
    What does this typically mean? Adjust - skipping unexpected point
    Endre Karlson
    @ekarlso
    Has there been issues with the OTEL collector dropping metrics?
    14 replies
    Primarily when using the Prometheus receiver
    Endre Karlson
    @ekarlso
    @jpkrohling @flands How much memory is typically needed for a "small" deployment? I've only set up the OTEL collectors as a DaemonSet and, in my Kubernetes SD config, limited the targets to only what runs on the node each collector is running on, which is / should be very few pods
    4 replies
    Endre Karlson
    @ekarlso
    image.png
    Steve ^
    Endre Karlson
    @ekarlso
    And CPU usage atm is basically idle ref ^
    Iris Grace Endozo
    @irisgve
    Hi there! Question about doing validations: I'm thinking of doing payload validations within the metrics/spans in a processor and if a metric/span does not have specific attributes, I want to reject them and propagate that error to the client with a 400. Currently, I think if the data has gone thru the receiver and the first processor rejects them, the client will only receive a 500. Are there any thoughts on supporting something like synchronous validations as a processor or is it not part of the intended design within the collector?
    6 replies
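    One building block that may help (a sketch, not a confirmed design; whether a given receiver turns this into a 400 rather than a 500 depends on that receiver's error mapping): a processor can reject data synchronously by returning an error from ConsumeTraces, and wrapping it with consumererror.Permanent signals that retrying the same payload won't help. Assuming the pdata API of this period, and a hypothetical required attribute "tenant.id":

    import (
        "context"
        "errors"

        "go.opentelemetry.io/collector/consumer/consumererror"
        "go.opentelemetry.io/collector/consumer/pdata"
    )

    // validateTraces is a hypothetical helper: reject the whole batch if any
    // span is missing a required attribute.
    func validateTraces(ctx context.Context, td pdata.Traces) error {
        rss := td.ResourceSpans()
        for i := 0; i < rss.Len(); i++ {
            ilss := rss.At(i).InstrumentationLibrarySpans()
            for j := 0; j < ilss.Len(); j++ {
                spans := ilss.At(j).Spans()
                for k := 0; k < spans.Len(); k++ {
                    if _, ok := spans.At(k).Attributes().Get("tenant.id"); !ok {
                        // Permanent marks the payload itself as bad, so callers know retrying is pointless.
                        return consumererror.Permanent(errors.New("span missing required attribute tenant.id"))
                    }
                }
            }
        }
        return nil
    }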
    ZhengHe
    @ZhengHe-MD
    Hi Team,
    I’d like to sample spans by internal span status, e.g. the status code, any suggestion? The original need is to sample all spans with error tag set to true.
    2 replies
    Eric Mustin
    @ericmustin
    can someone confirm my mental model here... the semantically correct way to check if a span is an 'Error' , once it's been converted to pdata, is via Span.Status()...is that correct?
    2 replies
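    For reference, a minimal sketch of that check, assuming the pdata API of this period:

    // Assumes: import "go.opentelemetry.io/collector/consumer/pdata"
    // isError reports whether a pdata span represents an error, using the span
    // status code rather than HTTP-style attribute ranges.
    func isError(span pdata.Span) bool {
        return span.Status().Code() == pdata.StatusCodeError
    }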
    lashukla
    @lashukla
    Hi All, can we have otel-collector running on MacOS as a native process ?
    5 replies
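    It can be built and run as a plain binary; a sketch, assuming the core repo's make target mirrors the contrib one shown earlier in this log and that a config file is already in place:

    git clone https://github.com/open-telemetry/opentelemetry-collector.git
    cd opentelemetry-collector
    make otelcol                # analogous to the contrib "make otelcontribcol" above
    ./bin/otelcol_darwin_amd64 --config=otel-collector-config.yaml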
    Rayhan Hossain (Mukla.C)
    @hossain-rayhan
    @bogdandrutu @tigrannajaryan can any of you please help me to get this reviewed and merged today? It's a one-line bug fix which replaces {{ }} with { }. Will highly appreciate it. Thanks.
    open-telemetry/opentelemetry-collector-contrib#1661
    Rayhan Hossain (Mukla.C)
    @hossain-rayhan
    Anybody who has the power, please help me out. Can any of you please help me to get this reviewed and merged today? It's a one-line bug fix which replaces {{ }} with { }. Will highly appreciate it. Thanks. @bogdandrutu @tigrannajaryan @andrewhsu
    open-telemetry/opentelemetry-collector-contrib#1661
    2 replies
    Endre Karlson
    @ekarlso
    To the person that fixed open-telemetry/opentelemetry-collector#2121: this fixed my issue with vanished metrics and the dirty workaround of continuously restarting the OTEL Collectors! thnx @kohrapha
    2 replies
    Juraci Paixão Kröhling
    @jpkrohling
    @ZhengHe-MD would you like to get on a live call, to try to sort out the easy CLA issue ?
    5 replies
    Eric Mustin
    @ericmustin
    Could I get a quick overview of the changes that were made here? https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/1489/files I think due to some timing issues the vendor exporter i'm maintaining got missed in this PR, and starting to see some bug reports now (open-telemetry/opentelemetry-collector-contrib#1684). Is the idea that, for vendor exporters, instead of attempting to match statusCode against a range (400/500 errors), you just see if status.Code() == pdata.StatusCodeError? I'd be happy to update the datadog exporter to match whatever the current spec is
    1 reply
    Pablo Baeyens
    @mx-psi
    Hi all, the hostmetrics receiver from the OpenTelemetry Collector reports metrics namespaced under system.swap. (see here) that are similar in meaning but different in naming to the system.paging. metrics that feature in the spec (see here). Is this expected? If not, what should be changed (spec/implementation)? Should I open an issue for this?
    1 reply
    Naga
    @tannaga
    Hi All, in custom processors/receivers/exporters is there a provision to send custom app metrics via the stats or obsreport package or any other means?
    For example: in my custom processor, there is a cache of metadata which gets refreshed periodically. If there is any problem reloading the cache I would like to emit a metric, so that I can set up the necessary monitoring.
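    On the stats route, a sketch of what emitting a custom metric from a component could look like, assuming the OpenCensus stats library that the collector's own obsreport instrumentation builds on (the measure and view names here are hypothetical):

    import (
        "context"

        "go.opencensus.io/stats"
        "go.opencensus.io/stats/view"
    )

    // mCacheReloadFailures counts failed metadata-cache reloads (hypothetical measure name).
    var mCacheReloadFailures = stats.Int64(
        "my_processor/cache_reload_failures",
        "Number of failed metadata cache reloads",
        stats.UnitDimensionless,
    )

    func init() {
        // Register a view so the measure is aggregated and can be exported
        // alongside the collector's own telemetry.
        _ = view.Register(&view.View{
            Name:        mCacheReloadFailures.Name(),
            Description: mCacheReloadFailures.Description(),
            Measure:     mCacheReloadFailures,
            Aggregation: view.Sum(),
        })
    }

    // recordCacheReloadFailure is called whenever a periodic cache refresh fails.
    func recordCacheReloadFailure(ctx context.Context) {
        stats.Record(ctx, mCacheReloadFailures.M(1))
    }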