    Shovnik Bhattacharya
    @shovnik
    I have been working on migrating the CircleCI CI/CD workflows and noticed the collector build is currently failing. Is this being looked at?
    1 reply
    Juraci Paixão Kröhling
    @jpkrohling
    @bogdandrutu you might have missed the notifications on github, so, wanted to ping you here about this: https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/1736#discussion_r534010815
    Juraci Paixão Kröhling
    @jpkrohling
    anyone available for a review?
    Azfaar Qureshi
    @AzfaarQureshi

    @bogdandrutu (pinging you because you worked on the FluentBit loadtests)

    I've been working on migrating the CircleCI workflows to GitHub Actions and there is just one job left to do: loadtest. In loadtest, all the tests are passing except TestLog10kDPS/FluentBitToOTLP, which I think might have an error in the implementation. I'll attach the logs of the run in the thread, but the main issue I'm running into is that I'm getting conflicting messages from the logs on whether the FluentBitWriter can connect to the NewOTLPDataReceiver or not. On one hand, the logs are filled with a whole bunch of (Sent: X items | Received: 0):

    2020/12/15 00:07:48 Agent RAM (RES):  49 MiB, CPU: 1.3% | Sent:     59600 items | Received:         0 items (0/sec)
    2020/12/15 00:07:51 Agent RAM (RES):  49 MiB, CPU: 0.3% | Sent:     89600 items | Received:         0 items (0/sec)
    2020/12/15 00:07:54 Agent RAM (RES):  49 MiB, CPU: 0.0% | Sent:    119600 items | Received:         0 items (0/sec)
    2020/12/15 00:07:57 Agent RAM (RES):  49 MiB, CPU: 0.0% | Sent:    149600 items | Received:         0 items (0/sec)
    ...
         validator.go:46: 
                Error Trace:    validator.go:46
                                        test_case.go:273
                                        scenarios.go:190
                                        log_test.go:78
                Error:          Not equal: 
                                expected: 0x249f0
                                actual  : 0x0
                Test:           TestLog10kDPS/FluentBitToOTLP
                Messages:       Received and sent counters do not match.

    but then I also get the following error message:

    test_case.go:312: Time out waiting for [all data items received]

    This is weird because test_case.go:312 is tc.t.Error("Time out waiting for", errMsg), meaning the errMsg in the log is saying all data items were received (which isn't really an error 🤔).

    Every other writer for every other test is able to connect to the NewOTLPDataReceiver, though, but I don't know enough about the FluentBit exporter to know why this one fails.

    Any help would be appreciated, thanks!

    6 replies
    Azfaar Qureshi
    @AzfaarQureshi
    ah nvm, turns out the issue was this over here: https://github.com/open-telemetry/opentelemetry-collector/blob/master/testbed/testbed/receivers.go#L63. Works with 127.0.0.1 instead
    Mats Taraldsvik
    @meastp
    hi - I thought I would start testing hostmetrics as a first OpenTelemetry test, but it seems like exporting metrics to Azure Monitor / Application Insights isn't supported (in -contrib). Is this planned? Is Microsoft planning to officially support an exporter, like many other commercial product owners?
    Azfaar Qureshi
    @AzfaarQureshi
    hey @bogdandrutu open-telemetry/opentelemetry-collector#2291 is ready for another review.
    Rayhan Hossain (Mukla.C)
    @hossain-rayhan
    Hi @bogdandrutu could you have another look on this PR- open-telemetry/opentelemetry-collector#2251
    Tigran Najaryan
    @tigrannajaryan
    @/all I will be on PTO starting from tomorrow until Jan 4. I will attend today's SIG meeting and then will be out for the next 2 meetings. I am not sure what the attendance is going to look like given the holidays, but I will keep the meetings on the calendar in case there are people who want to meet.
    Juraci Paixão Kröhling
    @jpkrohling
    enjoy the PTO!
    Július Marko
    @juldou
    Hello, I'm a bit confused about queued_retry. I bumped opentelemetry-collector to 0.17.0 and, as I had the queued_retry processor specified, it produced WARNs and the agent was unable to perform the retries (I know - it's deprecated). I know I should use the exporter helper, but is it enabled by default? (So would it be enough to just remove queued_retry from the processors list, and the retry functionality would remain with version 0.17.0?)
    6 replies
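    For reference, a minimal sketch of the per-exporter retry/queue settings provided by the exporter helper, which replace the deprecated queued_retry processor (the otlp exporter name, endpoint, and the specific values below are placeholders; check the exporterhelper documentation for your collector version for the actual defaults):

    exporters:
      otlp:
        endpoint: my-backend:4317   # placeholder endpoint
        retry_on_failure:
          enabled: true
          initial_interval: 5s
          max_interval: 30s
          max_elapsed_time: 300s    # retries stop after this; the data is then dropped
        sending_queue:
          enabled: true
          num_consumers: 10
          queue_size: 5000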
    Rayhan Hossain (Mukla.C)
    @hossain-rayhan
    Hi @bogdandrutu, can you have a second look at this "filterprocessor" PR: open-telemetry/opentelemetry-collector#2251
    ChrisSmith
    @ChrisSmith
    Is there a proper way to log errors from a custom processor? Any examples would be great
    Azfaar Qureshi
    @AzfaarQureshi
    @bogdandrutu thanks for merging PR 1/3! I've rebased and pushed PR 2/3 open-telemetry/opentelemetry-collector#2298 :smile:
    Naga
    @tannaga
    Hello, Is there a way to access the http headers in the processors?
    1 reply
    Eric Mustin
    @ericmustin
    Friends, is there a processor that permits span attribute => resource attribute mapping? There are a number of use cases here, but sometimes a non-OTel client will add host info as a span attribute that we'd like to map to the appropriate resource attribute key so that the various exporters can pick it up correctly as the hostname info.
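    One possible approach, sketched under the assumption that the contrib groupbyattrs processor fits this case (the keys field and the exact promotion behaviour should be verified against that processor's README):

    processors:
      groupbyattrs:
        # hypothetical: promote a host attribute set by a non-OTel client to the resource
        keys:
          - host.name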
    Július Marko
    @juldou
    Is it true that when the queued_retry processor retries on failure and the export keeps failing, it will keep retrying in an endless loop until it succeeds? (i.e. there is no upper bound that tells it to stop after it has failed too many times, or after some period of time?)
    1 reply
    Cemalettin Koc
    @cemo
    Is there a way to filter spans by checking their tags?
    1 reply
    Alex Van Boxel
    @alexvanboxel
    Hi, I'm trying to get a PR green on collector-contrib. But I seem to get Lint errors on code that isn't related to my PR. Can someone give me some hints? open-telemetry/opentelemetry-collector-contrib#1892
    7 replies
    Eric Mustin
    @ericmustin

    hi friends, I have a PR that's been approved by a number of folks; it would be great to see this merged before the next release, as it's been 9 days now: open-telemetry/opentelemetry-collector#2253

    And two others on contrib have been approved and sitting for a little while that I'd be keen to see merged before the next release.

    These are all impacting end users, so it would be a great help for adoption. Cheers!

    Maor Goldberg
    @maorgoldberg
    Hey all, I had a question regarding the otel collector exporter. Is it possible to add a logger to the exporter configuration in order to add logs to auto-instrumented spans?
    1 reply
    Tristan Sloughter
    @tsloughter
    Thought this might be interesting once I saw it was from someone at Datadog who worked on the DDSketch implementation: https://richardstartin.github.io/posts/dont-use-protobuf-for-telemetry
    1 reply
    Nikhil Shampur
    @nshampur

    I'm hitting the same issue as @anmldubey

    go generate ./...
    receiver/hostmetricsreceiver/codegen.go:17: running "mdatagen": exec: "mdatagen": executable file not found in $PATH

    make install-tools didn't resolve it for me

    2 replies
    Pavan Krishna
    @pavankrish123
    Hello Friends, I submitted a PR (open-telemetry/opentelemetry-collector-contrib#1887) to collector-contrib a couple of weeks back; it is currently in the approved state. Any estimate on when it can be expected in master? A kind request to include this change in the upcoming release. Please and thank you!
    Daniel Jaglowski
    @djaglowski
    Hi All, in the last Collector SIG, there was an issue raised about how best to handle pipelines where it may be necessary to translate from one signal type to another. I drafted a proposal enhancement to address this type of scenario and would appreciate any thoughts the community may have on it: open-telemetry/opentelemetry-collector#2336
    Albert
    @albertteoh
    Hi all, I submitted this PR about a week ago and would appreciate it if somebody could please take a look at it. It's mostly config, docs, tests, and skeleton code.
    open-telemetry/opentelemetry-collector-contrib#1917
    Eric Mustin
    @ericmustin
    Hello friends, hope everyone had a nice holiday. Just curious when the next version release is planned? This upcoming Tuesday?
    Rayhan Hossain (Mukla.C)
    @hossain-rayhan
    Hi @bogdandrutu would be great if you get time to have a look on this PR- open-telemetry/opentelemetry-collector#2251
    Pablo Baeyens
    @mx-psi
    Hi, we would like to get this PR merged before 0.18.0 is released; it is a small change on a previous PR of mine: open-telemetry/opentelemetry-collector-contrib#1962 It has already been reviewed by someone from Datadog. Could it be merged?
    Kartik Verma
    @vkartik97

    Hi,
    We are using the otel collector with OpenCensus and we are getting this error:

    exporterhelper/queued_retry.go:239        Exporting failed. The error is not retryable. Dropping data.        {"component_kind": "exporter", "component_type": "prometheusremotewrite", "component_name": "prometheusremotewrite", "error": "Permanent error: [Permanent error: nil data point. opencensus.io/http/client/roundtrip_latency is dropped; Permanent error: nil data point. opencensus.io/http/client/received_bytes is dropped; Permanent error: nil data point. grpc.io/server/sent_messages_per_rpc is dropped; Permanent error: nil data point. grpc.io/server/received_bytes_per_rpc is dropped; Permanent error: nil data point. grpc.io/server/sent_bytes_per_rpc is dropped; Permanent error: nil data point. opencensus.io/http/client/sent_bytes is dropped; Permanent error: nil data point. grpc.io/server/received_messages_per_rpc is dropped; Permanent error: nil data point. grpc.io/server/server_latency is dropped; Permanent error: nil data point. grpc.io/server/completed_rpcs is dropped]", "dropped_items": 9}

    (screenshot attached: image.png)

    the config is:

    extensions:
      health_check:
      pprof:
        endpoint: 0.0.0.0:1777
    
    receivers:
        opencensus:
            endpoint: "0.0.0.0:55678"
    
    processors:
        batch:
        memory_limiter:
          ballast_size_mib: 683
          limit_mib: 1500
          spike_limit_mib: 512
          check_interval: 5s
        queued_retry:
    
    exporters:
      logging:
        logLevel: debug
      jaeger:
        endpoint: <jaeger_collector_url>
        insecure: true
      prometheusremotewrite:
        endpoint: <prom_remote_url>
    
    service:
      pipelines:
        traces:
          receivers: [opencensus]
          processors: [memory_limiter, batch, queued_retry]
          exporters: [jaeger]
        metrics:
          receivers: [opencensus]
          processors: [memory_limiter, batch, queued_retry]
          exporters: [prometheusremotewrite]
      extensions: [health_check, pprof]

    Please help

    Eric Mustin
    @ericmustin
    Repost from the Ruby gitter: do we communicate any info about "lang" to the collector? Is there any info available to an exporter on a span's lang?
    1 reply
    Pavan Krishna
    @pavankrish123

    Hello Friends, I submitted a PR (open-telemetry/opentelemetry-collector-contrib#1887) to collector-contrib a couple of weeks back; it is currently in the approved state. Any estimate on when it can be expected in master? A kind request to include this change in the upcoming release. Please and thank you!

    Hello Folks, Happy Monday - any updates on this review request? Please and Thank you.

    Sandeep Raveesh
    @crsandeep
    Hello All. While using the prometheus receiver, I noticed that the labels and metrics auto-generated by Prometheus are dropped by the receiver. Example: the metrics "up" and "scrape_duration_seconds" and the labels "job" and "instance". Is this intentional, or maybe a bug?
    Juraci Paixão Kröhling
    @jpkrohling
    @owais did anything happen to the release notes for 0.18.0 ?
    2 replies
    Maor Goldberg
    @maorgoldberg
    Hey all, what would you say are essential processors in the collector config? Currently I am using batch, memory limiter, probabilistic sampler, and queued retry. Do you see the need to add more?
    2 replies
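    For context, a commonly cited baseline (not an official recommendation) is memory_limiter as early as possible in the pipeline, any sampling after it, and batch towards the end; a sketch, with placeholder receiver/exporter names and values:

    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 1500
      probabilistic_sampler:
        sampling_percentage: 15
      batch:

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, probabilistic_sampler, batch]
          exporters: [logging]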
    Tigran Najaryan
    @tigrannajaryan
    @/all if you are interested in logs please check out the proposal to contribute Stanza to OpenTelemetry with the goal of using it as the log collection library for OpenTelemetry Collector: open-telemetry/community#605
    1 reply
    Eric Mustin
    @ericmustin
    What's, like, the canonical opentelemetry-collector-contrib demo app these days? Or, what's the right docker image to point to? The example in example/tracing feels a bit stale.
    2 replies
    Julian Fell
    @jtfell

    Hi all. I'm trying to export metrics from a NodeJS app to the OpenTelemetry collector and am getting a 501 - Not Implemented error. Is this the current status for this integration, or am I using an outdated version of something? Thanks!

    Versions:

    • Latest docker image of otel/opentelemetry-collector (Published 2 days ago - 3c1ed120c45b)
    • "@opentelemetry/exporter-collector": "0.14.0"

    collector config:

    receivers:
      otlp:
        protocols:
          grpc:
          http:
            cors_allowed_origins:
            - http://*
            - https://*
    
    exporters:
      zipkin:
        endpoint: "http://zipkin:9411/api/v2/spans"
    
    processors:
      batch:
      queued_retry:
    
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [zipkin]
          processors: [batch, queued_retry]
    Traces are working with my current setup btw!
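    One thing worth checking in the config above: it only defines a traces pipeline, so OTLP metrics have no pipeline to land in. Whether that explains the 501 depends on the collector and exporter versions, but a metrics pipeline would look roughly like this (the logging exporter is just a placeholder destination):

    exporters:
      logging:
        loglevel: debug

    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]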
    Joshua MacDonald
    @jmacd

    The @opentelemetry Metrics Workshop happens tomorrow at 9:30 PST at https://zoom.us/j/8203130519

    9:30 - 9:45am PST: Opening remarks, organization of the day, how we got here, and the many streams of work. We have an API, language SDKs, a Protocol, a Collector, Receivers, Exporters, Semantic Conventions, and a connection to Tracing [organizer: OTel Committee]

    9:45-10:15am PST: Community building: @opentelemetry has first-class @OpenMetricsIO support, and @PrometheusIO users are first-class users. These projects are meant to get along, and committed to it. [organizer: Alolita Sharma]

    10:15 - 11:00am PST: @opentelemetry Collector deployment models (e.g., Daemonset vs. Statefulset), Agents, Sidecars, and first-class support for Prometheus Remote-Write [organizer: Jaana Dogan]

    11:30 - 12:15pm The Metrics API and Data Model, how we integrated the @opencensusio feature set (Tracing and Metrics, combined!) with the @OpenMetricsIO and StatsD data models. Presenting the six @opentelemetry metric instruments [organizer: Bogdan Drutu]

    12:15 - 13:00pm Histograms! How we’ll support high-resolution and variable-boundary histograms and the connection to sampling metric events [organizers: Josh MacDonald, Michael Gerstenhaber]

    13:30 - 15:00pm Questions and answers

    1 reply
    PCDiver
    @PCDiver
    What is the best way to install the Otel Collector on Windows as a service? I can't use the MSI package because I have to run two instances of the collector on the same machine (for testing). One as an Agent and one as a collector service. Thanks.
    1 reply
    Albert
    @albertteoh
    Hi all, I would appreciate if this PR could be reviewed: open-telemetry/opentelemetry-collector-contrib#2012. It implements the spanmetrics processor logic.
    Eric Mustin
    @ericmustin
    On lang SDKs, should OTEL_EXPORTER_OTLP_ENDPOINT point to host:port or host:port/v1/traces? ...if I don't include a path, should it append /v1/traces by default?
    3 replies
    Bogdan Drutu
    @bogdandrutu
    For all vendors who have exporters in the contrib repo (or any other repository): the queued_retry processor was deprecated a long time ago. This PR, https://github.com/open-telemetry/opentelemetry-collector/pull/2380/, will remove it, so if you do not have the new queue/retry enabled for your exporter, see an example here of how simple it is to add it: open-telemetry/opentelemetry-collector#2307
    2 replies
    Rashmi Modhwadia
    @rushminatorr
    I am using the collector and have zpages set up with defaults, but I don't know how to access it. /zpages gives a 404; I know it's running but I don't know the endpoint. Does anyone have any tips, please?
    4 replies
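    For reference, the zPages extension serves its pages under /debug/ rather than /zpages; with the defaults, something like http://localhost:55679/debug/tracez should respond (assuming the default endpoint), or the endpoint can be set explicitly:

    extensions:
      zpages:
        endpoint: localhost:55679   # default; pages are served at /debug/tracez, /debug/pipelinez, ...

    service:
      extensions: [zpages]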
    Rashmi Modhwadia
    @rushminatorr

    I am using the collector and have traces and metrics generated; I can see them on the console (I used the log exporter), but I can't see the metrics in Prometheus. I have a basic Prometheus docker container scraping from the 9090 port, and a prometheus exporter set up pointing to the 9090 port. I'm unable to see metrics and not sure if the setup is complete, or how to debug it.
    Would someone be able to point me in the right direction or have any suggestions for me, please?

    otel collector config

      prometheus:
        endpoint: "localhost:19090"
        namespace: "default"
        const_labels:
          env: rush
        send_timestamps: true

    Docker-compose:

     prometheus:
        image: prom/prometheus:latest
        container_name: prometheus
        ports:
          - "19090:9090"
        volumes:
        - ./prometheus/prometheus.yaml:/etc/prometheus/prometheus.yaml

    prometheus_config.yml

    global:
      scrape_interval: 10s
    
    scrape_configs:
      - job_name: 'prometheus'
        scrape_interval: 5s
        static_configs:
          - targets: ['localhost:9090']
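    One thing that stands out in the setup above: the scrape job targets localhost:9090, which from inside the Prometheus container is Prometheus itself, not the collector's prometheus exporter. A hedged sketch of a job pointed at the collector instead (otel-collector is an assumed service name on the same Docker network, and the collector's exporter endpoint would need to listen on 0.0.0.0:19090 rather than localhost to be reachable from another container):

    scrape_configs:
      - job_name: 'otel-collector'
        scrape_interval: 5s
        static_configs:
          - targets: ['otel-collector:19090']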
    McSick
    @McSick
    Anyone have an example config for the k8sprocessor in collector-contrib? Documentation is lacking: https://pkg.go.dev/github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sprocessor
    3 replies
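    For what it's worth, a minimal sketch, assuming the processor is registered as k8s_tagger in contrib builds of that era; the exact type name and fields should be checked against the processor's README, and the receiver/exporter names below are placeholders:

    processors:
      k8s_tagger:
        passthrough: false   # extract k8s metadata in this collector instance rather than just passing the pod IP through

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [k8s_tagger, batch]
          exporters: [logging]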