    ttys3
    @ttys3
    I got this strange error:
    OpenTelemetry trace error occurred. Exporter otlp encountered the following error(s): the grpc server returns error (Unknown error): , detailed error message: transport error
    3 replies
        let otlp_tracer = opentelemetry_otlp::new_pipeline()
            .tracing()
            .with_exporter(
                opentelemetry_otlp::new_exporter()
                    .tonic()
                    .with_endpoint(otlp_grpc_endpoint)
                    .with_protocol(Protocol::Grpc)
                    .with_timeout(Duration::from_secs(3)),
            )
            .install_batch(opentelemetry::runtime::Tokio)
            .expect("Error initializing Otlp exporter");
    
        let telemetry = tracing_opentelemetry::layer()
            .with_tracer(otlp_tracer);
    Peter Hamilton
    @hamiltop

    I've got a handful of bugs/feature requests for opentracing_datadog that I'd be happy to implement and submit a PR for, but could use some guidance.

    1. If the Datadog Propagator is enabled and the incoming headers are not present, the entire span and its children get thrown away as invalid.
    2. The current name vs resource.name solution is pretty painful. We use the different name values in datadog to track metrics across all our services and being able to differentiate between http.request and worker.job and db.query is really nice.
    3. We use service_name a lot, even within a single trace. Separating out our db.query traces under a different service name allows us to use APM to investigate db performance. A given http request will end up having the main service as one service name, and then all the dbs it accesses under individual services names. It all rolls up nicely in the UI.

    Solutions to the problems:

    1. Don't call cx.with_remote_span_context(extracted) unless there is a valid span. This is what opentelemetry-zipkin does and some local testing shows it behaves as expected. However, the tests fail and I'm not quite familiar enough to understand why.
    2. (and 3) I propose magic tags dd.name and dd.service_name that get used for those two fields when present on a span. We already do this with span.type
    26 replies
    Hector Alberto Santos Rodriguez
    @netsirius

    Hi all,

    I am trying to add traceability in my application, but I'm not able to propagate the context through the different execution threads within the application. The application is event-driven, but I can't find a way to trace what happens within a single root span. When an event is received, a new span is created instead of being created as a child of the root span.

    I've tried to inject the context, but I can't extract it. How can I propagate the context in the right way?

    2 replies
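    A common pattern for event-driven apps is to inject the current context into the event at publish time and extract it in the handler, so the handler's span is parented on the root span instead of starting a new trace. Below is a minimal std-only sketch of that round-trip using the W3C `traceparent` format; the struct and function names are illustrative, not the opentelemetry API (in real code you would use `global::get_text_map_propagator` to inject/extract and `set_parent` on the handler's span).

    ```rust
    use std::collections::HashMap;
    use std::sync::mpsc;
    use std::thread;

    // An event that carries its own trace context between threads,
    // the way instrumented apps carry W3C `traceparent` headers.
    struct Event {
        headers: HashMap<String, String>,
        payload: String,
    }

    // Inject: serialize the current trace/span ids into the carrier.
    fn inject(trace_id: u128, span_id: u64, headers: &mut HashMap<String, String>) {
        headers.insert(
            "traceparent".to_string(),
            format!("00-{:032x}-{:016x}-01", trace_id, span_id),
        );
    }

    // Extract: recover (trace_id, parent_span_id) on the consumer side.
    fn extract(headers: &HashMap<String, String>) -> Option<(u128, u64)> {
        let tp = headers.get("traceparent")?;
        let mut parts = tp.split('-');
        let _version = parts.next()?;
        let trace_id = u128::from_str_radix(parts.next()?, 16).ok()?;
        let span_id = u64::from_str_radix(parts.next()?, 16).ok()?;
        Some((trace_id, span_id))
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Event>();

        // Producer: the "root span" injects its context into the event.
        let mut headers = HashMap::new();
        inject(0xdeadbeef, 0x1234, &mut headers);
        tx.send(Event { headers, payload: "user.created".into() }).unwrap();
        drop(tx);

        // Consumer thread: extract, then parent the new span on the
        // extracted context (in real code: span.set_parent(extracted_ctx)).
        let handle = thread::spawn(move || {
            let event = rx.recv().unwrap();
            let (trace_id, parent) = extract(&event.headers).expect("no traceparent");
            (trace_id, parent, event.payload)
        });

        let (trace_id, parent, payload) = handle.join().unwrap();
        assert_eq!(trace_id, 0xdeadbeef);
        assert_eq!(parent, 0x1234);
        println!("handling {payload} under trace {trace_id:x}, parent span {parent:x}");
    }
    ```

    The key point is that the context travels *inside the event*, not in thread-local storage, so it survives the hop across threads.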
    Vibhav Pant
    @vibhavp
    Hello, I have been working on a logging SDK for opentelemetry Rust, with support for exporting to OTLP: https://github.com/vibhavp/opentelemetry-rust/tree/main. As the logging SDK is not meant for direct use by developers, but by logging libraries, I have also created adaptors for the slog and log crates at https://github.com/vibhavp/slog-opentelemetry and https://github.com/vibhavp/log-rs-opentelemetry respectively.
    The logging SDK is written against the main branch, so it doesn't support any released versions of opentelemetry at the moment.
    1 reply
    Itamar Turner-Trauring
    @itamarst
    does anyone have a working example of opentelemetry-otlp talking to a third-party service directly, without a collector? it seems theoretically possible but in practice I can't get it to work
    3 replies
    Keval
    @Bhogayata-Keval
    use serde::{Deserialize, Serialize};
    use std::time::Duration;
    use kafka::producer::{Producer, Record, RequiredAcks};
    use opentelemetry::{global, KeyValue};
    
    #[tokio::main]
    async fn main() {
        let mut producer =
            Producer::from_hosts(vec!("localhost:29092".to_owned()))
                .with_ack_timeout(Duration::from_secs(1))
                .with_required_acks(RequiredAcks::One)
                .create()
                .unwrap();
    
        let meter = global::meter("test");
        let counter = meter.u64_counter("my_counter").init();
        counter.add(1, &[KeyValue::new("cpu", "80")]);
        counter.add(1, &[KeyValue::new("cpu", "90")]);
    
      /* What should be my next steps to export my counter to Kafka */
    }

    I want to send some of my Kubernetes resource metrics to Kafka.
    As of now, I am just trying to send some dummy data to Kafka.
    I have created a simple counter using opentelemetry; how can I send this to Kafka?
    Am I supposed to write a Kafka exporter that fulfils the Exporter trait? If yes, is there any reference available?

    There are really very few examples for Rust

    7 replies
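    One route is indeed a custom exporter implementing the SDK's metrics `Exporter` trait, which serializes each checkpointed record and hands it to the Kafka producer. The sketch below covers only the serialization half, in std-only Rust: the `MetricRecord` struct and field names are illustrative stand-ins, not the SDK's types, and a `Vec` stands in for the producer.

    ```rust
    // Illustrative shapes only: a real exporter would implement the SDK's
    // metrics Exporter trait and iterate the checkpoint set. Here one record
    // is hand-rolled and "produced" into a Vec standing in for Kafka.
    struct MetricRecord {
        name: String,
        value: u64,
        attributes: Vec<(String, String)>,
    }

    // Serialize one record as the JSON payload of a Kafka message.
    fn to_kafka_payload(rec: &MetricRecord) -> String {
        let attrs: Vec<String> = rec
            .attributes
            .iter()
            .map(|(k, v)| format!("\"{}\":\"{}\"", k, v))
            .collect();
        format!(
            "{{\"name\":\"{}\",\"value\":{},\"attributes\":{{{}}}}}",
            rec.name,
            rec.value,
            attrs.join(",")
        )
    }

    fn main() {
        let rec = MetricRecord {
            name: "my_counter".into(),
            value: 2,
            attributes: vec![("cpu".into(), "80".into())],
        };
        // In real code: producer.send(&Record::from_value("metrics", payload))
        let mut topic: Vec<String> = Vec::new();
        topic.push(to_kafka_payload(&rec));

        assert_eq!(
            topic[0],
            "{\"name\":\"my_counter\",\"value\":2,\"attributes\":{\"cpu\":\"80\"}}"
        );
        println!("{}", topic[0]);
    }
    ```

    In a real exporter the serialization would run inside the trait's `export` method, once per record in the checkpoint set, with the producer held as a field on the exporter struct.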
    Keval
    @Bhogayata-Keval

    I was looking into Dynatrace's example, where it exports metrics using opentelemetry
    https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-dynatrace/src/metric.rs

    fn export(&self, checkpoint_set: &mut dyn CheckpointSet) -> Result<()> {
        // I am able to put console logs up to here

        checkpoint_set.try_for_each(self.export_kind_selector.as_ref(), &mut |record| {
            // ..................
            // not able to put console logs inside this block
        })
    }

    the try_for_each method seems to run in a separate thread. In order to debug, how can I put console logs inside that particular block?!

    5 replies
    Arkan M. Gerges
    @arkanmgerges
    Hi, I'm using opentelemetry-jaeger. The code is working, but how can I get the parent context and use it in the spawned thread?
    This is part of the code; it's getting a request via gRPC and using a thread to produce a message to Redpanda (a Kafka replacement)
    #[tracing::instrument]
    pub fn shave_all(number_of_yaks: i32) -> i32 {
        for yak_index in 0..number_of_yaks {
            info!(current_yak=yak_index+1, "Shaving in progress");
        }
    
        number_of_yaks
    }
    
    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        global::set_text_map_propagator(opentelemetry_jaeger::Propagator::new());
        let tracer = init_tracer().expect("Failed to initialize tracer"); //calling our new init_tracer function
        tracing_subscriber::registry() //(1)
            .with(tracing_opentelemetry::layer().with_tracer(tracer)) //(3)
            .try_init()
            .expect("Failed to register tracer with registry");
    
        shave_all(3);
    
        let (producer_tx, mut producer_rx) : (mpsc::Sender<MessageChannel>, mpsc::Receiver<MessageChannel>) = mpsc::channel(32);
    
        tokio::spawn(async move {
            let my_future = async {
                let simple_producer = SimpleProducer::new();
                while let Some(message_channel) = producer_rx.recv().await {
                    message_channel.response_channel_tx.send(
                        simple_producer.produce(MessageToProduce { topic_name: message_channel.msg.topic_name,
                            payload: message_channel.msg.payload,
                            message_key: "".to_string() }, message_channel.schema_builder).await
                    ).unwrap_or(());
                }
            };
            //
            my_future.instrument(tracing::info_span!("producer")).await;
    
        });
    
        let addr = "0.0.0.0:50051".parse()?;
        Server::builder()
            .add_service(UserServiceServer::new(MyServer{ producer_tx }))
            .serve(addr)
            .await?;
    
        opentelemetry::global::shutdown_tracer_provider();
        Ok(())
    }
    The produce span should be part of the create_user span; I need a way to make it work
    Arkan M. Gerges
    @arkanmgerges
    I made a change to the async closure to get the context inside the passed struct
        tokio::spawn(async move {
            let simple_producer = SimpleProducer::new();
            while let Some(message_channel) = producer_rx.recv().await {
                let ctx = message_channel.ctx.clone();
                let span = span!(Level::TRACE, "inside_spawned_thread");
                span.set_parent(ctx);
                let _enter = span.enter();
    
                message_channel.response_channel_tx.send(
                    simple_producer.produce(MessageToProduce { topic_name: message_channel.msg.topic_name,
                        payload: message_channel.msg.payload,
                        message_key: "".to_string() }, message_channel.schema_builder).await
                ).unwrap_or(());
            }
        });
    but this creates an extra span, inside_spawned_thread, in between, as the parent of produce
    maybe there is another way to do it
    Arkan M. Gerges
    @arkanmgerges
    I've solved the issue by passing the parent Span in the struct sent to the thread
        tokio::spawn(async move {
            let simple_producer = SimpleProducer::new();
            while let Some(message_channel) = producer_rx.recv().await {
                let span = message_channel.span.clone();
                let _enter = span.enter();
    
                message_channel.response_channel_tx.send(
                    simple_producer.produce(MessageToProduce { topic_name: message_channel.msg.topic_name,
                        payload: message_channel.msg.payload,
                        message_key: "".to_string() }, message_channel.schema_builder).await
                ).unwrap_or(());
            }
        });
    Keval
    @Bhogayata-Keval
    https://opentelemetry.io/docs/reference/specification/metrics/api/#counter-creation
    This document includes examples in C# & Python, where we can create counters that can support custom structs.
    How can I do the same in Rust?
    Plus, what are the pros and cons of this? Is it a good idea to use a custom struct with counters or value recorders?
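    In Rust the counter instruments are scalar (`u64_counter` / `f64_counter`), so a custom struct is usually flattened into either several instruments (one per field) or one instrument with attributes. A std-only sketch of the per-field flattening, with `Counters` as an illustrative stand-in for real `meter.u64_counter(...)` instruments:

    ```rust
    // A domain struct the application wants to "count".
    struct RequestStats {
        requests: u64,
        errors: u64,
    }

    // Stand-in for two scalar counters; in real code these would be
    // instruments created via `meter.u64_counter("...").init()`.
    #[derive(Default)]
    struct Counters {
        requests_total: u64,
        errors_total: u64,
    }

    impl Counters {
        // Flatten the struct into scalar adds, one per field.
        fn add(&mut self, stats: &RequestStats) {
            self.requests_total += stats.requests;
            self.errors_total += stats.errors;
        }
    }

    fn main() {
        let mut counters = Counters::default();
        counters.add(&RequestStats { requests: 10, errors: 1 });
        counters.add(&RequestStats { requests: 5, errors: 0 });
        assert_eq!(counters.requests_total, 15);
        assert_eq!(counters.errors_total, 1);
        println!("requests={} errors={}", counters.requests_total, counters.errors_total);
    }
    ```

    The trade-off: separate instruments keep each series cheap to aggregate, while a single instrument with attributes keeps related fields queryable together but multiplies the number of series per attribute set.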
    François Massot
    @fmassot
    Hi there, I'm already using opentelemetry-rust for tracing and it works well for my server.
    But now I'm wondering how to collect logs from my server (and send them to a centralized logging system). Do you have any advice about how to set that up?
    1 reply
    And is there a plan to implement the logging part in opentelemetry-rust?
    2 replies
    Keval
    @Bhogayata-Keval

    Hi there, I am trying to upload a few metrics from my Kubernetes cluster to my collector via opentelemetry (every 5 seconds).
    So, my code is running in an infinite loop. When I checked for memory leaks using Valgrind, it says that the memory leak is at 3 points.

    Point 1. on my init_meter function:

    pub fn init_meter() -> metrics::Result<PushController> {
        let export_config = ExportConfig {
            endpoint: "http://localhost:<PORT>".to_string(),
            protocol: Protocol::Grpc,
            ..ExportConfig::default()
        };
        opentelemetry_otlp::new_pipeline()
            .metrics(tokio::spawn, delayed_interval)
            .with_exporter(
                opentelemetry_otlp::new_exporter()
                    .tonic()
                    .with_export_config(export_config),
            )
            .with_aggregator_selector(selectors::simple::Selector::Exact)
            .build()
    }

    Point 2. When I record values using opentelemetry's record API

    Point 3. Moreover, If I pass any KeyValue attributes inside record call - then the RSS memory value rapidly grows over time !!

    I am wondering if the root cause is in my code only, or is it some issue with the SDK functions.
    Are there any methods to release the memory occupied by the new_pipeline method or the record method?
    How should I resolve this?
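    On point 3, rapid RSS growth when attributes are passed per record is often unbounded attribute cardinality rather than a leak: the SDK keeps one aggregator alive per distinct attribute set, so raw values like changing percentages create series forever. A std-only sketch of capping cardinality before recording (the `CardinalityCap` type and `OVERFLOW` bucket are illustrative, not an SDK feature):

    ```rust
    use std::collections::HashMap;

    // Each distinct attribute set creates (and retains) a new aggregator in
    // the metrics SDK, so unbounded values can grow memory over time. This
    // caps how many distinct values are recorded before bucketing the rest.
    struct CardinalityCap {
        seen: HashMap<String, u64>,
        limit: usize,
    }

    impl CardinalityCap {
        fn new(limit: usize) -> Self {
            Self { seen: HashMap::new(), limit }
        }

        // Returns the attribute value to record under: the original if it is
        // within the cap, otherwise a fixed overflow bucket.
        fn bucket(&mut self, attr_value: &str) -> String {
            if !self.seen.contains_key(attr_value) && self.seen.len() >= self.limit {
                return "OVERFLOW".to_string();
            }
            *self.seen.entry(attr_value.to_string()).or_insert(0) += 1;
            attr_value.to_string()
        }
    }

    fn main() {
        let mut cap = CardinalityCap::new(2);
        assert_eq!(cap.bucket("cpu=80"), "cpu=80");
        assert_eq!(cap.bucket("cpu=90"), "cpu=90");
        // A third distinct value would grow memory forever; bucket it instead.
        assert_eq!(cap.bucket("cpu=91"), "OVERFLOW");
        assert_eq!(cap.bucket("cpu=80"), "cpu=80"); // existing values still pass
        println!("distinct attribute sets retained: {}", cap.seen.len());
    }
    ```

    If the attribute is a continuous value like CPU percentage, recording it as a histogram value rather than as an attribute avoids the problem entirely.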

    Or Ricon
    @rikonor_twitter
    Hello, I'm following this issue for customizing histogram buckets when using opentelemetry-prometheus (open-telemetry/opentelemetry-rust#673). Is it simply not possible to have anything but the default buckets at the moment?
    3 replies
    zz
    @zzhengzhuo
    Hi, is there any way to collect panic info?
    Thanks
    zz
    @zzhen:matrix.org
    [m]
    Fixed it. I should set the hook first. 😂
    Anders Daljord Morken
    @amorken

    Hi!

    I have a concern regarding the length of attributes in spans and events. The tracing macros and annotations make it awfully easy to add overly large attribute values to spans and events, as it's very convenient to pull in the output of Debug::fmt on the field value. This is a bit of a footgun, as I see that we sometimes end up with enormous spans being sent (or attempted) over the wire, even being rejected by the gRPC APIs for exceeding the message size limit.

    I've done a bit of very casual hackery to see if there was a sanitary way of enforcing an attribute value size limit, and I guess the best place to do it would be the (Batch)SpanProcessor. The span processor doesn't really do any introspection and modification of the span messages it has received, though, so this would be a departure from the current practice. The alternative is to enforce it during Span::set_attribute(), I guess, but that doesn't go entirely well with the KeyValue type only transforming the value to a string during serialization time.

    4 replies
    Thoughts?
    Also, it'd be nice to be able to leave some trace (no pun intended...) of the truncation somewhere, but I'm not sure if there's an appropriate place for that either.
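    The enforcement half of such a limit is small regardless of where it ends up living (processor vs. set_attribute time). A std-only sketch, with the function name illustrative; returning a flag lets the caller tag the span so the truncation does leave a trace:

    ```rust
    // Truncate an attribute value to `max` characters, reporting whether
    // truncation happened so the span can be tagged (e.g. an extra
    // attribute like "otel.truncated=true"). Operating on chars rather
    // than bytes keeps the cut on a valid UTF-8 boundary.
    fn truncate_attr(value: &str, max: usize) -> (String, bool) {
        if value.chars().count() <= max {
            return (value.to_string(), false);
        }
        let truncated: String = value.chars().take(max).collect();
        (truncated, true)
    }

    fn main() {
        // An oversized Debug::fmt dump gets cut down to the limit.
        let big_debug_output = "x".repeat(10_000);
        let (kept, was_truncated) = truncate_attr(&big_debug_output, 256);
        assert!(was_truncated);
        assert_eq!(kept.chars().count(), 256);

        // Small values pass through untouched.
        let small = "short value";
        let (kept, was_truncated) = truncate_attr(small, 256);
        assert!(!was_truncated);
        assert_eq!(kept, small);

        println!("kept {} chars, truncated = {}", 256, true);
    }
    ```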
    Anders Daljord Morken
    @amorken
    @TommyCpp Thanks, yeah, I have looked at this and the provided defaults are fine, I think. It's the individual bloated values that are my problem. Naturally I could work on bounding these myself, but it would be nice with a safety net to maintain the functioning of the telemetry system.
    4 replies
    Keval
    @Bhogayata-Keval
    Is there a way to collect data from prometheus endpoint using otel-collector rust ?
    I checked out this particular crate : https://docs.rs/opentelemetry-prometheus/0.10.0/opentelemetry_prometheus/
    but it seems to contain only the exporter functions
    2 replies
    Meenal Mathur
    @meenal-developer

    Is there any similar package in Rust for receiving OTLP data?

    https://pkg.go.dev/go.opentelemetry.io/collector@v0.51.0/receiver/otlpreceiver

    I have exported the traces using OTLP; now I want to collect these traces in Rust, so can you please help me with some example?

    3 replies
    Carter Socha
    @cartersocha
    Hey @GaryPWhite are you still interested in working on the rust service for the community demo? For whatever reason I can't @ you on Github
    4 replies
    ControlCplusControlV
    @ControlCplusControlV
    Hey does anyone have a moment to explain how I could test that my metric tracking is working properly? I am trying to write an integration test for my /metrics endpoint but every time I query it with reqwest I get back a blank body
    ControlCplusControlV
    @ControlCplusControlV

    I am using opentelemetry but running into scoping issues. I define a counter as shown below and increment it, then test to see if it appears on my /metrics endpoint

        let meter = global::meter("service");
        let counter = meter.u64_counter("counter").init();
        counter.add(1, &[]);

    however it stops working depending on where it's defined, although it should be global

        // if I define a counter here
        // the endpoint /metrics won't report it 
       let handle = tokio::spawn(async move {
            let exporter = match opentelemetry_prometheus::exporter().try_init() {
                Ok(exporter) => exporter,
                Err(err) => {
                    return Err(anyhow!(
                        "Failed to creat prometheus serve metrics {:?}",
                        err
                    ))
                }
            };
            // ...
            // For some reason I am only able to see counters reported on /metrics when they are defined within this scope
    
        });
        // if I define a counter here neither will /metrics report it
    3 replies
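    One likely cause (an assumption, since the full program isn't shown): a counter created from `global::meter` *before* `opentelemetry_prometheus::exporter().try_init()` runs inside the spawned task is bound to the default no-op provider, so its adds go nowhere, while counters created after installation report fine. A std-only sketch of that ordering pitfall, with a `OnceLock` standing in for the global meter provider:

    ```rust
    use std::sync::{Mutex, OnceLock};

    // Stand-in for the global meter provider: until it is installed,
    // recordings are silently dropped, mirroring the no-op meter you get
    // from global::meter() before the Prometheus exporter is initialized.
    static PROVIDER: OnceLock<Mutex<Vec<(String, u64)>>> = OnceLock::new();

    fn counter_add(name: &str, value: u64) -> bool {
        match PROVIDER.get() {
            Some(p) => {
                p.lock().unwrap().push((name.to_string(), value));
                true
            }
            None => false, // no provider yet: the add is dropped
        }
    }

    fn main() {
        // Added before the "exporter" is installed: lost.
        assert!(!counter_add("early_counter", 1));

        // Install the provider (the real code does this in try_init()).
        PROVIDER.set(Mutex::new(Vec::new())).unwrap();

        // Added after installation: would be visible on /metrics.
        assert!(counter_add("late_counter", 1));
        let recorded = PROVIDER.get().unwrap().lock().unwrap();
        assert_eq!(recorded.len(), 1);
        println!("exported instruments: {}", recorded.len());
    }
    ```

    If this is the cause, initializing the exporter before any `global::meter(...)` call (rather than inside the spawned task) should make the counters appear regardless of where they are defined.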
    Anthony Ha
    @Awfa
    For opentelemetry metrics, I'm looking at exporting metrics via otlp. Is there a way for an application to filter metrics by meter name and instrument name for export?
    avr2j
    @varadarajana
    Hi! I am trying to use the opentelemetry tracer's is_recording attribute to select whether a current span can be prevented from being recorded. I am also using Rust tokio's tracing crate's instrumentation macro (#[instrument]). The instrumentation macro does not have an is_recording method, so is there a way to use an opentelemetry instrumentation macro?
    dallin-defimono
    @dallin-defimono
    Good afternoon all, I can't seem to find any way to integrate the opentelemetry-datadog crate with actix-web. Any tips?
    1 reply
    Nicholas Wehr
    @wwwehr
    hey gang - I want to send my telemetry traces directly to managed Elasticsearch (OpenSearch) on AWS. I don't see how to do this without a collector. Does that sound right to you? Thanks!
    avr2j
    @varadarajana
    @wwwehr I use Jaeger and I am able to send to Jaeger directly as well as through a collector. I have not worked with AWS ES telemetry options, but I think it should be possible
    Mariusz
    @soy_dev:matrix.org
    [m]
    hello everyone! a newbie user of opentelemetry here; I configured tracing and it works well, however from time to time I get some raw stderr output, like OpenTelemetry trace error occurred. [...]. is there a way of integrating these logs into the tracing ecosystem, so that Grafana doesn't complain about parsing them?
    3 replies
    restioson
    @restioson:breadpirates.chat
    [m]

    hi, we're experiencing a lot of

    OpenTelemetry trace error occurred. cannot send span to the batch span processor because the channel is full

    logs whenever we get ratelimited by Grafana Agent - is there a way to, in response to ratelimit, drop some spans? or is the best way around this to stop logging channel is full in set_error_handler?
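    `set_error_handler` can at least quiet the flood without hiding the signal entirely: suppress repeats of the channel-full error but let every Nth one through. A std-only sketch of that filtering logic (the handler signature here is illustrative; the real handler receives the SDK's error type, not a `&str`):

    ```rust
    use std::sync::atomic::{AtomicU64, Ordering};

    // Count-and-suppress: log the first channel-full error, then only every
    // Nth, so a ratelimited exporter doesn't flood stderr with identical lines.
    static DROPPED: AtomicU64 = AtomicU64::new(0);

    // Returns true if this error should actually be logged.
    fn handle_error(msg: &str, log_every: u64) -> bool {
        if msg.contains("channel is full") {
            let n = DROPPED.fetch_add(1, Ordering::Relaxed);
            return n % log_every == 0; // log the 1st, then every `log_every`th
        }
        true // everything else logs normally
    }

    fn main() {
        let mut logged = 0;
        for _ in 0..100 {
            if handle_error(
                "cannot send span to the batch span processor because the channel is full",
                50,
            ) {
                logged += 1;
            }
        }
        assert_eq!(logged, 2); // occurrences 0 and 50
        assert!(handle_error("some other exporter error", 50));
        println!("logged {logged} of 100 channel-full errors");
    }
    ```

    Note the batch processor is already dropping those spans when the channel is full; this only controls how noisily it tells you about it.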

    restioson
    @restioson:breadpirates.chat
    [m]
    would setting the timeout to the collector help? what would the behaviour be when timeout is reached?
    Or Ricon
    @rikonor
    Does anyone know how to define a metrics namespace when using the opentelemetry_prometheus crate? E.g if you look at the official example the resulting metrics do not have a namespace even though there's a meter with a my-app name defined. See https://docs.rs/opentelemetry-prometheus/latest/opentelemetry_prometheus/#prometheus-exporter-example
    Keval
    @Bhogayata-Keval
    Can I change resource level attributes in an existing OTLPMetricsPipeline ?
    Keval
    @Bhogayata-Keval

    Using this particular proto file
    https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto
    I have created a grpc server in Rust and implemented the export method like this :

    impl MetricsService for MyMetrics {
        async fn export(
            &self,
            request: Request<ExportMetricsServiceRequest>,
        ) -> Result<Response<ExportMetricsServiceResponse>, Status> {
            println!("Got a request from {:?}", request.remote_addr());
            println!("request data ==> {:?}", request);
    
    
            let reply = metrics::ExportMetricsServiceResponse {};
            Ok(Response::new(reply))
        }
    }

    To test this code,
    1) I created a grpc client in node.js with the same proto file and called the export method - which worked as expected.


    2) Then, I used otlpmetricsexporter in node.js (instead of making an explicit call to the export method); in this case, I am not receiving the request on the Rust grpc server.

    Getting this error :
    {"stack":"Error: 12 UNIMPLEMENTED: \n at Object.callErrorFromStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/@grpc/grpc-js/build/src/call.js:31:26)\n at Object.onReceiveStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/@grpc/grpc-js/build/src/client.js:189:52)\n at Object.onReceiveStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)\n at Object.onReceiveStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)\n at /home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78\n at processTicksAndRejections (internal/process/task_queues.js:75:11)","message":"12 UNIMPLEMENTED: ","code":"12","metadata":"[object Object]","name":"Error"}

    My Rust Grpc server is running @ [::1]:50057
    so, I used OTEL_EXPORTER_OTLP_ENDPOINT=[::1]:50057 env while running my node.js exporter

    What could have gone wrong ?!

    Keval
    @Bhogayata-Keval
    This is the git repo of my rust code : https://github.com/Bhogayata-Keval/rust-grpc-demo
    can this act as an OTLP receiver for metrics ?!
    Spencer Gilbert
    @spencergilbert
    Hey, I'm curious if opentelemetry-rust/opentelemetry-proto could be published?
    Velichko Anton
    @tonyvelichko
    I have the same question regarding the opentelemetry-jaeger crate.