    Julian Tescher
    @jtescher
    // Imports needed to compile this snippet (opentelemetry 0.12-era API).
    use criterion::Criterion;
    use opentelemetry::sdk::trace as sdktrace;
    use opentelemetry::trace::{Tracer, TracerProvider as _};

    #[derive(Debug)]
    pub struct NoopExporter;
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for NoopExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            Ok(())
        }
    }
    
    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            let provider = sdktrace::TracerProvider::builder()
                .with_simple_exporter(NoopExporter)
                .build();
            let tracer = provider.get_tracer("always-sample", None);
            b.iter(|| {
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }
    
                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }
    time: [98.073 us 98.287 us 98.521 us]
    with async exporter
    #[derive(Debug)]
    pub struct NoopExporter;
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for NoopExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            Ok(())
        }
    }
    
    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            let rt = tokio::runtime::Runtime::new().unwrap();
            let _g = rt.enter();
            let provider = sdktrace::TracerProvider::builder()
                .with_exporter(NoopExporter)
                .build();
            let tracer = provider.get_tracer("always-sample", None);
            b.to_async(&rt).iter(|| async {
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }
    
                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }
    many_children           time:   [202.30 us 203.93 us 205.56 us]
                            change: [+102.42% +104.51% +106.59%] (p = 0.00 < 0.05)
    but would have to look at throughput vs latency here, especially once the exporter isn't a no-op
    anyway, some potentially interesting improvements to make there
    Zhongyang Wu
    @TommyCpp
    :+1: I think the advantage of the batch processor is that it sends fewer requests to export spans. Usually exporting spans takes much more time. So we'd probably need to add some delay in NoopExporter to reflect the real situation.
    But it does look like we have some improvement to be made here
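    For reference, a minimal sketch of that idea: the same NoopExporter shape as above, with a hypothetical fixed 5 ms pause standing in for real export latency (assuming tokio 1.x), so the batch processor's "fewer, larger requests" advantage shows up in the numbers.

    #[derive(Debug)]
    pub struct SlowNoopExporter;

    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for SlowNoopExporter {
        async fn export(
            &mut self,
            batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            // Pretend every export request costs a fixed amount of time regardless of
            // batch size; a real exporter would serialize and send `batch` over the network.
            let _ = batch.len();
            tokio::time::sleep(std::time::Duration::from_millis(5)).await;
            Ok(())
        }
    }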
    Noel Campbell
    @nlcamp
    Does anyone know why tokio ^1.0 is listed as "optional" for opentelemetry-otlp v0.5.0 on crates.io? Is it because it's only needed for metrics but not for tracing?
    7 replies
    Julian Tescher
    @jtescher
    @nlcamp it supports async-std as well
    Noel Campbell
    @nlcamp

    > @nlcamp it supports async-std as well

    :thumbsup:

    Michael Duane Mooring
    @mikeumus

    Hi :telescope:, for opentelemetry_prometheus with this init():

    let exporter = opentelemetry_prometheus::exporter()
        .with_resource(Resource::new(vec![KeyValue::new("R", "V")]))
        .init();

    Where do the metrics go after calling something like meter.record_batch_with_context()?
    I don't see them showing up in the /metrics route.

    Julian Tescher
    @jtescher
    @mikeumus resources will appear on all metrics
    20 replies
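    For reference on the /metrics question above: the Prometheus exporter only maintains a registry; serving it over HTTP is left to the application. A rough sketch, assuming the exporter value returned by the init() call above and the prometheus crate's text encoder:

    use prometheus::{Encoder, TextEncoder};

    // `exporter` is the opentelemetry_prometheus::PrometheusExporter returned by init().
    fn render_metrics(exporter: &opentelemetry_prometheus::PrometheusExporter) -> String {
        let metric_families = exporter.registry().gather();
        let mut buf = Vec::new();
        TextEncoder::new()
            .encode(&metric_families, &mut buf)
            .expect("encoding prometheus metrics");
        String::from_utf8(buf).expect("prometheus text format is utf8")
    }

    // Wire render_metrics() into whatever handler serves your /metrics route; recorded
    // values only show up there once the route actually exposes this registry.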
    oh_lawd
    @oh_lawd:nauk.io
    [m]

    Hey. I'm sorry if I'm asking obvious things, but I'm trying to understand how I could share the same trace between two different services. I wrote an example (https://github.com/pimeys/opentelemetry-test/blob/main/src/main.rs) that has two services: client and server. The server is running and the client requests the server with the trace and span ids. I'd expect to see one trace in Jaeger with the client and server spans, but I instead get two: one for the client handle and another with <trace-without-root-span>.

    I've been going through the documentation now for a while, but I haven't found a way to do what I want with opentracing.

    oh_lawd
    @oh_lawd:nauk.io
    [m]
    thanks for rubber-ducking, y'all :) I understood propagators and my example works now
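    For anyone finding this later, a minimal sketch of that propagator flow, assuming the 0.12-era API, with a HashMap standing in for HTTP headers or whatever carrier the protocol uses:

    use std::collections::HashMap;

    use opentelemetry::sdk::propagation::TraceContextPropagator;
    use opentelemetry::trace::{TraceContextExt, Tracer};
    use opentelemetry::{global, Context};

    fn main() {
        // Both services must agree on a propagator, e.g. W3C trace context.
        global::set_text_map_propagator(TraceContextPropagator::new());

        // "Client": start a span and inject its context into the carrier.
        let span = global::tracer("client").start("client-request");
        let cx = Context::current_with_span(span);
        let mut carrier: HashMap<String, String> = HashMap::new();
        global::get_text_map_propagator(|prop| prop.inject_context(&cx, &mut carrier));

        // "Server": extract the parent context and start the child span inside it,
        // so both spans end up in the same trace.
        let parent_cx = global::get_text_map_propagator(|prop| prop.extract(&carrier));
        let _server_span = global::tracer("server").start_with_context("handle-request", parent_cx);
    }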
    Folyd
    @Folyd
    How can we write opentelemetry data as JSON format into the local file instead of export to the collector?
    1 reply
    Andrey Snow
    @andoriyu

    @jtescher so here is my setup: Datadog agent version 7.24.0, opentelemetry = "0.12", opentelemetry-contrib = { version = "0.4.0", features = ["datadog", "reqwest-client"] }

    I have a warp server running with every request traced via warp's middleware and some async fns #[instrument]ed in the handlers. Without the datadog exporter everything works fine. With the datadog exporter, the first span (the one from warp itself) gets sent to the agent fine. However, spans that are within the request's handler don't get sent. Looking at the logs I can tell that at least one span closes and the request never progresses beyond that point.

    https://gist.github.com/andoriyu/b937d6608591311293e8e877e10e8e0c here are the logs. Hard to tell which reqwest calls are from my handler and which are from the exporter.
    I just tried the blocking reqwest feature - it works.
    Andrey Snow
    @andoriyu
    so it seems like somewhere in the non-blocking version something isn't getting polled...
    Zhongyang Wu
    @TommyCpp
    Could you share an example?
    Andrey Snow
    @andoriyu
    @TommyCpp I don't really have a minimal project to test it right now. I will try to make one sometime this week.
    1 reply
    Kennet Postigo
    @kennetpostigo
    Hi, I'm totally new to OpenTelemetry and metric/tracing collection. I'm currently wondering: after collecting a trace, how do I export it? Is it supposed to be sent and stored in a DB for later access? I have an analytics dashboard UI that I've been working on that collects website analytics and also client-side errors. I want to add the ability for the dashboard UI to show metrics and tracing data from OpenTelemetry. Can anyone point me in the right direction for how to export trace/metric data?
    9 replies
    Dawid Nowak
    @dawid-nowak

    hey all; I am trying to run a very simple example like the one below and getting a stack trace with tonic panicking: thread 'tokio-runtime-worker' panicked at 'expected scheme', /home/dawid/.cargo/registry/src/github.com-1ecc6299db9ec823/tonic-0.4.0/src/transport/service/add_origin.rs:38:42.

    Any ideas where I am going wrong?

    use opentelemetry::trace;
    use opentelemetry::trace::Tracer;
    use opentelemetry_otlp::Protocol;

    #[tokio::main(worker_threads = 8)]
    async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
        let (tracer, _uninstall) = opentelemetry_otlp::new_pipeline()
            .with_endpoint("localhost:4317")
            .with_protocol(Protocol::Grpc)
            .install()?;

        loop {
            tracer.in_span("doing_work", |_cx| {
                println!("Doing work");
                std::thread::sleep(std::time::Duration::from_secs(1)) // Traced app logic here...
            });
        }

        Ok(())
    }
    Zhongyang Wu
    @TommyCpp
    Could you try adding http:// in front of your endpoint?
    Dawid Nowak
    @dawid-nowak
    cool, that worked, indeed adding grpc:// also worked
    and even blah://localhost:4317 :)
    Zhongyang Wu
    @TommyCpp
    Yeah, basically you need to give tonic a protocol
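    For reference, that means the snippet above only needs its endpoint to carry a scheme, e.g.:

    let (tracer, _uninstall) = opentelemetry_otlp::new_pipeline()
        .with_endpoint("http://localhost:4317") // tonic needs a scheme on the endpoint
        .with_protocol(Protocol::Grpc)
        .install()?;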
    Dawid Nowak
    @dawid-nowak
    right, great, this is solved, thanks. I think the opentelemetry-otlp readme and most examples use "localhost:4317" without the protocol
    Dawid Nowak
    @dawid-nowak
    one more question: does otlp support service name? It seems '.with_service_name("servicename")' is exposed for the Jaeger pipeline config but not for otlp
    Zhongyang Wu
    @TommyCpp
    There isn't a concept called service name in otlp. The convention is to use the service.name resource to store the service name. If the backend recognizes it, it can use it as the service name.
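    For reference, a sketch of setting that resource on the OTLP pipeline, assuming the 0.12/0.13-era builder exposes with_trace_config (adjust to whatever your version provides):

    use opentelemetry::{sdk, KeyValue};

    let (tracer, _uninstall) = opentelemetry_otlp::new_pipeline()
        .with_endpoint("http://localhost:4317")
        .with_trace_config(sdk::trace::config().with_resource(sdk::Resource::new(vec![
            KeyValue::new("service.name", "my-service"),
        ])))
        .install()?;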
    Dawid Nowak
    @dawid-nowak
    now we are flying :)
    Zhongyang Wu
    @TommyCpp
    :smile:
    janxyz
    @janxyz:matrix.org
    [m]
    Hey everyone, I am struggling to recreate a span context after I propagate it via an internal protocol. The span shows up and works, but it's in a different trace from the parent. So far we use the opentracing libraries and are using opentelemetry for the first time; are there incompatibilities in the way trace IDs are handled or stored? We basically have Jaeger traces coming in with a high and a low part and are trying to convert that into an otel trace ID. In logs and unit tests the trace IDs look the same when we call to_hex on them.
    5 replies
    janxyz
    @janxyz:matrix.org
    [m]
    Thanks, the solution by @nbaztec worked. I think I will have a look at a custom propagator as well though. I think we are doing that work manually now.
    andrew quartey
    @drexler
    Hi folks. I'm building a little demo and having a bit of trouble with the creation of child spans. Briefly, the issue is how to derive the correct parent context to use when creating the child span. I ended up doing a dubious clone; it just didn't look right. The relevant files:
    // main.rs:
    #[derive(Default)]
    pub struct MyEmployeeService {}
    
    #[tonic::async_trait]
    impl EmployeeService for MyEmployeeService {
        async fn get_all_employees(
            &self,
            request: Request<()>,
        ) -> Result<Response<GetAllEmployeesResponse>, Status> {
            let parent_ctx =
                global::get_text_map_propagator(|prop| prop.extract(&MetadataMap(request.metadata())));
            let span = global::tracer("employee-service")
                .start_with_context("get_all_employees", parent_ctx.clone());     //<---- doesn't  look right
            span.set_attribute(KeyValue::new("request", format!("{:?}", request)));
    
            let connection = database::create_connection(parent_ctx);
            let employees: Vec<Employee> = database::get_employees(&connection)
                .into_iter()
                .map(model_mapper)
                .collect();
    
            let result = GetAllEmployeesResponse { employees };
    
            Ok(Response::new(result))
        }
    }
    
    //database.rs:
    pub fn create_connection(ctx: Context) -> PgConnection {
        let tracer = global::tracer("database-tracer");
        let _span = tracer
            .span_builder("create_connection")
            .with_parent_context(ctx)
            .start(&tracer);
    
        dotenv().ok();
        let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
        PgConnection::establish(&database_url)
            .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
    }
    4 replies
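    For reference, one way to make that read less dubiously (a sketch, not necessarily what the replies suggested): take the context by reference in database.rs and clone only where the span builder takes ownership. The clone itself is harmless, since Context clones share their entries rather than copying span data.

    // database.rs (sketch): borrow the context instead of taking ownership.
    pub fn create_connection(cx: &Context) -> PgConnection {
        let tracer = global::tracer("database-tracer");
        let _span = tracer
            .span_builder("create_connection")
            .with_parent_context(cx.clone()) // clone only where the builder needs ownership
            .start(&tracer);

        dotenv().ok();
        let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
        PgConnection::establish(&database_url)
            .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
    }

    // main.rs then passes a reference instead of moving the context:
    // let connection = database::create_connection(&parent_ctx);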
    Dawid Nowak
    @dawid-nowak
    hey all; trying to figure out the best way to map OTLP metrics to Prometheus. I have an application writing to OTLP agent -> OTLP Collector -> Prometheus and I have tried different aggregators, but there is always something missing for ValueRecorders. Is there a place that documents how OTLP metrics are mapped to the Prometheus protocol?
    Zhongyang Wu
    @TommyCpp
    There is a working group on Prometheus in OTEL; they should have done some work on that
    Also we have an opentelemetry-prometheus crate
    Dawid Nowak
    @dawid-nowak
    so here is a question: for min, max, sum, count the buckets are calculated as 'buckets = vec![min.to_u64(kind), max.to_u64(kind)];'
    so what happens if min is less than 0?
    9 replies
    Julian Tescher
    @jtescher
    great work everyone!
    Zhongyang Wu
    @TommyCpp
    :tada:
    Dirkjan Ochtman
    @djc
    :thumbsup:
    Dawid Nowak
    @dawid-nowak

    hey; cool stuff about 0.13.0, let's hope it is a lucky number :) anyway, small question about metrics again. If I get it right, for ValueRecorders the aggregator is set once when the metrics pipeline is created, for example:

      controller = Some(opentelemetry_otlp::new_metrics_pipeline(tokio::spawn, delayed_interval)
          .with_export_config(export_config)
          .with_period(std::time::Duration::from_secs(open_telemetry.metric_window.unwrap_or(30)))
          .with_aggregator_selector(selectors::simple::Selector::Histogram(vec![0.0, 0.1, 0.2, 0.3, 0.5, 0.8, 1.3, 2.1]))
          .build()?);

    My understanding is that this will apply to all ValueRecorders and all histograms will have the same buckets.
    Isn't that a bit of a limitation?
    From my application perspective, I would like to be able to set different buckets for different metrics.

    10 replies
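    For reference, per-instrument buckets are usually done with a custom aggregator selector instead of the simple ones. A rough sketch, assuming the 0.13-era AggregatorSelector trait and the sdk's aggregators::histogram helper (exact paths and signatures may differ between versions; the metric names are hypothetical):

    use std::sync::Arc;

    use opentelemetry::metrics::Descriptor;
    use opentelemetry::sdk::export::metrics::{Aggregator, AggregatorSelector};
    use opentelemetry::sdk::metrics::aggregators;

    #[derive(Debug)]
    struct PerMetricBuckets;

    impl AggregatorSelector for PerMetricBuckets {
        fn aggregator_for(&self, descriptor: &Descriptor) -> Option<Arc<dyn Aggregator + Send + Sync>> {
            // Choose histogram boundaries per instrument name.
            let boundaries: &[f64] = match descriptor.name() {
                "http.request.duration" => &[0.0, 0.1, 0.2, 0.3, 0.5, 0.8, 1.3, 2.1],
                "db.query.duration" => &[0.0, 0.01, 0.05, 0.1, 0.5, 1.0],
                _ => &[0.0, 0.5, 1.0, 5.0],
            };
            Some(Arc::new(aggregators::histogram(descriptor, boundaries)))
        }
    }

    // ...and pass it to the pipeline in place of selectors::simple::Selector::Histogram:
    //     .with_aggregator_selector(PerMetricBuckets)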
    janxyz
    @janxyz:matrix.org
    [m]
    Thanks! It's getting late in Europe, I'll check it first thing in the morning 🙂