    Zhongyang Wu
    @TommyCpp
    Yeah, I looked at minitrace-rust's performance testing before
    Julian Tescher
    @jtescher
    could be fun to see where the costs are
    I'd guess areas like context cloning, upgrading the tracer's provider reference a bunch of times, etc
    Zhongyang Wu
    @TommyCpp
    But their test is pretty simple IIRC. Just recursively create 100 spans
    Julian Tescher
    @jtescher
    yeah I think their use case is services that have one request root, and then create somewhere on the order of 100 children when servicing the request
    their bench is synchronous as I recall, most of the heavy lifting in the minitrace impl or the rustracing impl is rayon's sync channel impl
    always good to take those benchmarks with a grain of salt
    but still could be interesting to see how much overhead there is in creating a span with a hundredish children
    Zhongyang Wu
    @TommyCpp
    And they tested against tracing-opentelemetry. I modified the test to run against opentelemetry directly and was able to get some performance improvement
    But I guess that's expected, since it removes a layer of abstraction
    Julian Tescher
    @jtescher
    A quick check suggests perf with no exporters is OK
    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            let tracer = sdktrace::TracerProvider::default().get_tracer("always-sample", None);
            b.iter(|| {
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }
    
                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }
    many_children time: [35.839 us 36.414 us 37.007 us]
    not actually exporting anything with those, the batch processor's std::sync::channel send is probably another ~12us
    Julian Tescher
    @jtescher
    actually a little trickier to measure exporters with this style
    #[derive(Debug)]
    pub struct NoopExporter;
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for NoopExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            Ok(())
        }
    }
    
    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            let provider = sdktrace::TracerProvider::builder()
                .with_simple_exporter(NoopExporter)
                .build();
            let tracer = provider.get_tracer("always-sample", None);
            b.iter(|| {
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }
    
                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }
    time: [98.073 us 98.287 us 98.521 us]
    with async exporter
    #[derive(Debug)]
    pub struct NoopExporter;
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for NoopExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            Ok(())
        }
    }
    
    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            let rt = tokio::runtime::Runtime::new().unwrap();
            let _g = rt.enter();
            let provider = sdktrace::TracerProvider::builder()
                .with_exporter(NoopExporter)
                .build();
            let tracer = provider.get_tracer("always-sample", None);
            b.to_async(&rt).iter(|| async {
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }
    
                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }
    many_children           time:   [202.30 us 203.93 us 205.56 us]
                            change: [+102.42% +104.51% +106.59%] (p = 0.00 < 0.05)
    but would have to look at the throughput vs the latency here, especially once the exporter isn't a no-op
    anyway, some potentially interesting improvements to make there
    Zhongyang Wu
    @TommyCpp
    :+1: I think the advantage of the batch processor is that it sends fewer requests to export spans. Usually exporting spans takes much more time, so we'd probably need to add some delay in NoopExporter to reflect the real situation.
    But it does look like we have some improvements to be made here
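    (A minimal sketch of that idea, reusing the SpanExporter impl from the snippets above: an exporter that just sleeps to simulate export latency. The name DelayExporter and the 1 ms delay are arbitrary placeholders, not measured values, and the sleep assumes the async/tokio setup from the last benchmark.)
    #[derive(Debug)]
    pub struct DelayExporter;
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for DelayExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            // Simulate the time a real exporter would spend sending the batch.
            tokio::time::sleep(std::time::Duration::from_millis(1)).await;
            Ok(())
        }
    }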
    Noel Campbell
    @nlcamp
    Does anyone know why tokio ^1.0 is listed as "optional" for opentelemetry-otlp v0.5.0 on crates.io? Is it because it's only needed for metrics but not for tracing?
    7 replies
    Julian Tescher
    @jtescher
    @nlcamp it supports async-std as well
    Noel Campbell
    @nlcamp

    @nlcamp it supports async-std as well

    :thumbsup:

    Michael Duane Mooring
    @mikeumus

    Hi :telescope:, for opentelemetry_prometheus with this init():

    opentelemetry_prometheus::exporter()
        .with_resource(Resource::new(vec![KeyValue::new("R", "V")]))
        .init()

    Where do the metrics go after calling something like meter.record_batch_with_context()?
    I don't see them showing up in the /metrics route.

    Julian Tescher
    @jtescher
    @mikeumus resources will appear on all metrics
    20 replies
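    (For context on the /metrics question above: the Prometheus exporter keeps the recorded values in a prometheus registry, and the /metrics route has to gather and encode that registry on each scrape. A rough sketch, assuming the exporter type is opentelemetry_prometheus::PrometheusExporter and that it exposes registry() as in this version; metrics_body is just an illustrative helper name.)
    use prometheus::{Encoder, TextEncoder};
    
    fn metrics_body(exporter: &opentelemetry_prometheus::PrometheusExporter) -> String {
        // Gather whatever the exporter has accumulated (including values recorded
        // via meter.record_batch_with_context) and render it in the text format
        // Prometheus scrapes from the /metrics route.
        let metric_families = exporter.registry().gather();
        let mut buffer = Vec::new();
        TextEncoder::new()
            .encode(&metric_families, &mut buffer)
            .expect("encode metrics");
        String::from_utf8(buffer).expect("metrics are valid utf8")
    }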
    oh_lawd
    @oh_lawd:nauk.io
    [m]

    Hey. I'm sorry if I'm asking obvious things, but I'm trying to understand how I can share the same trace between two different services. I wrote an example (https://github.com/pimeys/opentelemetry-test/blob/main/src/main.rs) that has two services: a client and a server. The server is running, and the client calls the server, passing along the trace and span IDs. I'd expect to see one trace in Jaeger containing both the client and server spans, but I instead get two: one for the client handle and another with <trace-without-root-span>.

    I've been going through the documentation for a while now, but I haven't found a way to do what I want with opentracing.

    oh_lawd
    @oh_lawd:nauk.io
    [m]
    thanks for rubber-ducking, y'all :) I understood propagators and my example works now
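    (For anyone hitting the same thing, a rough sketch of the propagator flow that makes this work, with a HashMap standing in for the HTTP headers; the helper name propagate is made up, and the method names are the TextMapPropagator API as of this SDK version.)
    use std::collections::HashMap;
    
    use opentelemetry::propagation::TextMapPropagator;
    use opentelemetry::sdk::propagation::TraceContextPropagator;
    use opentelemetry::trace::Tracer;
    use opentelemetry::Context;
    
    fn propagate(tracer: &opentelemetry::sdk::trace::Tracer) {
        let propagator = TraceContextPropagator::new();
    
        // Client side: serialize the current span context into a carrier
        // (in a real service this would be the outgoing HTTP headers).
        let mut carrier: HashMap<String, String> = HashMap::new();
        propagator.inject_context(&Context::current(), &mut carrier);
    
        // Server side: rebuild the remote context from the carrier and start the
        // server span under it, so both spans land in the same trace in Jaeger.
        let parent_cx = propagator.extract(&carrier);
        let _span = tracer.start_with_context("server-handler", parent_cx);
    }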
    Folyd
    @Folyd
    How can we write opentelemetry data in JSON format to a local file instead of exporting it to the collector?
    1 reply
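    (One way to do that, sketched from the SpanExporter impls earlier in this chat: a custom exporter that appends one line per finished span to a local file, plugged in with .with_simple_exporter(...) just like the NoopExporter above. FileExporter is an illustrative name; this writes the Debug representation as a placeholder, since producing proper JSON means mapping SpanData's fields into a serde-serializable struct.)
    use std::fs::OpenOptions;
    use std::io::Write;
    
    #[derive(Debug)]
    pub struct FileExporter {
        path: String,
    }
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for FileExporter {
        async fn export(
            &mut self,
            batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            // Append one line per finished span to the target file.
            let mut file = OpenOptions::new()
                .create(true)
                .append(true)
                .open(&self.path)
                .expect("open span output file");
            for span in batch {
                // Debug output as a stand-in; map SpanData's fields to a
                // serializable struct for real JSON output.
                writeln!(file, "{:?}", span).expect("write span");
            }
            Ok(())
        }
    }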
    Andrey Snow
    @andoriyu

    @jtescher so here is my setup: Datadog agent version = 7.24.0, opentelemetry = "0.12", opentelemetry-contrib = { version = "0.4.0", features = ["datadog", "reqwest-client"] }

    I have a warp server running with every request traced via warp's middleware and some async fns marked #[instrument] in the handlers. Without the datadog exporter everything works fine. With the datadog exporter, the first span (the one that comes from warp itself rather than my handler) gets sent to the agent fine. However, spans created inside the request's handler don't get sent. Looking at the logs I can tell that at least one span closes, and the request never progresses beyond that point.

    https://gist.github.com/andoriyu/b937d6608591311293e8e877e10e8e0c here are the logs. Hard to tell which reqwest calls are from my handler and which are from the exporter.
    I just tried the blocking reqwest feature: it works.
    Andrey Snow
    @andoriyu
    so it seems like somewhere in the non-blocking version something isn't getting polled...
    Zhongyang Wu
    @TommyCpp
    Could you share an example?
    Andrey Snow
    @andoriyu
    @TommyCpp I don't really have a minimal project to test it with right now. I will try to make one sometime this week.
    1 reply
    Kennet Postigo
    @kennetpostigo
    Hi, I'm totally new to OpenTelemetry and metric/tracing collection. After collecting a trace, how do I export it? Is it supposed to be sent and stored in a DB for later access? I have an analytics dashboard UI that I've been working on that collects website analytics and client-side errors. I want to add the ability for the dashboard UI to show metrics and tracing data from OpenTelemetry. Can anyone point me in the right direction for how to export trace/metric data?
    9 replies
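    (Rough shape of the usual answer, hedged since the right setup depends on your stack: the SDK hands finished spans to an exporter, and the exporter ships them to whatever stores them, e.g. an OTLP collector that forwards to Jaeger or to the database behind your dashboard. A minimal sketch based on the OTLP pipeline that appears later in this chat; the endpoint and span name are placeholders.)
    use opentelemetry::trace::Tracer;
    
    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
        // Spans recorded through `tracer` go to the OTLP exporter, which sends
        // them to a collector; the collector forwards them to your backend.
        let (tracer, _uninstall) = opentelemetry_otlp::new_pipeline()
            .with_endpoint("http://localhost:4317")
            .install()?;
    
        tracer.in_span("dashboard-request", |_cx| {
            // traced application logic here
        });
    
        Ok(())
    }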
    Dawid Nowak
    @dawid-nowak

    hey all; I am trying to run a very simple example like the one below and I'm getting a stack trace with tonic panicking: thread 'tokio-runtime-worker' panicked at 'expected scheme', /home/dawid/.cargo/registry/src/github.com-1ecc6299db9ec823/tonic-0.4.0/src/transport/service/add_origin.rs:38:42.

    Any ideas where I am going wrong?

    use opentelemetry::trace;
    use opentelemetry::trace::Tracer;
    use opentelemetry_otlp::{Protocol};
    
    #[tokio::main(worker_threads = 8)]
    async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
        let (tracer, _uninstall) = opentelemetry_otlp::new_pipeline()
            .with_endpoint("localhost:4317")
            .with_protocol(Protocol::Grpc)
            .install()?;
    
        loop {
            tracer.in_span("doing_work", |cx| {
                println!("Doing work");
                std::thread::sleep(std::time::Duration::from_secs(1)) // Traced app logic here...
            });
        }
    
        Ok(())
    }
    Zhongyang Wu
    @TommyCpp
    Could you try adding http:// before your endpoint?
    Dawid Nowak
    @dawid-nowak
    cool, that worked, indeed adding grpc:// also worked
    and even blah://localhost:4317 :)
    Zhongyang Wu
    @TommyCpp
    Yeah, basically you need to give tonic a protocol
    Dawid Nowak
    @dawid-nowak
    right, great, that solved it, thanks. I think the opentelemetry-otlp readme and most examples use "localhost:4317" without the protocol
    Dawid Nowak
    @dawid-nowak
    one more question: does otlp support service name? It seems '.with_service_name("servicename")' is exposed for the Jaeger pipeline config but not for otlp
    Zhongyang Wu
    @TommyCpp
    There isn't a concept called service name in otlp. The convention is to use the service.name resource to store the service name. If the backend recognizes it, it can use it as the service name.
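    (In code that would look roughly like the following; with_trace_config and the sdktrace::config().with_resource(...) builder are my reading of this SDK version's API, so worth double-checking against the docs, and the endpoint and service name are placeholders.)
    use opentelemetry::sdk::{trace as sdktrace, Resource};
    use opentelemetry::KeyValue;
    
    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
        let (_tracer, _uninstall) = opentelemetry_otlp::new_pipeline()
            .with_endpoint("http://localhost:4317")
            // Backends that understand the convention read the service name
            // from the service.name resource attribute.
            .with_trace_config(sdktrace::config().with_resource(Resource::new(vec![
                KeyValue::new("service.name", "my-service"),
            ])))
            .install()?;
        Ok(())
    }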
    Dawid Nowak
    @dawid-nowak
    now we are flying :)