    Michael Duane Mooring
    @mikeumus
    oh
    Julian Tescher
    @jtescher
    or if you have an existing prometheus registry set up from the above link you can specify it via https://docs.rs/opentelemetry-prometheus/0.5.0/opentelemetry_prometheus/struct.ExporterBuilder.html#method.with_registry
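    A minimal sketch of that builder call, assuming opentelemetry-prometheus 0.5 as in the docs link above; the helper function here is hypothetical:
    fn build_exporter(registry: prometheus::Registry) -> opentelemetry_prometheus::PrometheusExporter {
        // Reuse an existing prometheus::Registry instead of the exporter's default one.
        opentelemetry_prometheus::exporter()
            .with_registry(registry)
            .init()
    }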
    Michael Duane Mooring
    @mikeumus
    @TommyCpp is that different from the /metrics endpoint here?
    https://www.jaegertracing.io/docs/1.21/cli/#jaeger-all-in-one
    Zhongyang Wu
    @TommyCpp
    Yeah, I believe that endpoint gives you metrics about Jaeger itself, not your application
    Michael Duane Mooring
    @mikeumus
    oh okay
    my architecture is macOS client app -> jaeger/all-in-one, so I don't know if I want to set up a web server on the client app for Prometheus to try to hit
    Zhongyang Wu
    @TommyCpp
    To expose the metrics from your application, you should have a similar /metrics endpoint and ask your Prometheus to scrape it. opentelemetry-prometheus can help you set up the handler for the /metrics endpoint
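    A rough sketch of such a handler body, assuming opentelemetry-prometheus 0.5 and the prometheus crate's text encoder; the actual web-framework wiring is omitted:
    use opentelemetry_prometheus::PrometheusExporter;
    use prometheus::{Encoder, TextEncoder};

    // Build the response body for GET /metrics from the exporter's registry.
    fn metrics_body(exporter: &PrometheusExporter) -> Vec<u8> {
        let metric_families = exporter.registry().gather();
        let mut buf = Vec::new();
        TextEncoder::new()
            .encode(&metric_families, &mut buf)
            .expect("failed to encode metrics");
        buf
    }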
    Michael Duane Mooring
    @mikeumus
    k
    Zhongyang Wu
    @TommyCpp
    Hmmm, it does seem more like a push model here. One solution could be to use opentelemetry-otlp and convert the data into Prometheus format in the opentelemetry-collector
    Or you can try the Prometheus Pushgateway
    Michael Duane Mooring
    @mikeumus
    yeah, I'm going to be pushing the data off the client app one way or another, can't have Prometheus trying to get metrics directly from the client app
    okay let me look into these options, thank you @TommyCpp and @jtescher 🙇🏻‍♂️
    Zhongyang Wu
    @TommyCpp
    Yeah, np. I believe there are lots of discussions and solutions from the Prometheus community on how to handle your situation. Good luck
    Michael Duane Mooring
    @mikeumus
    @TommyCpp @jtescher the pushgateway worked! Thanks again guys :smiley:
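    For the record, the push side looks roughly like this, assuming the prometheus crate with its "push" feature enabled; the job name and gateway address are placeholders:
    use std::collections::HashMap;

    // Push whatever the opentelemetry-prometheus exporter has collected to a
    // Pushgateway, instead of waiting for Prometheus to scrape the client app.
    fn push(exporter: &opentelemetry_prometheus::PrometheusExporter) -> prometheus::Result<()> {
        prometheus::push_metrics(
            "macos_client_app",          // hypothetical job name
            HashMap::new(),              // no grouping labels
            "http://127.0.0.1:9091",     // placeholder Pushgateway address
            exporter.registry().gather(),
            None,                        // no basic auth
        )
    }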
    Zhongyang Wu
    @TommyCpp
    Yeah, I looked at minitrace-rust's performance testing before
    Julian Tescher
    @jtescher
    could be fun to see where the costs are
    I'd guess areas like context cloning, upgrading the tracer's provider reference a bunch of times, etc
    Zhongyang Wu
    @TommyCpp
    But their test is pretty simple IIRC. Just recursively create 100 spans
    Julian Tescher
    @jtescher
    yeah I think their use case is services that have one request root, and then create somewhere on the order of 100 children when servicing the request
    their bench is synchronous as I recall, most of the heavy lifting in the minitrace impl or the rustracing impl is rayon's sync channel impl
    always good to take those benchmarks with a grain of salt
    but still could be interesting to see how much overhead there is in creating a span with a hundredish children
    Zhongyang Wu
    @TommyCpp
    And they tested against tracing-opentelemetry. I modified the test to test against opentelemetry and was able to get some performance improvement
    But I guess that's expected, as it removes a layer of abstraction
    Julian Tescher
    @jtescher
    A quick check seems like perf with no exporters is ok
    use criterion::{criterion_group, criterion_main, Criterion};
    use opentelemetry::sdk::trace as sdktrace;
    use opentelemetry::trace::{Tracer, TracerProvider};

    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            // Tracer from a default provider: no processors, no exporters.
            let tracer = sdktrace::TracerProvider::default().get_tracer("always-sample", None);
            b.iter(|| {
                // One root span with 99 children created directly under it.
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }

                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }

    criterion_group!(benches, criterion_benchmark);
    criterion_main!(benches);
    many_children time: [35.839 us 36.414 us 37.007 us]
    not actually exporting anything with those, the batch processor's std::sync::channel send is probably another ~12us
    Julian Tescher
    @jtescher
    actually a little trickier to measure exporters with this style
    #[derive(Debug)]
    pub struct NoopExporter;
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for NoopExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            Ok(())
        }
    }
    
    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            let provider = sdktrace::TracerProvider::builder()
                .with_simple_exporter(NoopExporter)
                .build();
            let tracer = provider.get_tracer("always-sample", None);
            b.iter(|| {
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }
    
                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }
    time: [98.073 us 98.287 us 98.521 us]
    with async exporter
    #[derive(Debug)]
    pub struct NoopExporter;
    
    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for NoopExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            Ok(())
        }
    }
    
    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("many_children", |b| {
            let rt = tokio::runtime::Runtime::new().unwrap();
            let _g = rt.enter();
            let provider = sdktrace::TracerProvider::builder()
                .with_exporter(NoopExporter)
                .build();
            let tracer = provider.get_tracer("always-sample", None);
            b.to_async(&rt).iter(|| async {
                fn dummy(tracer: &sdktrace::Tracer, cx: &opentelemetry::Context) {
                    for _ in 0..99 {
                        tracer.start_with_context("child", cx.clone());
                    }
                }
    
                tracer.in_span("root", |root| dummy(&tracer, &root));
            });
        });
    }
    many_children           time:   [202.30 us 203.93 us 205.56 us]
                            change: [+102.42% +104.51% +106.59%] (p = 0.00 < 0.05)
    but would have to look at the throughput vs the latency here, especially once the exporter isn't a no op
    anyway, some potentially interesting improvements to make there
    Zhongyang Wu
    @TommyCpp
    :+1: I think the advantage of the batch processor is that it sends fewer requests to export spans. Usually exporting spans takes much more time, so we'd probably need to add some delay in NoopExporter to reflect the real situation.
    But it does look like we have some improvements to be made here
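    A sketch of that kind of delayed exporter, reusing the SpanExporter trait from the benchmarks above; the 1 ms figure is arbitrary:
    #[derive(Debug)]
    pub struct SlowNoopExporter;

    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for SlowNoopExporter {
        async fn export(
            &mut self,
            _batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            // Simulate per-request network and serialization cost so the batch
            // processor's fewer, larger exports actually show their benefit.
            tokio::time::sleep(std::time::Duration::from_millis(1)).await;
            Ok(())
        }
    }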
    Noel Campbell
    @nlcamp
    Does anyone know why tokio ^1.0 is listed as "optional" for opentelemetry-otlp v0.5.0 on crates.io? Is it because it's only needed for metrics but not for tracing?
    Julian Tescher
    @jtescher
    @nlcamp it supports async-std as well
    Noel Campbell
    @nlcamp

    @nlcamp it supports async-std as well

    :thumbsup:

    Michael Duane Mooring
    @mikeumus

    Hi :telescope:, for opentelemetry_prometheus with this init():

    opentelemetry_prometheus::exporter()
                .with_resource(Resource::new(vec![KeyValue::new("R", "V")]))
                .init()

    Where do the metrics go after calling something like meter.record_batch_with_context()?
    I don't see them showing up in the /metrics route.

    Julian Tescher
    @jtescher
    @mikeumus resources will appear on all metrics
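    A rough sketch of the recording side, assuming opentelemetry 0.12 with opentelemetry-prometheus 0.5; the instrument and label names are made up. The recorded values are aggregated into the exporter's registry, which is what the /metrics route should serve:
    use opentelemetry::{metrics::Meter, Context, KeyValue};

    // Record a measurement batch against the current context; the exporter's
    // controller aggregates it and exposes it via its prometheus::Registry.
    fn record(meter: &Meter) {
        let requests = meter.u64_counter("requests_total").init();
        meter.record_batch_with_context(
            &Context::current(),
            &[KeyValue::new("route", "/")],
            vec![requests.measurement(1)],
        );
    }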
    oh_lawd
    @oh_lawd:nauk.io
    [m]

    Hey. I'm sorry if I'm asking obvious things, but I'm trying to understand how I could share the same trace between two different services. I wrote an example (https://github.com/pimeys/opentelemetry-test/blob/main/src/main.rs) that has two services: a client and a server. The server is running and the client requests the server with the trace and span ids. I'd expect to see one trace in Jaeger with the client and server spans, but I instead get two: one for the client handle and another with <trace-without-root-span>.

    I've been going through the documentation now for a while, but I haven't found a way to do what I want with opentracing.

    oh_lawd
    @oh_lawd:nauk.io
    [m]
    thanks for rubber-ducking, y'all :) I understood propagators and my example works now
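    For anyone landing here with the same question, a sketch of the propagator flow, assuming opentelemetry 0.12; the HashMap stands in for HTTP headers:
    use std::collections::HashMap;
    use opentelemetry::{global, sdk::propagation::TraceContextPropagator, Context};

    fn main() {
        // Both services must agree on a propagator (W3C trace context here).
        global::set_text_map_propagator(TraceContextPropagator::new());

        // Client side: inject the current context into the outgoing "headers".
        let mut headers = HashMap::new();
        global::get_text_map_propagator(|prop| {
            prop.inject_context(&Context::current(), &mut headers)
        });

        // Server side: extract the remote context and use it as the parent,
        // e.g. tracer.start_with_context("server-span", parent_cx.clone()).
        let _parent_cx = global::get_text_map_propagator(|prop| prop.extract(&headers));
    }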
    Folyd
    @Folyd
    How can we write opentelemetry data in JSON format to a local file instead of exporting it to the collector?
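    A sketch of one possible approach, reusing the SpanExporter trait from the NoopExporter above; writing real JSON would need serde support for SpanData (an assumption to verify for your version), so this falls back to Debug formatting:
    use std::io::Write;

    #[derive(Debug)]
    pub struct FileExporter {
        file: std::fs::File,
    }

    #[async_trait::async_trait]
    impl opentelemetry::sdk::export::trace::SpanExporter for FileExporter {
        async fn export(
            &mut self,
            batch: Vec<opentelemetry::sdk::export::trace::SpanData>,
        ) -> opentelemetry::sdk::export::trace::ExportResult {
            for span in batch {
                // One record per line; swap in a serde_json serialization here
                // if SpanData is serializable in your setup.
                writeln!(self.file, "{:?}", span).ok();
            }
            Ok(())
        }
    }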
    Andrey Snow
    @andoriyu

    @jtescher so here is my setup: Datadog agent version = 7.24.0, opentelemetry = "0.12", opentelemetry-contrib = { version = "0.4.0", features = ["datadog", "reqwest-client"] }

    I have a warp server running with every request traced via warp's middleware and some async fns instrumented with #[instrument] in the handlers. Without the datadog exporter everything works fine. With the datadog exporter, the first span that isn't outside of warp gets sent to the agent fine. However, spans that are within the request's handler don't get sent. Looking at the logs I can tell that at least one span closes and the request never progresses beyond that point.

    https://gist.github.com/andoriyu/b937d6608591311293e8e877e10e8e0c here are the logs. Hard to tell which reqwest calls are from my handler and which are from the exporter.