Hey. I'm sorry if I'm asking obvious things, but I'm trying to understand how I can share the same trace between two different services. I wrote an example (https://github.com/pimeys/opentelemetry-test/blob/main/src/main.rs) that has two services: a client and a server. The server is running and the client calls the server, passing the trace and span ids. I'd expect to see one trace in Jaeger containing both the client and server spans, but I instead get two: one for the client handler and another showing <trace-without-root-span>
I've been going through the documentation now for a while, but I haven't found a way to do what I want with opentracing.
@jtescher so here is my setup: Datadog agent version = 7.24.0
opentelemetry = "0.12"
opentelemetry-contrib = { version = "0.4.0", features = ["datadog", "reqwest-client"] }
I have a warp server running, with every request traced via warp's middleware and some async fns marked
#[instrument]
in the handlers. Without the Datadog exporter everything works fine. With the Datadog exporter, the first span (the one from warp's middleware) gets sent to the agent fine. However, spans created within the request's handler don't get sent. Looking at the logs I can tell that at least one span closes and the request never progresses beyond that point.
hey all; I am trying to run a very simple example like the one below and getting a stack trace with tonic panicking: thread 'tokio-runtime-worker' panicked at 'expected scheme', /home/dawid/.cargo/registry/src/github.com-1ecc6299db9ec823/tonic-0.4.0/src/transport/service/add_origin.rs:38:42
Any ideas where I'm going wrong?
use opentelemetry::trace::Tracer;
use opentelemetry_otlp::Protocol;

#[tokio::main(worker_threads = 8)]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    let (tracer, _uninstall) = opentelemetry_otlp::new_pipeline()
        .with_endpoint("localhost:4317")
        .with_protocol(Protocol::Grpc)
        .install()?;
    loop {
        tracer.in_span("doing_work", |_cx| {
            println!("Doing work");
            // Traced app logic here...
            std::thread::sleep(std::time::Duration::from_secs(1));
        });
    }
}
to_hex on it.
// main.rs:
#[derive(Default)]
pub struct MyEmployeeService {}

#[tonic::async_trait]
impl EmployeeService for MyEmployeeService {
    async fn get_all_employees(
        &self,
        request: Request<()>,
    ) -> Result<Response<GetAllEmployeesResponse>, Status> {
        let parent_ctx =
            global::get_text_map_propagator(|prop| prop.extract(&MetadataMap(request.metadata())));
        let span = global::tracer("employee-service")
            .start_with_context("get_all_employees", parent_ctx.clone()); // <-- doesn't look right
        span.set_attribute(KeyValue::new("request", format!("{:?}", request)));
        let connection = database::create_connection(parent_ctx);
        let employees: Vec<Employee> = database::get_employees(&connection)
            .into_iter()
            .map(model_mapper)
            .collect();
        let result = GetAllEmployeesResponse { employees };
        Ok(Response::new(result))
    }
}
// database.rs:
pub fn create_connection(ctx: Context) -> PgConnection {
    let tracer = global::tracer("database-tracer");
    let _span = tracer
        .span_builder("create_connection")
        .with_parent_context(ctx)
        .start(&tracer);
    dotenv().ok();
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    PgConnection::establish(&database_url)
        .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
}
hey; cool stuff about 0.13.0, let's hope it is a lucky number :) anyway, a small question about metrics again. If I get it right, for ValueRecorders the aggregator is set once when the metrics pipeline is created, for example:
controller = Some(
    opentelemetry_otlp::new_metrics_pipeline(tokio::spawn, delayed_interval)
        .with_export_config(export_config)
        .with_period(std::time::Duration::from_secs(
            open_telemetry.metric_window.unwrap_or(30),
        ))
        .with_aggregator_selector(selectors::simple::Selector::Histogram(vec![
            0.0, 0.1, 0.2, 0.3, 0.5, 0.8, 1.3, 2.1,
        ]))
        .build()?,
);
My understanding is that this will apply to all ValueRecorders, so all histograms will have the same buckets. Isn't that a bit of a limitation? From my application's perspective, I would like to be able to set different buckets for different metrics.
I'm trying to use opentelemetry in an existing actix-web application that already uses tracing to export to Jaeger. I try to configure everything like this:
use opentelemetry::{global, runtime::TokioCurrentThread};
use tracing::{subscriber::set_global_default, Subscriber};
use tracing_bunyan_formatter::{BunyanFormattingLayer, JsonStorageLayer};
use tracing_log::LogTracer;
use tracing_subscriber::{layer::SubscriberExt, EnvFilter, Registry};
pub fn get_subscriber(name: &str, env_filter: &str) -> impl Subscriber + Send + Sync {
    let env_filter =
        EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new(env_filter));
    let formatting_layer = BunyanFormattingLayer::new(name.to_string(), std::io::stdout);
    global::set_text_map_propagator(opentelemetry_jaeger::Propagator::new());
    let tracer = opentelemetry_jaeger::new_pipeline()
        .with_service_name(name)
        .install_batch(TokioCurrentThread)
        .expect("cannot install jaeger pipeline");
    let telemetry = tracing_opentelemetry::layer().with_tracer(tracer);
    Registry::default()
        .with(telemetry)
        .with(env_filter)
        .with(JsonStorageLayer)
        .with(formatting_layer)
}

pub fn init_subscriber(subscriber: impl Subscriber + Send + Sync) {
    LogTracer::init().expect("Failed to set logger");
    set_global_default(subscriber).expect("Failed to set tracing subscriber");
}
and will later call init_subscriber(get_subscriber("app_name", "info")), but this fails because opentelemetry::sdk::trace::Tracer does not implement the following traits: opentelemetry::trace::tracer::Tracer, PreSampledTracer. I don't know what I'm missing here...
there is no reactor running, must be called from the context of a Tokio 1.x runtime
because opentelemetry 0.12 pulls in tokio 1.0 while actix-web stable is still on tokio 0.*.
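Both errors smell like a dependency-version mismatch: the trait-bound failure typically means two different `opentelemetry` versions are in the tree (the one `tracing-opentelemetry` was built against vs. the one you import directly), and the missing-reactor panic means the batch exporter's tokio doesn't match the runtime actix-web starts. `cargo tree -d` will show any duplicated crates. A hypothetical pairing, to show the shape of the fix only; take the exact versions from the Cargo.toml of the `tracing-opentelemetry` release you actually use:

```toml
# Hypothetical version set -- all three must agree on one opentelemetry
# version, or the Tracer/PreSampledTracer bounds fail to line up.
opentelemetry = "0.12"
opentelemetry-jaeger = "0.11"
tracing-opentelemetry = "0.11"
```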