./target/debug/deps/sha2-0dbd11ac79fa6266.gcno:version '408*', prefer 'A93*'
find: ‘gcov’ terminated by signal 11
Hey all, I am running this example here:
fn init_meter() -> metrics::Result<PushController> {
    let export_config = ExporterConfig {
        endpoint: "http://localhost:4317".to_string(),
        protocol: Protocol::Grpc,
        ..ExporterConfig::default()
    };
    opentelemetry_otlp::new_metrics_pipeline(tokio::spawn, delayed_interval)
        .with_export_config(export_config)
        .with_aggregator_selector(selectors::simple::Selector::Exact)
        .build()
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
    let _started = init_meter()?;
    let meter = global::meter("test2");
    let value_counter = meter.u64_counter("blah9").init();
    let labels = vec![KeyValue::new("key1", "val1")];
    for i in 0..100 {
        let j = i % 4;
        println!("{} {}", i, j);
        let mut labels = vec![]; // labels.clone();
        let kv = match j {
            0 => KeyValue::new("method", "GET"),
            1 => KeyValue::new("method", "POST"),
            2 => KeyValue::new("method", "PUT"),
            3 => KeyValue::new("method", "DELETE"),
            _ => KeyValue::new("method", "HEAD"),
        };
        labels.push(kv);
        // labels.push(KeyValue::new("key4", j.to_string()));
        value_counter.add(1, &labels);
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
    // Wait for 1 minute so we can see metrics being pushed via OTLP every 10 seconds.
    tokio::time::sleep(Duration::from_secs(60)).await;
    shutdown_tracer_provider();
    Ok(())
}
At the end, in the Prometheus dashboard I am getting:
agent_blah9{collector="pf", instance="otel-agent:8889", job="otel-collector", method="PUT", type="docker"} 25
where I would expect:
agent_blah9{collector="pf", instance="otel-agent:8889", job="otel-collector", method="PUT", type="docker"} 25
agent_blah9{collector="pf", instance="otel-agent:8889", job="otel-collector", method="POST", type="docker"} 25
agent_blah9{collector="pf", instance="otel-agent:8889", job="otel-collector", method="GET", type="docker"} 25
agent_blah9{collector="pf", instance="otel-agent:8889", job="otel-collector", method="DELETE", type="docker"} 25
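For reference, the loop in the example increments each method label exactly 25 times, which is why four series of 25 are expected; a quick std-only check of that arithmetic:

```rust
fn main() {
    // 0..100 cycles through j = i % 4, so each of the four method labels
    // (GET, POST, PUT, DELETE) receives exactly 100 / 4 = 25 increments.
    let mut counts = [0u32; 4];
    for i in 0..100usize {
        counts[i % 4] += 1;
    }
    assert_eq!(counts, [25, 25, 25, 25]);
    println!("{:?}", counts);
}
```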
Any ideas? I had a look at the opentelemetry-agent logs and there seems to be only one type of metric there:
Data point labels:
-> method: PUT
error: failed to run custom build command for `opentelemetry-otlp v0.5.0`

Caused by:
  process didn't exit successfully: `/.../target/release/build/opentelemetry-otlp-13bb7928af03e4cb/build-script-build` (exit code: 101)
  --- stderr
  thread 'main' panicked at 'Error generating protobuf: Os { code: 2, kind: NotFound, message: "No such file or directory" }', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/opentelemetry-otlp-0.5.0/build.rs:28:10
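That NotFound panic is the build script failing to spawn an executable it needs for protobuf code generation (most likely the protobuf compiler is not on PATH). As a point of comparison, spawning any missing binary from Rust produces the same Os { code: 2, kind: NotFound } error; the binary name below is deliberately bogus:

```rust
use std::io::ErrorKind;
use std::process::Command;

fn main() {
    // Trying to run a binary that does not exist on PATH fails with the same
    // Os { code: 2, kind: NotFound } error seen in the build-script panic above.
    let err = Command::new("some-binary-that-does-not-exist")
        .arg("--version")
        .spawn()
        .unwrap_err();
    assert_eq!(err.kind(), ErrorKind::NotFound);
    println!("{:?}", err.kind());
}
```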
withContext) and the propagator set. My other Go service thus cannot join spans correctly. Has anyone experienced this and/or have an idea about it? Thanks!
diff --git a/examples/basic-otlp/src/main.rs b/examples/basic-otlp/src/main.rs
index 51ac886..6a981ac 100644
--- a/examples/basic-otlp/src/main.rs
+++ b/examples/basic-otlp/src/main.rs
@@ -51,6 +51,7 @@ lazy_static::lazy_static! {
     ];
 }
+
 #[tokio::main]
 async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
     let _ = init_tracer()?;
@@ -59,7 +60,8 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
     let tracer = global::tracer("ex.com/basic");
     let meter = global::meter("ex.com/basic");
-    let one_metric_callback = |res: ObserverResult<f64>| res.observe(1.0, COMMON_LABELS.as_ref());
+    let mut f64_metric_val: f64 = 1.0;
+    let one_metric_callback = |res: ObserverResult<f64>| res.observe(f64_metric_val, COMMON_LABELS.as_ref());
     let _ = meter
         .f64_value_observer("ex.com.one", one_metric_callback)
         .with_description("A ValueObserver set to 1.0")
@@ -101,8 +103,15 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
         });
     });
-    // wait for 1 minutes so that we could see metrics being pushed via OTLP every 10 seconds.
-    tokio::time::sleep(Duration::from_secs(60)).await;
+    let mut count = 0u32;
+    loop {
+        tokio::time::sleep(Duration::from_secs(10)).await;
+        count += 1;
+        f64_metric_val += 1.0;
+        if count == 6 {
+            break;
+        }
+    }
     shutdown_tracer_provider();
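One caveat with the patch above: once the observer callback has captured f64_metric_val, mutating that same variable from the loop either fails to borrow-check or only ever updates a stale copy, so the observed value never changes. Shared state between the callback and the loop is usually needed instead; a std-only sketch of that pattern (names are illustrative, this is not the OTel API):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

fn main() {
    // Store the f64 as raw bits in an AtomicU64 so both the observer callback
    // and the main loop can share it without a Mutex.
    let value = Arc::new(AtomicU64::new(1.0f64.to_bits()));

    // Stand-in for the ObserverResult callback: it reads the current value.
    let cb_value = Arc::clone(&value);
    let callback = move || f64::from_bits(cb_value.load(Ordering::Relaxed));
    assert_eq!(callback(), 1.0);

    // Stand-in for the `f64_metric_val += 1.0` step in the loop.
    let next = f64::from_bits(value.load(Ordering::Relaxed)) + 1.0;
    value.store(next.to_bits(), Ordering::Relaxed);

    // The callback now observes the updated value.
    assert_eq!(callback(), 2.0);
    println!("callback now observes {}", callback());
}
```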
Hi, I am trying out the tracing API with a local Jaeger instance. Only a short-lived #[instrument] span appears in the Jaeger UI, and the UI shows no unfinished spans or events. Even this simple example does not work; it only shows the 'doing_work' span and no events at all. What is wrong?
fn main() {
    opentelemetry::global::set_text_map_propagator(opentelemetry_jaeger::Propagator::new());
    let tracer = opentelemetry_jaeger::new_pipeline()
        .install_simple()
        .unwrap();
    use opentelemetry::trace::Tracer;
    tracing::error!("first error event");
    tracer.in_span("doing_work", |cx| {
        tracing::error!("nested error event");
        tracing::info!("nested info event");
        tracing::warn!("nested warn event");
        tracing::debug!("nested debug event");
        tracing::trace!("nested trace event");
    });
    opentelemetry::global::shutdown_tracer_provider();
}
Change the in_span calls to https://docs.rs/tracing/0.1.25/tracing/span/struct.Span.html#method.in_scope and you should see both your spans and the logs (the "first error event" log won't appear there).
Thanks, I didn't realize the tokio-tracing and OTel APIs could conflict.
Now I am left with an incomplete-span problem. For example, in the code below, Jaeger shows async_fn2 (the child) with its parent span id, but the parent span itself is missing.
#[instrument]
async fn async_fn2() {
    tracing::info!("enter2");
    tokio::time::sleep(std::time::Duration::from_secs(5)).await;
    tracing::info!("exit2");
}

#[instrument]
async fn async_fn() {
    tracing::info!("enter");
    async_fn2().await;
    // panic!(); // <- this flushes all spans
    tokio::time::sleep(std::time::Duration::from_secs(1000)).await;
    tracing::info!("exit");
}

#[tokio::main]
async fn main() -> Result<()> {
    let tracer = opentelemetry_jaeger::new_pipeline()
        .with_service_name("timer")
        .install_simple()?;
    tracing_subscriber::registry()
        .with(tracing_opentelemetry::subscriber().with_tracer(tracer))
        .try_init()?;
    async_fn().await;
    Ok(())
}
Is there a way to flush a long-running span periodically? Is this a limitation of Jaeger, the OpenTelemetry interface, or the Rust implementation?
It would be great to have a 'growing' span: the span's name and start time would be immediately visible in Jaeger (a batching delay is OK), and its end time and additional logs/context would be updated over time.
with_current_context() to simply attach the currently active context: https://docs.rs/opentelemetry/0.13.0/opentelemetry/trace/trait.FutureExt.html#method.with_current_context
Is there something like with_span(span_name, fn) that takes a closure or something similar and traces it?
TcpStream::connect(proxy_addr).with_span("connect").await?;
inside an async function. The result would be a "connect" span that is entered when that call starts and exited when it finishes; if the Result from the call is Err, the Err is recorded in the span, and if it is Ok, 200 OK is recorded as the status.
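The shape being asked for can be sketched generically. This is a hypothetical with_span helper for a synchronous closure (no such function exists in the crate at 0.13, and FakeSpan is a stand-in type, not the opentelemetry span), just to pin down the requested semantics:

```rust
use std::fmt::Debug;

// Hypothetical stand-in span type; the real one would come from opentelemetry.
#[derive(Debug)]
struct FakeSpan {
    name: &'static str,
    status: String,
}

// Hypothetical `with_span`: run a closure, record Err as an error status and
// Ok as "200 OK", then "end" the span when it goes out of scope.
fn with_span<T, E: Debug>(
    name: &'static str,
    f: impl FnOnce() -> Result<T, E>,
) -> Result<T, E> {
    let result = f();
    let span = FakeSpan {
        name,
        status: match &result {
            Ok(_) => "200 OK".to_string(),
            Err(e) => format!("error: {:?}", e),
        },
    };
    println!("span {} ended with status {}", span.name, span.status);
    result
}

fn main() {
    let ok: Result<&str, &str> = with_span("connect", || Ok("connected"));
    assert!(ok.is_ok());
    let err: Result<(), &str> = with_span("connect", || Err("refused"));
    assert!(err.is_err());
}
```

An async version would wrap the future the same way, recording the status after `.await` completes.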
That can all be written in a single line with
global::tracer("my-component").start("span-name");
And there are also other quality-of-life functions like in_span, which executes a closure the same way you asked for before:
https://docs.rs/opentelemetry/0.13.0/opentelemetry/trace/trait.Tracer.html#method.in_span
global::tracer("my-component").in_span("span-name", |_cx| {
    // anything happening in functions we call can still access the active span...
    my_other_function();
})
I don't think it is fair to the maintainers to complain about the maturity of the library in a 0.13.0 pre-release version. If you are lacking functions, I am sure they would appreciate a contribution or an issue to see if others are also interested in such a function.
tracing-opentelemetry could be extended to have a nicer custom interface. Feel free to open a feature request and we can have some discussion there.