Michka Popoff
I think what I might need to do is hijack the AzureLogHandler to write to "MetricData" instead of "MessageData", by providing a new envelope for metrics within the AzureLogHandler, because the AzureLogHandler does exactly what I need, with its FIFO queue.
Michka Popoff
Would this make sense? Would you accept a pull request for this? Do you think it is doable (still reading the code so I need to understand how difficult that change would be)?
Michka Popoff
On the other hand, I have the feeling that all changes should go into the opentelemetry repo, as that is the new library being worked on right now
Michka Popoff
I just made a quick and dirty hack: census-instrumentation/opencensus-python#820. I opened a pull request because it is easier to discuss around code. Let me know how this could fit somewhere, and how it fits within the OpenTelemetry project?
Leighton Chen
@iMichka Thanks for taking the time to look into this. Yes, the points you raised above are correct: there is no way to send a metric value only once. In the OpenTelemetry project, we have the concept of a Gauge metric, in which you set the value using a callback. However, the exporting mechanism will probably not change (exporting every X intervals). Your use case does not seem to be one that is very "metric-like": you are not doing pre-aggregation AND you are not sending the data points over a fixed interval. May I suggest simply using "MessageData" instead of "MetricData"? Is there some reason you need to use the MetricData type?
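The gauge-with-callback idea mentioned above can be sketched without any SDK: the exporter pulls the current value through a callback at collection time instead of the application pushing values. A minimal stdlib-only Python sketch (the `Gauge` class and names here are illustrative, not the OpenTelemetry API):

```python
class Gauge:
    """Toy gauge: the exporter pulls the current value through a
    callback at each collection interval, instead of the app pushing."""

    def __init__(self, name, callback):
        self.name = name
        self.callback = callback

    def collect(self):
        # An exporter would call this every X seconds.
        return (self.name, self.callback())


queue_depth = [0]
g = Gauge("queue_depth", lambda: queue_depth[0])
queue_depth[0] = 7
print(g.collect())  # → ('queue_depth', 7)
```

The point is that the value observed is always the one at export time, so intermediate values set between two collection intervals are never sent.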
Michka Popoff
If I understand how Azure Insights works, it reads from the "customMetrics" logs. So if I want to display the metrics in the insight graphs, I need the data there. Using "MessageData" sends the logs to the "traces" part in Azure.
Here is a screenshot of where I would like my metrics to be stored
Leighton Chen
@iMichka What insight graph are you referring to?
Michka Popoff
It's the "Dashboard" page from Azure
Michka Popoff
The graphs are built on top of the "Metrics" widgets on that page
For example, each time a sign-in transaction occurs on your app, you publish a metric to Azure Monitor with only a single measurement. So for a sign-in transaction that took 12 ms, the metric publication would be as follows ...
Marwan Sulaiman
@iMichka I've had the same struggle with Datadog (which expects the data to be non-monotonic); there's been a long discussion here and here:
I have a fork that resets the data on every Flush, which is what I'm using in production right now until either the PR is resolved or OpenTelemetry takes over: https://github.com/marwan-at-work/opencensus-go/commit/1380fae97d1e9d7e7d96593154699df45bfb1b7d#diff-2000ebb97830a1f0f1c5c4856a737f78R236
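The reset-on-flush workaround described above amounts to reporting deltas: the aggregation hands its accumulated total to the exporter and starts over, so a backend expecting non-monotonic data sees only what changed since the last export. A stdlib-only Python sketch of the idea (names are illustrative):

```python
class DeltaCounter:
    """Counter whose flush() hands back the total accumulated since
    the previous flush and then resets, so the backend receives a
    delta rather than a monotonically growing sum."""

    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n

    def flush(self):
        delta, self.total = self.total, 0
        return delta
```

For example, recording 3 and 2 then flushing yields 5, and a second flush with no new records yields 0.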
Michka Popoff
Thanks; looks like the same issue. I wrote down a workaround in my PR (https://github.com/census-instrumentation/opencensus-python/pull/820#issuecomment-558658769) which I am now using in production. This will do until OpenTelemetry is out. I'll come back to this when the first stable releases of OpenTelemetry are out and see what can be done within the new framework.
Mick Davies
Hi, I am new to OpenCensus and am trying to work out how I can trace jobs that initialise stream processing. I want to measure the cumulative times for the transformations of each element as it passes through the stream and relate this back to the initial job. Is there a way of doing this?
Mick Davies
I just posted the above to the mailing list
@pjanotti mentioned above that further development of tail sampling has moved to OpenTelemetry. Even if it skews the chart, is there a way to capture all traces with a span in error? Or capture all and drop them if there is no error (sample before exporting)? I am new to OpenCensus and was previously using OpenTracing. Thanks for your help.
Pirogov Alexey
Hello. I successfully use OpenCensus with Zipkin and am considering moving my metrics to OpenCensus as well. Is there a way to integrate Grafana (or another metrics dashboard) with Zipkin (or another distributed tracing system)? So I can click on a metric's max value and navigate to the particular TraceId in Zipkin?
I understand that metrics and traces use different data models on disk, but it would be amazing to have such linkage.
Ahmed ElRefaey Hamouda
Hello guys, is there any way in OpenCensus to delete a stat/metric? (go-client)
Mark Grand
I am a newbie trying to get the OpenCensus Collector to run. It immediately exits after logging this message:

```
{"level":"warn","ts":1575637911.8632534,"caller":"collector/processors.go:279","msg":"Nothing to do: no processor was enabled. Shutting down."}
```

What is it trying to tell me?
The command line I am using is:

```
./occollector_linux --config config.yaml --health-check-http-port 8008 --metrics-port 8888 --zpages-http-port 55679
```
The contents of config.yaml are:

```yaml
log-level: INFO

    port: 8080

    enable_traces: true

    # One of "proxy" or "direct_ingestion" is required
      Host: metrics-dev.ioq1.homedepot.com
      MetricsPort: 4001
      TricingPort: 2878
#      DistributionPort: wf_distribution_port  # number

  port: 55679
  disabled: false
```
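That warning means the collector found no enabled processor (in occollector, the configured exporters act as the processors), so it has nothing to do and shuts down. A minimal sketch of a config that enables the OpenCensus receiver plus one queued exporter might look like the following; the key names follow the opencensus-service README of that era and may differ between versions, and the Jaeger endpoint is a placeholder:

```yaml
receivers:
  opencensus:
    address: "127.0.0.1:55678"

queued-exporters:
  jaeger-example:          # any name works here
    num-workers: 4
    queue-size: 100
    retry-on-failure: true
    sender-type: jaeger-thrift-http
    jaeger-thrift-http:
      collector-endpoint: "http://jaeger-collector:14268/api/traces"
      timeout: 5s
```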
Jonathan Giles
Hi folks. Curious if there are any plans to include a module name in the released jar files for OpenCensus?
Zhan Su
Hi, I wonder whether OpenCensus has plans to make the tracing library automatically export latency/QPS/etc. metrics. When we add a new span, we almost always also care about the latency distribution. We can get the distribution from the trace storage backend, but it is biased because the sampling decision is made in upstream services. We now have to write a wrapper that creates both a tracer and a scoped timer to measure the time and report it to the stats library. Then we pay twice the cost of measuring the same time span, plus a little extra coding work. I personally care about C++, Python, and Go.
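A wrapper like the one described can read the clock once and feed both sinks, avoiding the double measurement. A minimal stdlib-only Python sketch (the two lists stand in for a real tracer and stats recorder; all names are illustrative):

```python
import time
from contextlib import contextmanager

# Illustrative sinks standing in for a real tracer and stats recorder.
finished_spans = []        # (name, duration) pairs from "ended spans"
latency_measurements = []  # latency metric records

@contextmanager
def traced_timer(name):
    """Read the clock once and report the elapsed time to both the
    trace sink and the stats sink, instead of timing the span twice."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        finished_spans.append((name, elapsed))
        latency_measurements.append((name, elapsed))

with traced_timer("fetch_user"):
    time.sleep(0.01)  # the work being traced and timed
```

Both sinks receive the identical duration from a single `perf_counter` read, which is the cost saving the message asks about.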
Joseph Hajduk
Has anyone had any luck using opencensus-java or Scala inside managed Cloud Run? I am getting DEADLINE_EXCEEDED gRPC errors when the exporter does onBatchExport with the simplest example.
Leighton Chen
Hi, how do I join the census-instrumentation org? https://github.com/orgs/census-instrumentation/people
Tristan Lohman
I'm trying to get started with OpenCensus and Istio, although I'm pretty new to Istio. I would like to use the OpenCensus agent, as we wish to forward to multiple services. There is a ton of documentation on getting envoy to send its generated spans on to other services. We are also generating spans inside our application to trace key operations. What is the recommended deployment scenario here? Do we deploy the agent inside the pod next to the envoy sidecar, or as a DaemonSet? If so, how do we configure our application to point at that collector, and how do we tell envoy to allow us to talk to it (my impression being that the standard envoy deployment intercepts ALL traffic from your application, including traffic meant for the agent; is this true)?
Hi, I have plugged the Opencensus lib into our Node.js application in order to report to Zipkin
performed a call to our service_1 that is exposed externally
service_1 called another external service_2
as a result got 1 Trace with 2 Spans in the Zipkin UI
but the Spans have the same name even though there were 2 calls to different services
Is there a chance to tune remote service naming?
Does the oc-collector currently export metrics? Based on processors.go:255 it appears not... ?
Bogdan Drutu
it supports everything that oc-collector supported
thanks for the pointer @bogdandrutu - out of curiosity, what's the difference between the open-telemetry and open-census projects?
Bogdan Drutu
open-telemetry is a continuation of the opencensus + opentracing projects. see https://medium.com/opentracing/a-roadmap-to-convergence-b074e5815289
Maximiliano Felice
Hi guys, how are you? I'm trying to extend the Aggregation interface of Opencensus in Java to support Max/Min value reporting, is there any recommended way to do this?
Yuri Grinshteyn
Hi, folks - I'm trying to use OpenTelemetry for tracing in Go. In OpenCensus, I was able to do this:

```go
// create root span
ctx, rootspan := trace.StartSpan(context.Background(), "incoming call")
defer rootspan.End()
// create child span for backend call
ctx, childspan := trace.StartSpan(ctx, "call to backend")
defer childspan.End()
```
That let me explicitly create child spans. But in OT, I can't figure out how to do that. From the sample, I have this:

```go
err := tr.WithSpan(ctx, "incoming call", // root span here
	func(ctx context.Context) error {
		// create backend request
		req, _ := http.NewRequest("GET", backendAddr, nil)
		// inject context
		ctx, req = httptrace.W3C(ctx, req)
		httptrace.Inject(ctx, req)
```
Any suggestions?
Yuri Grinshteyn
My question was answered in #opentelemetry-go
Hi everyone! I've got a question about a metric I'd like to implement in my project. I'm using lastValue aggregation, which works great except that it "persists" the last recorded value perpetually. Are there other aggregations that fit the bill? I just want something exactly like lastValue that "resets" the metric when nothing is sent.
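One way to get the behavior asked for above is a last-value aggregation with a staleness window: if no new measurement arrives within some age limit, the collection step reports nothing instead of repeating the old value. A stdlib-only Python sketch (this `ExpiringLastValue` class is a hypothetical helper, not part of the OpenCensus API):

```python
import time

class ExpiringLastValue:
    """Last-value aggregation that reports None once no new
    measurement has arrived within max_age seconds, instead of
    repeating the stale value forever."""

    def __init__(self, max_age, clock=time.monotonic):
        self.max_age = max_age
        self.clock = clock  # injectable for testing
        self._value = None
        self._stamp = None

    def record(self, value):
        self._value = value
        self._stamp = self.clock()

    def collect(self):
        # Exporter calls this each interval; stale values are dropped.
        if self._stamp is None or self.clock() - self._stamp > self.max_age:
            return None
        return self._value
```

With `max_age` set to roughly one export interval, the metric effectively "resets" as soon as the application stops sending.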
Hi, apologies if this question was asked already. I am having trouble getting OpenCensus to increment a counter in Stackdriver Monitoring. Here is a snippet of the code:

```java
public void record(MetricsType metrics, String appId, Map<MetricsTagKey, String> tags) {

    Assert.notNull(metrics, "You have to pick a metrics type");
    Assert.notEmpty(tags, "Tags can't be null");

    logger.info("==> Recording metrics '{}' for app ID '{}' with tag values {}", metrics.toString(), appId, tags.toString());

    TagContextBuilder tcb = tagger.emptyBuilder();

    // add appId
    tcb.putLocal(TagKey.create(MetricsTagKey.APP_ID.key), TagValue.create(appId));

    // add all other tags
    tags.forEach((k, v) -> {
        tcb.putLocal(TagKey.create(k.key), TagValue.create(v));
    });

    TagContext ctx = tcb.build();

    if (metrics.equals(MetricsType.EMAILS_SENT_COUNT)) {
        try (Scope sc = tagger.withTagContext(ctx)) {
            STATS_RECORDER.newMeasureMap().put(emailsSentCount, 1L).record();
        }
    } else {
        // log metrics not supported
    }
}
```
This is how the measure is initialized:

```java
emailsSentCount = MeasureLong.create(monitoringConfig.getProperty(EMAILS_SENT_NAME),

List<TagKey> emailsSentKeys = new ArrayList<>();
for (MetricsTagKey v: MetricsTagKey.values()) {

View emailsSentCountView = View.create(Name.create(monitoringConfig.getProperty(EMAILS_SENT_NAME)),
```
Regardless of how many times I call this function, the stats metric is stuck at 1 in Stackdriver. What am I doing wrong?