ET
@evantorrie
Hi: I'm trying to use a Distribution aggregation for server latency in a View that gets exported, but I don't understand what the min and max values in the distribution really represent. Are they the min and max since the server started? How does one compute a min/max over a time period rather than the absolute min/max since server startup?
Paulo Janotti
@pjanotti
@jmichalek132 further development of tail sampling is happening on OpenTelemetry Collector, see open-telemetry/opentelemetry-collector#408 for example.
Renan Nunes Steinck
@RenanAlonkin
Hi, I have a question. I've found a solution to my own issue (#816) and I want to create a PR, but I'm having trouble pushing my branch to create the PR. Do I need some kind of special permission?
Carlos Macasaet
@l0s
Hi, may I request a review of #1990? It resolves issue #1400 and all automated checks pass.
Michka Popoff
@iMichka
Hi. I am trying to use OpenCensus metrics in Python and to export that data to Azure. The example is working (https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure#metrics), but I have trouble understanding how OpenCensus is supposed to work. The metrics are exported to the Azure logs, but multiple times, depending on the time.sleep() value. I also played with the export_interval parameter of the metrics exporter and reduced it: if I set it to 1 second and have a time.sleep of 60 seconds, my metrics are inserted 60 times into the Azure logs.
What I want to achieve is to avoid using time.sleep() and to record a single entry for my value. I just want to send some values to the "customMetrics" logs in Azure from time to time, in a sort of one-shot mode. Is that possible with OpenCensus, and if so, how can I achieve it?
Michka Popoff
@iMichka
I was using this SDK before: https://github.com/microsoft/ApplicationInsights-Python, which is now archived and which recommends using OpenCensus. It worked like this:
from applicationinsights import TelemetryClient

tc = TelemetryClient('<YOUR INSTRUMENTATION KEY GOES HERE>')
tc.track_metric('My Metric', 42)
tc.flush()
Basically, I am trying to write the equivalent code with OpenCensus.
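(For reference, a rough sketch of what the OpenCensus equivalent could look like, assuming the opencensus-ext-azure metrics exporter. The measure name, view name and connection string placeholder are illustrative, and depending on the SDK version the exporter may take instrumentation_key instead of connection_string.)

import time

from opencensus.ext.azure import metrics_exporter
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_map as tag_map_module

# A measure plus a view that keeps only the last recorded value.
MY_MEASURE = measure_module.MeasureInt("my_metric", "example metric", "1")
MY_VIEW = view_module.View("My Metric", "example metric", [],
                           MY_MEASURE, aggregation_module.LastValueAggregation())

view_manager = stats_module.stats.view_manager
view_manager.register_view(MY_VIEW)

# The exporter starts a background thread that sends on a fixed interval.
exporter = metrics_exporter.new_metrics_exporter(
    connection_string='InstrumentationKey=<your key>')
view_manager.register_exporter(exporter)

# Rough analogue of tc.track_metric('My Metric', 42).
mmap = stats_module.stats.stats_recorder.new_measurement_map()
mmap.measure_int_put(MY_MEASURE, 42)
mmap.record(tag_map_module.TagMap())

# There is no flush() equivalent: the process has to stay alive until the
# exporter's background thread fires at least once (every 15 s by default).
time.sleep(20)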
Marwan Sulaiman
@marwan-at-work
Does anyone here know how I can hook up the OpenCensus-Go runmetrics to a view.View? Or, more accurately, to an exporter that only has ExportView and does not use the new metric package? Thanks!
Ram Thiru
@ramthi

> I am trying to use OpenCensus metrics in Python and to export that data to Azure. The example is working, but I have trouble understanding how OpenCensus is supposed to work. The metrics are exported to the Azure logs, but multiple times, depending on the time.sleep() value.

@iMichka have you had a chance to look at our documentation that describes how to instrument and send metrics? https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-python#metrics (the 3rd bullet in that section gives a full sample that will send a metric for your application).

Leighton Chen
@lzchen
@iMichka The SDK uses a background thread to export your metrics to Azure Monitor at a fixed interval (export_interval, as you have found). Currently, the ability to export all metrics in the queue before the application finishes is not implemented. This is why you must use time.sleep to keep the application running until the export thread hits its next cycle.
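(A minimal sketch of the interaction described here, assuming the export_interval option mentioned earlier in the thread; the values are illustrative.)

import time

from opencensus.ext.azure import metrics_exporter

# The background thread exports view data on a fixed cadence, here every 5 s.
exporter = metrics_exporter.new_metrics_exporter(
    export_interval=5.0,
    connection_string='InstrumentationKey=<your key>')

# ... register views and record measurements as usual ...

# The process must outlive at least one export cycle, otherwise nothing is
# sent; sleeping for many cycles re-exports the current view data on each
# cycle, which is the duplication observed above.
time.sleep(30)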
Michka Popoff
@iMichka
@ramthi @lzchen Yes, I have read the doc and played around with the examples. Yesterday I got a better understanding of the background thread implementation: when used with uwsgi and forked processes, that thread is not copied into the workers (unless you tell uwsgi to do so). See https://github.com/census-instrumentation/opencensus-python/issues/660#issuecomment-555055824. The uwsgi issue is solved for me, as I found out how to fork the threads correctly.
The time.sleep() is actually not needed in my case: as long as my Flask server is up and running, the logs and metrics are sent by OpenCensus. So I got rid of the time.sleep(), and it works nicely.
The last unsolved issue is that I have not found a way to send a metric only once: as long as I let the app run, it keeps sending data to Azure continuously.
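(One possible way to handle the fork issue, a hypothetical sketch rather than what was actually done here: create the exporter only after uwsgi has forked the workers, e.g. via the postfork hook. uwsgidecorators is only importable when running under uwsgi, uwsgi needs threads enabled (--enable-threads) so the exporter's background thread can run, and the connection string placeholder is illustrative.)

from uwsgidecorators import postfork


@postfork
def start_azure_metrics_exporter():
    # Runs once in every forked worker, so each process gets its own
    # exporter and its own background export thread.
    from opencensus.ext.azure import metrics_exporter
    from opencensus.stats import stats as stats_module

    exporter = metrics_exporter.new_metrics_exporter(
        connection_string='InstrumentationKey=<your key>')
    stats_module.stats.view_manager.register_exporter(exporter)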
Michka Popoff
@iMichka
One solution would be to set the export_interval value to something really big, but that does not feel right.
Maybe I can .close() / .flush() the AzureLogHandler. I am reading the source code to try to understand how to stop the queue once the metrics/logs have been sent.
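(For the log side, a minimal sketch of that flush/close idea, assuming the AzureLogHandler from opencensus-ext-azure; the logger name and message are illustrative, and depending on the SDK version the handler may take instrumentation_key instead of connection_string.)

import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
handler = AzureLogHandler(connection_string='InstrumentationKey=<your key>')
logger.addHandler(handler)

logger.warning('one-off event')

# AzureLogHandler is a standard logging.Handler, so it can be flushed and
# closed explicitly once the one-off records have been handed over.
handler.flush()
handler.close()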
Michka Popoff
@iMichka
It looks like it does stop sending data at some point, but I have duplicated log entries. So it is mostly working as expected; maybe it's me doing something silly.
Michka Popoff
@iMichka
OK, the duplicated log entries issue is solved; it was on my side. The AzureLogHandler works as it should in Python.
The metrics question remains: it looks like the exporter thread here is never stopped, which explains why it endlessly sends metrics to Azure: https://github.com/census-instrumentation/opencensus-python/blob/e9129b74681df204c027702360be20fc0eea7f76/contrib/opencensus-ext-azure/opencensus/ext/azure/metrics_exporter/__init__.py#L235-L237
Leighton Chen
@lzchen
@iMichka Thanks for taking the time to investigate this issue! Yes, the metrics exporter was designed to continuously poll and send telemetry.
Michka Popoff
@iMichka
@lzchen So there is no way right now to send a metric only once? The same metric value is sent over and over. Maybe this is just the OpenCensus philosophy, and it does not fit with how the previous Azure Python library worked; it will require big architecture changes in some projects for people migrating to OpenCensus.
I could use a really small export_interval, like 0.1 seconds, but that would fill the logs quite fast. I planned to use LastValueAggregation to always get a unique value, but if my measured values arrive more often than every 0.1 seconds, I will miss measurements, as only the last one is kept. An alternative is to use the count/sum/histogram aggregations, but I wanted to let Azure Insights do the aggregation for me and store the raw metrics (without aggregation) in my Azure logs.
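(A minimal sketch of the LastValueAggregation trade-off described above, with illustrative names.)

from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import view as view_module

RESPONSE_TIME_MS = measure_module.MeasureFloat(
    "response_time", "time taken by one operation", "ms")

# Only the most recent value recorded within an export cycle survives, so
# measurements arriving faster than export_interval are silently dropped.
LAST_RESPONSE_TIME_VIEW = view_module.View(
    "last_response_time", "last recorded response time", [],
    RESPONSE_TIME_MS, aggregation_module.LastValueAggregation())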
Michka Popoff
@iMichka
I think that what I might need to do is hijack the AzureLogHandler to write to "MetricData" instead of "MessageData", by providing a new envelope for metrics within the AzureLogHandler, because the AzureLogHandler does exactly what I need with its FIFO queue.
Michka Popoff
@iMichka
Would this make sense? Would you accept a pull request for this? Do you think it is doable? (I am still reading the code, so I need to understand how difficult that change would be.)
Michka Popoff
@iMichka
On the other hand, I have the feeling that all changes should go into the OpenTelemetry repo, as that is the new library being worked on right now.
Michka Popoff
@iMichka
I just made a quick and dirty hack: census-instrumentation/opencensus-python#820. I opened a pull request because it is easier to discuss around code. Let me know whether this could fit somewhere, and how it fits within the OpenTelemetry project.
Leighton Chen
@lzchen
@iMichka Thanks for taking the time to look into this. Yes, the points you raised above are correct: there is no way to send a metric value only once. In the OpenTelemetry project we have the concept of a Gauge metric, whose value you set using a callback. However, the exporting mechanism will probably not change (exporting at a fixed interval). Your use case does not seem to be very "metric-like": you do not want pre-aggregation AND you are not sending the data points over a fixed interval. May I suggest simply using "MessageData" instead of "MetricData"? Is there some reason you need the MetricData type?
Michka Popoff
@iMichka
If I understand how Azure Insights works, it reads from the "customMetrics" logs. So if I want to display the metrics in the Insights graphs, I need the data there. Using "MessageData" sends the logs to the "traces" part in Azure.
[screenshot: insights.png]
Here is a screenshot of where I would like my metrics to be stored.
Leighton Chen
@lzchen
@iMichka What insight graph are you referring to?
Michka Popoff
@iMichka
[screenshot: Dasboard.jpg]
It's the "Dashboard" page from Azure.
[screenshot: Metrics.PNG]
Michka Popoff
@iMichka
The graphs are built on top of the "Metrics" widgets on that page.
> For example, each time a sign-in transaction occurs on your app, you publish a metric to Azure Monitor with only a single measurement. So for a sign-in transaction that took 12 ms, the metric publication would be as follows ...
Marwan Sulaiman
@marwan-at-work
@iMichka I've had the same struggle with Datadog (which expects the data to be non-monotonic); there's been a long discussion here and here:
census-instrumentation/opencensus-go#1181
census-instrumentation/opencensus-go#1182
I have a fork that resets the data on every Flush, which is what I'm using in production right now until either the PR is resolved or OpenTelemetry takes over: https://github.com/marwan-at-work/opencensus-go/commit/1380fae97d1e9d7e7d96593154699df45bfb1b7d#diff-2000ebb97830a1f0f1c5c4856a737f78R236
Michka Popoff
@iMichka
Thanks, that looks like the same issue. I wrote down a workaround in my PR (https://github.com/census-instrumentation/opencensus-python/pull/820#issuecomment-558658769), which I am now using in production. This will do until OpenTelemetry is out. I'll come back to this when the first stable releases of OpenTelemetry are available and see what can be done within the new framework.
Mick Davies
@MickDavies
Hi, I am new to OpenCensus and am trying to work out how I can trace jobs that initialise stream processing. I want to measure the cumulative times for the transformations of each element as it passes through the stream and relate this back to the initial job. Is there a way of doing this?
Mick Davies
@MickDavies
I just posted the above to the mailing list.
bsr203
@bsr203
@pjanotti mentioned above that further development of tail sampling has moved to OpenTelemetry. Even if it skews the chart, is there a way to capture all traces that contain a span in error? Or to capture all and drop those without errors (sample before exporting)? I am new to OpenCensus and was previously using OpenTracing. Thanks for your help.
Pirogov Alexey
@AlexeyPirogov
Hello. I successfully use OpenCensus with Zipkin and am considering moving my metrics to OpenCensus as well. Is there a way to integrate Grafana (or another metrics dashboard) with Zipkin (or another distributed tracing backend), so that I can click on a metric's max value and navigate to the particular TraceId in Zipkin?
I understand that metrics and traces use different data models for storage, but it would be amazing to have such a linkage.
Ahmed ElRefaey Hamouda
@montaro
Hello guys, is there any way in OpenCensus to delete a stat/metric? (go client)
Mark Grand
@mgrand
I am a newbie trying to get the OpenCensus Collector to run. It immediately exits after logging this message:
{"level":"warn","ts":1575637911.8632534,"caller":"collector/processors.go:279","msg":"Nothing to do: no processor was enabled. Shutting down."}
What is it trying to tell me?
The command line I am using is
./occollector_linux --config config.yaml --health-check-http-port 8008 --metrics-port 8888 --zpages-http-port 55679
The contents of config.yaml are:
log-level: INFO

receivers:
  opencensus:
    port: 8080

exporters:
  wavefront:
    enable_traces: true

    # One of "proxy" or "direct_ingestion" is required
    proxy:
      Host: metrics-dev.ioq1.homedepot.com
      MetricsPort: 4001
      TracingPort: 2878
#      DistributionPort: wf_distribution_port  # number

zpages:
  port: 55679
  disabled: false
Jonathan Giles
@JonathanGiles
Hi folks. Curious whether there are any plans to include a module name in the released jar files for OpenCensus?
Zhan Su
@z-oo
Hi, I wonder whether OpenCensus has plans to make the tracing library automatically export latency/QPS/etc. metrics. When we add a new span, we almost always also care about the latency distribution. We can get the distribution from the trace storage backend, but it is biased because the sampling decision is made in upstream services. Right now we have to write a wrapper that creates both a span and a scoped timer that measures the time and reports it to the stats library, so we pay the cost of measuring the same time span twice, plus a bit of extra coding work. I personally care about C++, Python and Go.
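(A sketch of the double-instrumentation wrapper described above, in Python with illustrative names; this is not an official OpenCensus helper, just the kind of glue code the message refers to.)

import time
from contextlib import contextmanager

from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_map as tag_map_module
from opencensus.trace.tracer import Tracer

SPAN_LATENCY_MS = measure_module.MeasureFloat(
    "span_latency", "latency of a traced operation", "ms")
SPAN_LATENCY_VIEW = view_module.View(
    "span_latency_distribution", "latency distribution of traced operations", [],
    SPAN_LATENCY_MS,
    aggregation_module.DistributionAggregation([25, 50, 100, 200, 400, 800]))
stats_module.stats.view_manager.register_view(SPAN_LATENCY_VIEW)


@contextmanager
def timed_span(tracer: Tracer, name: str):
    # Opens a span AND records its wall-clock duration as a stats
    # measurement, so the latency distribution is unaffected by sampling.
    start = time.time()
    with tracer.span(name=name):
        yield
    mmap = stats_module.stats.stats_recorder.new_measurement_map()
    mmap.measure_float_put(SPAN_LATENCY_MS, (time.time() - start) * 1000.0)
    mmap.record(tag_map_module.TagMap())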