    matrixbot
    @matrixbot
    mrkeen Hi there,
    I am experimenting with Node Exporter dashboards for a few hosts. Unfortunately we have different OS versions on the targets, and I see data missing from them here and there.
    Is there a good approach to align the different exporter versions that come from different repositories?
    mrkeen Should I deploy the latest available version manually?
    Marco Boffi
    @marco.boffi.eni_gitlab
    table.png

    I have two sets of Prometheus metrics related to two different event types but with the same meaning (event start_time, event stop_time, event status [ok/ko]). The event-name label is different (eventA, eventB).
    I made a table panel merging the two sets of metrics and renamed them with the same labels, but the columns remain separate for the two sets of metrics (see image in attachment).
    In a previous version of Grafana (6.x), it seemed to me that columns with the same name were merged together.

    How can I obtain the correct view?

    matrixbot
    @matrixbot
    Siavash Hello, we upgraded Loki to 2.0.0 but we're hitting this bug: https://github.com/golang/go/wiki/LinuxKernelSignalVectorBug
    Siavash > docker run --rm -it grafana/loki:2.0.0 --version
    loki, version 2.0.0 (branch: HEAD, revision: 6978ee5d7)
    build user: root@009109478b32
    build date: 2020-10-26T15:53:09Z
    go version: go1.14.2
    platform: linux/amd64
    Siavash I see a change on master to switch to Go 1.15, are you going to release 2.0.1 by any chance?
    matrixbot
    @matrixbot
    Siavash I ended up building tag v2.0.0 using Golang 1.15.5
    Siavash But we hit another panic issue
    Siavash I didn't expect 2.0.0 to be so unstable after 21 days with no patch releases
    Siavash Anyway building the image from master seems to work so far.
    alexgiesa
    @Alex77g
    Hello everyone, I wonder if it's possible to send Grafana alerts to a REST API after checking a response. Thanks in advance
    usamaB
    @usamaB

    Hi guys,
    I want to filter an app based on log level.
    I'm doing something like

    {app="some-service"} |= `"level":"ERROR"`

    because
    {app="some-service", level="ERROR"}
    doesn't work.

    The level label is part of the parsed fields. How do I filter on that?
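    One approach, assuming the log lines are JSON (the parser stage used and the field name are assumptions based on the message above): in Loki 2.0 you can extract labels at query time with a parser stage and then filter on the extracted label:

    ```logql
    # extract JSON fields as labels at query time, then filter on the extracted label
    {app="some-service"} | json | level="ERROR"

    # or, for logfmt-formatted logs:
    {app="some-service"} | logfmt | level="ERROR"
    ```

    The stream-selector form `{app="some-service", level="ERROR"}` only works when `level` is an indexed stream label attached at ingestion time (e.g. by promtail), not a field parsed out of the log line.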

    usamaB
    @usamaB

    Another question, migrating a query from Datadog to Prometheus.
    I'm unable to find Datadog's pct_change functionality in Prometheus.

    I tried using (A-B)/B or A/B, but they don't give appropriate results.

    e.g.

    DD
    pct_change(avg(last_1h),last_1h):avg:default.burrow_kafka_consumer_lag_total{consumer_group IN (connect-.*)} by {consumer_group} > 300
    
    PromQL
    avg(avg_over_time(burrow_kafka_consumer_lag_total{consumer_group=~"connect-.*"}[1h]) / (avg_over_time(burrow_kafka_consumer_lag_total{consumer_group=~"connect-.*"}[1h] offset 1h))) by (consumer_group) > 3

    But it's not correct: the alert fires very often. Any leads on this would be appreciated
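    One thing worth checking: Datadog's pct_change returns a percentage, so a threshold of 300 there corresponds to A/B > 4, while the PromQL above alerts on A/B > 3 (a 200% increase), which alone would make it fire more often. A sketch of a closer translation, reusing the metric and label names from the query above:

    ```promql
    # percent change of the 1h average vs. the previous 1h window, per consumer group
    (
        avg by (consumer_group) (avg_over_time(burrow_kafka_consumer_lag_total{consumer_group=~"connect-.*"}[1h]))
      -
        avg by (consumer_group) (avg_over_time(burrow_kafka_consumer_lag_total{consumer_group=~"connect-.*"}[1h] offset 1h))
    )
    /
        avg by (consumer_group) (avg_over_time(burrow_kafka_consumer_lag_total{consumer_group=~"connect-.*"}[1h] offset 1h))
    * 100
    > 300
    ```

    This aggregates each window by consumer_group first and then takes the percent change, rather than averaging per-series ratios as in the original query.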

    Stephen Kelly
    @steveire
    Hi - I have a simple database table with (name, timestamp, value) columns. I can create a panel with a query for each distinct name and add it to a dashboard, but can I somehow have Grafana determine the names and generate the panels, instead of me creating/maintaining them directly?
    It looks like one way to do it is to generate JSON and somehow set the generated JSON as the model for a dashboard. Is there some easier way, or some way that doesn't require me to run such a JSON generation task in an external cron job?
    Stephen Kelly
    @steveire
    I figured out how to make the repeat feature work. I still don't know how I did it!
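    For anyone finding this later, the usual recipe (table and variable names here are hypothetical): create a dashboard query variable backed by the SQL datasource, e.g.

    ```sql
    -- query for a dashboard variable named "series_name"
    SELECT DISTINCT name FROM metrics;
    ```

    then enable Multi-value / Include All on the variable, set the panel's "Repeat" option to series_name, and filter the panel's query with `WHERE name = '$series_name'`. Grafana then generates one panel per value, with no external JSON generation needed.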
    kalidasya
    @kalidasya
    hey all, is this a good place to ask about plugin development? I am working on a plugin and I want to bundle a datasource and a panel together. From the templates and examples it's clear how to do them separately, but can I combine them? It seems to me the app plugin might be for something different. In the examples, module.ts exports an instance of the plugin (and even calls some functions on it, like setConfigEditor and setQueryEditor, or setPanelOptions for the panel). Does anyone know how to do this properly?
    And kudos for the dev env, it was amazingly easy to set up. If I were not such a noob in TypeScript I guess it would have been smooth sailing :)
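    One pattern worth checking (a sketch only; all names below are hypothetical, and whether nested plugins fit your case depends on your Grafana version): an app plugin can declare bundled plugins in its plugin.json via `includes`, with the datasource and panel living in subdirectories of the app and keeping their own module.ts each:

    ```json
    {
      "type": "app",
      "name": "My App",
      "id": "myorg-myapp-app",
      "includes": [
        { "type": "datasource", "name": "My Datasource" },
        { "type": "panel", "name": "My Panel" }
      ]
    }
    ```

    With that layout, each bundled plugin still exports its own plugin instance (with setConfigEditor/setQueryEditor or setPanelOptions), and the app just ties them together for distribution.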
    Dasanko
    @Dasanko
    Hi, does anyone know where the requirements for the Graph panel can be found? My Loki logs display just fine in the Logs panel, but most other panels say "No data" or "Unable to graph data", even though the data is received OK in the Query inspector.
    Dasanko
    @Dasanko
    I found the answer to my question here: grafana/grafana#28259
    deknos82
    @deknos82:matrix.org
    [m]
    Hi, is Loki built for developers to use while debugging their microservices? Perhaps when they look for new errors or information in the output of their microservices?
    fabio-silva
    @fabio-silva
    Is it normal that unit tests spike CPU usage to the top?
    Alex
    @moijes12
    Hi, I have a problem which I was hoping someone could help me with. I am trying to deploy the kube-prometheus-stack. I have added it as a dependency in the Chart.yaml and installed the chart. I have also configured an ingress rule to route the /grafana/?(.*) path to the service solutions-helm-grafana on port 80. However, when I try to open /grafana/ in the browser, it returns a 404 after redirecting to /login. What templates do I need to add to deploy this successfully?
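    A common cause of the 404-after-redirect is Grafana not knowing it is served from a sub-path. Assuming the chart passes grafana.ini settings the usual kube-prometheus-stack way (the sub-path itself is taken from the message above), a values.yaml sketch:

    ```yaml
    grafana:
      grafana.ini:
        server:
          # tell Grafana it lives under /grafana/ behind the ingress
          root_url: "%(protocol)s://%(domain)s/grafana/"
          serve_from_sub_path: true
    ```

    With serve_from_sub_path enabled, Grafana's own redirects (including /login) keep the /grafana/ prefix instead of dropping to the root path.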
    nighOz
    @nigh:thenigh.com
    [m]

    Hi all, a question on visualizing Prometheus histograms. I've gotten it working to where I can see the histogram using a heatmap; however, Prometheus histogram buckets are cumulative, so instead of seeing a sparse distribution of where my data falls, it turns into a bar chart.

    Any chance I'm missing something?
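    The cumulative shape is expected: each Prometheus `_bucket` series counts observations less than or equal to its `le` bound. Grafana's Prometheus data source can de-accumulate the buckets when the query's Format option is set to "Heatmap". A sketch (the metric name is hypothetical):

    ```promql
    # per-bucket rate, aggregated by upper bound;
    # set Format = Heatmap and Legend = {{le}} in the query options
    sum by (le) (rate(http_request_duration_seconds_bucket[5m]))
    ```

    With Format = Heatmap, Grafana subtracts adjacent buckets so the heatmap shows the actual distribution per bucket rather than the running totals.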

    nighOz
    @nigh:thenigh.com
    [m]
    After creating a 'SparseHistogram' type in prom-client
    CromFr (Thibaut CHARLES)
    @CromFr:matrix.org
    [m]
    Hi o/
    I'm an RPG server admin and I'm currently using Grafana + Graphite to store time series for monitoring the Linux server load, connected player count, and other things. I'm also using Graphite to monitor the duration of certain tasks, like completing a dungeon; however, those time series have very few data points (around 1/day) and feel like a poor fit for Graphite.
    I'm relatively new to Grafana and related databases. I think Loki or Elasticsearch would be a better fit for those "event-triggered" data points. Am I correct?
    mephisto
    @mephisto:mephis.to
    [m]
    What do you mean by event-triggered? I would differentiate more between textual and numeric values you want to save.
    I would throw log files at Elasticsearch or Loki, and mainly numeric values at Graphite or InfluxDB
    (I personally prefer InfluxDB).
    If you want to extract only numeric values from those events, you could preprocess them with Telegraf for a time-series DB, or with Logstash if you want to put them, tagged, into a document database like ES.
    I would have a look at Solr, too; ES got cheesy in terms of their license.
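    If the Telegraf route sounds interesting, a minimal sketch of that preprocessing step (the file path, grok pattern, and database name are all assumptions):

    ```toml
    # tail a log file, pull a numeric field out of each event line,
    # and write it to InfluxDB as a regular time series
    [[inputs.tail]]
      files = ["/var/log/game/events.log"]
      data_format = "grok"
      grok_patterns = ["%{NUMBER:duration_s:float}"]

    [[outputs.influxdb]]
      urls = ["http://localhost:8086"]
      database = "game"
    ```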
    mephisto
    @mephisto:mephis.to
    [m]
    And what triggers the event? Influx is well suited for push metrics.
    It would be perfect to feed it in via some kind of message broker like ZeroMQ or MQTT,
    then in real time via Telegraf to InfluxDB
    CromFr (Thibaut CHARLES)
    @CromFr:matrix.org
    [m]
    Data points are sent to Graphite by the game server (over UDP) when a player reaches the end of a dungeon, which isn't something that happens very often (a few times a week). Graphite pre-allocates the storage for the maximum number of data points (one every minute, depending on the configured policy) and fills it with null values, since I almost never send data points. This feels very inefficient, and I guess requesting graph data has to go through (or send) every null value.
    Also, in some cases I would like to push multiple data points within the same minute, and Graphite only keeps the last value received. I can probably configure Graphite to average the received values, but I think Graphite isn't the best tool for this job.
    For context, here's what the dungeon time series currently look like: https://stats.lcda-nwn2.fr/dashboard/snapshot/AJXTTPqyHPbAQV3FTB0R1Br2YjQvSNrn?orgId=0 It takes several seconds to display everything, and sometimes some time series don't load (I guess it times out while requesting data).
    CromFr (Thibaut CHARLES)
    @CromFr:matrix.org
    [m]
    How does InfluxDB store the metrics? I'll switch to it if it lets me store / process those "scarce" time series efficiently
    mephisto
    @mephisto:mephis.to
    [m]
    InfluxDB stores data schemaless. The "primary key" is always the timestamp (ns); then you have tag values and the normal field values. Tags are descriptive, like "datacenter", "rack", or "hostname"; the fields are the pure values.
    The data is just written when it comes.
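    Concretely, a single write in InfluxDB line protocol looks like this (the measurement, tag, and field names are hypothetical); sparse series cost nothing extra, since a point is only stored when it is actually written:

    ```
    # measurement,tag_set field_set timestamp(ns)
    dungeon_run,dungeon=crypt,player=alice duration_s=734 1605436800000000000
    ```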
    CromFr (Thibaut CHARLES)
    @CromFr:matrix.org
    [m]
    I'll give InfluxDB a shot then! Thanks a lot for the pointers! :)
    mephisto
    @mephisto:mephis.to
    [m]
    np