Now, it's not exactly the count of unique series stored across the M3DB cluster, because of the way data is stored in M3DB (time series are stored inside time-based blocks that you configure). So the Ticking graph shows you how many unique series there are in the most recent block we've persisted. We don't track the overall series count since it changes constantly as series get introduced and expire at the end of retention, and generally the number of active series is what determines resource usage, not the overall number of series.
We'll update the FAQ section of our docs to cover this going forward.
Thanks @martin-mao, very helpful answer! M3DB is an amazing project.
If you don't mind, another question: I found the Tagged RPC panel in Grafana. Can I use the simple formula WriteTaggedSuccess/second × 60 × 60 × 24 to get the approximate number of metrics ingested daily?
No, because 1) WriteTaggedSuccess is the number of batches we wrote, not the number of metrics
and 2) the number of metrics per batch is variable
You can take something like commitLogWrites per second and apply a moving average to it, but you can only use that to calculate the total number of metric datapoints ingested per second,
not the number of unique time series
got it, thank you!
Is there an HTTP batch POST endpoint for metric ingest?
@spd-code there is not, but if you use the M3DB Go client it will batch for you, or you can go via Prometheus remote write and the coordinator (which uses the M3DB Go client under the hood) will also batch
Sure, will try that. Is there any plan for a batch POST endpoint though? I think that would be useful.
If you're interested, you could submit a PR adding a batched endpoint; there's some prior work on an endpoint that writes a single JSON datapoint here, but it's not especially well fleshed out or performant, if you want to look at it