These are chat archives for ManageIQ/manageiq/performance

14th Oct 2017
Keenan Brock
@kbrock
Oct 14 2017 00:32
@Fryguy sorry - conflated. it is called near a cache cleanout - but think it is not the culprit
@dmetzger57 so the generic worker approaches 6GB RSS (5GB PSS) - and then does not go above that. I was under the impression that it kept growing
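For anyone reproducing those numbers: RSS and PSS for a worker can be read straight from /proc on Linux. A minimal sketch (not ManageIQ code), assuming a kernel new enough to have /proc/&lt;pid&gt;/smaps_rollup; the pid argument is whatever worker you're watching:

```ruby
# Minimal sketch: print RSS and PSS for a given pid from /proc (Linux only).
# smaps_rollup requires kernel 4.14+; on older kernels you'd sum /proc/<pid>/smaps.
pid = ARGV.fetch(0, Process.pid.to_s)

# Return the value (in kB) of the first line starting with `field` in `path`.
def kb_field(path, field)
  line = File.foreach(path).find { |l| l.start_with?(field) }
  line ? line.split[1].to_i : 0
end

rss_kb = kb_field("/proc/#{pid}/status", "VmRSS:")
pss_kb = kb_field("/proc/#{pid}/smaps_rollup", "Pss:")
puts "pid #{pid}: RSS #{rss_kb / 1024} MiB, PSS #{pss_kb / 1024} MiB"
```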
Dennis Metzger
@dmetzger57
Oct 14 2017 01:42
I haven’t seen anyone with a Generic configured with more than a 4GB threshold.
So you’re running one and it grows to a stable 5GB+?
what the heck is it holding that takes 5GB of memory?
Jason Frey
@Fryguy
Oct 14 2017 01:45
5,000,000,000 bytes
:trollface:
@dmetzger57 did we ever get the customer's evm server size?
Dennis Metzger
@dmetzger57
Oct 14 2017 01:46
No, kept getting “the answer is coming”
Jason Frey
@Fryguy
Oct 14 2017 01:46
:/
Dennis Metzger
@dmetzger57
Oct 14 2017 01:46
so is Christmas
I’ve not seen the Generic worker bloat unless Metrics Collection / Processing was enabled. I wonder what aspect of metrics is making the poor Generic worker collateral damage.
Jason Frey
@Fryguy
Oct 14 2017 01:52
that's ye olde perf_capture_timer
the scheduler puts perf capture timer on the queue for the ems_metrics_coordinator role
and that runs on the generic worker
only other thing it could be is purging, but that's all in SQL, so I can't see how purging can bloat the memory
well, purging pulls back the ids, but there's no way that many ids come back :)
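For context, the routing Jason describes looks roughly like this - a sketch from memory, not the exact ManageIQ source (option names and the per-zone handling may differ) - showing why a message queued for the ems_metrics_coordinator role ends up on a Generic worker:

```ruby
# Rough sketch (not the exact ManageIQ code): the schedule worker puts a
# message on MiqQueue tagged with a server role rather than a specific worker,
# so any worker dequeuing for a server with that role active -- here a Generic
# worker on the server holding ems_metrics_coordinator -- will pick it up.
MiqQueue.put(
  :class_name  => "Metric::Capture",
  :method_name => "perf_capture_timer",
  :role        => "ems_metrics_coordinator",
  :zone        => zone.name  # hypothetical; the real code queues one per zone
)
```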
Dennis Metzger
@dmetzger57
Oct 14 2017 01:54
$5 on perf_capture_timer
Jason Frey
@Fryguy
Oct 14 2017 01:54
my money is on it too
Dennis Metzger
@dmetzger57
Oct 14 2017 01:58
once again, metrics for the win
Keenan Brock
@kbrock
Oct 14 2017 13:17
@Fryguy Does VmdbDatabase.capture_metrics_timer run only for metrics capture, or does it run on regular generic workers?
I have a list of jobs that were run. Curious which ones we think are culprits [gist]
Am curious if the metric purge ones are a little guilty too
There is a cache that drops the old objects down to 0. That one takes up a constant 1.5 million objects (not sure how much RAM). Then there are a bunch of live object spikes, which I think are just there because we're not running GC because there is so much extra space (no need to GC if there is a bunch of memory to spare - RSS is very high)
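One cheap way to sanity-check that theory (live-object spikes with little GC because RSS headroom is huge) is to snapshot GC.stat and ObjectSpace counts around a suspect job - a minimal sketch using only plain Ruby, nothing ManageIQ-specific:

```ruby
# Minimal sketch: snapshot live-object counts and GC activity before/after a
# suspect job, using only Ruby stdlib (GC.stat and ObjectSpace.count_objects).
def gc_snapshot(label)
  stat   = GC.stat
  counts = ObjectSpace.count_objects
  live   = counts[:TOTAL] - counts[:FREE]
  puts format("%-7s live_objects=%d major_gc=%d minor_gc=%d old_objects=%d",
              label, live, stat[:major_gc_count], stat[:minor_gc_count], stat[:old_objects])
end

gc_snapshot("before")
# ... run the suspect job here, e.g. the queued perf_capture_timer work ...
gc_snapshot("after")
```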
Jason Frey
@Fryguy
Oct 14 2017 16:52
VmdbDatabase.capture_metrics_timer is completely independent of C&U...it runs on a generic worker.

There is a cache that drops the old objects down to 0.

Have you identified which cache?