These are chat archives for ManageIQ/manageiq/performance

16th Nov 2015
Alex Krzos
@akrzos
Nov 16 2015 15:32
@kbrock on the scale setup I tuned it down to a single priority worker and no generic workers, and memory growth now seems to have stabilized around 6 GiB on the priority worker
Keenan Brock
@kbrock
Nov 16 2015 15:50
cool
Alex Krzos
@akrzos
Nov 16 2015 15:51
That's a lot of memory to burn on a single worker
Keenan Brock
@kbrock
Nov 16 2015 16:01
those power hungry workers
Jason Frey
@Fryguy
Nov 16 2015 16:17
@akrzos Can you trim the log down to the queue items that the priority worker is running?
might help narrow down what exactly is causing the priority worker spike
in fact, if you could chart the priority worker's memory over time, we can correlate it to which queue item was running when it spikes up
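For anyone reading the archive, a minimal Ruby sketch of that log trimming, assuming the evm.log delivery lines carry the worker PID and a MiqQueue deliver marker (the exact patterns are assumptions, so adjust the regexes to the appliance's log format):

```ruby
#!/usr/bin/env ruby
# Rough sketch: pull the queue-item delivery lines for a given worker PID out
# of evm.log so they can be lined up against a memory-over-time chart.
worker_pid = ARGV[0] or abort("usage: filter_queue_items.rb WORKER_PID [evm.log]")
log_path   = ARGV[1] || "log/evm.log"

File.foreach(log_path) do |line|
  # assumed layout: "[----] I, [2015-11-16T15:32:01.123456 #2345:abcd]  INFO -- : MIQ(MiqQueue#deliver) Message id: [42], ..."
  next unless line.include?("##{worker_pid}:")
  next unless line =~ /MiqQueue/i && line =~ /deliver/i
  timestamp = line[/\[(\d{4}-\d{2}-\d{2}T[\d:.]+)/, 1]
  puts "#{timestamp}  #{line[/MIQ\(.*/]}"
end
```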
Alex Krzos
@akrzos
Nov 16 2015 17:00
@Fryguy yes, that should be easy to do. It might actually be easier if I add instrumentation around memory usage after each message is processed
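A minimal sketch of what per-message instrumentation like that could look like; the wrapper and names here are hypothetical, not the actual ManageIQ code path:

```ruby
require "time"

# Log the worker's RSS right after each queue message is processed.
def rss_kib
  # Linux-only: read the resident set size for this process from /proc
  File.read("/proc/#{Process.pid}/status")[/^VmRSS:\s+(\d+)\s+kB/, 1].to_i
end

def with_memory_logging(message_id)
  yield
ensure
  puts "#{Time.now.utc.iso8601} message=#{message_id} rss_kib=#{rss_kib}"
end

# illustrative usage: with_memory_logging(msg.id) { deliver(msg) }
```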
I have Grafana displaying overall appliance memory and the sum of all Ruby processes' memory usage; however, individual worker memory usage won't be solved until I migrate the scale lab to 5.5, which puts the worker names in the proctitle
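A sketch of how per-worker samples could be scraped once the proctitle change lands, feeding whatever collector sits in front of Grafana; the "MIQ:" proctitle prefix and the ps field layout below are assumptions:

```ruby
# Group `ps` RSS output by the worker name in the process title.
samples = `ps -eo rss,args`.lines.map do |line|
  rss_kib, args = line.strip.split(" ", 2)
  worker = args.to_s[/MIQ:\s*(\S+)/, 1]
  worker && [worker, rss_kib.to_i]
end.compact

samples.group_by(&:first).each do |worker, rows|
  total_kib = rows.inject(0) { |sum, (_, kib)| sum + kib }
  puts "#{worker}: #{total_kib} KiB across #{rows.size} process(es)"
end
```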
Alex Krzos
@akrzos
Nov 16 2015 17:11
5.4.3.1-appliance_memory.png
5.5.0.9-beta2-appliance_memory.png
5.4.3.1 vs 5.5.0.9 for the 8-hour memory baseline
The rapid exiting of the rhevm collector workers and their memory usage both drop off after some time (~1 hr)
I assume the difference is due to when the delete or purge job for rhevm runs