I have an automate debugging tool called object_walker.
I've seen several cases (CFME 5.6.x) where a running object_walker pushes the Generic/Priority worker's process memory over its limit, and the worker is then terminated. That obviously hangs, and then kills, object_walker.
I've also had a user report the same issue to me on github (object_walker 'hanging').
Now I can understand that as it traverses the various associations it loads more objects into the automation engine, but I was wondering whether anything changed in 5.6 regarding memory? Maybe we just have slightly less headroom now between the steady running state and the default memory limit.
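To illustrate why a traversal like this grows memory steadily: a walker that records every visited object keeps the whole object graph reachable, so nothing it has touched can be garbage-collected. This is a minimal sketch, not object_walker's actual code — the `Node` class and its `associations` are invented for illustration:

```ruby
# Hypothetical stand-in for an Automate object with associations.
class Node
  attr_reader :name, :associations
  def initialize(name, associations = [])
    @name         = name
    @associations = associations
  end
end

# Naive recursive walk: every object visited is retained in 'seen'
# for the lifetime of the walk, so memory use grows with the graph.
def walk(object, seen = {})
  return seen if seen.key?(object.object_id)
  seen[object.object_id] = object
  object.associations.each { |assoc| walk(assoc, seen) }
  seen
end

leaves  = Array.new(1000) { |i| Node.new("leaf_#{i}") }
root    = Node.new("root", leaves)
visited = walk(root)
puts visited.size  # => 1001 — root plus every leaf stays live
```

The `seen` hash is what makes the walk terminate on cyclic associations, but it is also exactly what pins every loaded object in memory until the walk finishes.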
to your point re: memory thresholds... they're currently soft thresholds, where we let the worker exit gracefully after it's finished its current work... if a worker is constantly getting restarted after minimal work, that threshold is too low
there's some discussion to make these thresholds more strict and give a worker much less time to exit before we kill it... even if it's doing useful work
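The soft-threshold behaviour described above can be sketched roughly as follows — names and numbers here are invented for illustration, not the real MiqWorker implementation: the worker always completes its current unit of work, then checks its memory and exits gracefully once over the limit, rather than being killed mid-task.

```ruby
SOFT_LIMIT_BYTES = 350 * 1024 * 1024  # hypothetical 350 MiB soft threshold

@usage = 0

# Stand-in for reading the worker's RSS (e.g. from /proc on Linux).
def current_memory_bytes
  @usage
end

# Pretend each work item permanently costs this much memory.
def do_work(cost)
  @usage += cost
end

work_queue = Array.new(8) { 100 * 1024 * 1024 }  # items costing ~100 MiB each
processed  = 0
work_queue.each do |cost|
  do_work(cost)      # the unit of work always runs to completion...
  processed += 1
  # ...then the worker notices it is over the soft limit and exits gracefully.
  break if current_memory_bytes > SOFT_LIMIT_BYTES
end
puts processed  # => 4 — stops after the item that crossed the limit
```

The stricter variant under discussion would effectively shorten the window between the check failing and the worker being killed, so long-running work like a deep walk would be interrupted sooner.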
So, yeah, without it, you'd have to convert your walker script to something you can run in ruby/rails outside of automate to figure out what's blowing up the memory
but loading associations into memory while traversing a tree of objects will certainly consume memory... it might just be a handful of associations that you don't really need, but we won't know until we profile it
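One way to do the standalone profiling suggested above is to run the traversal outside Automate and count live objects before and after loading each association, so the expensive ones stand out. This is a hedged sketch — the `Record` class and association names are invented, and a real run would walk the actual Rails models instead:

```ruby
# Hypothetical record whose association materializes many small objects
# when loaded, mimicking an ActiveRecord association being traversed.
class Record
  def initialize(payload_count)
    @count = payload_count
  end

  def load_association
    @loaded ||= Array.new(@count) { |i| { id: i, data: "x" * 50 } }
  end
end

# Live-object count after a GC pass; coarse, but enough to compare costs.
def live_objects
  GC.start
  counts = ObjectSpace.count_objects
  counts[:TOTAL] - counts[:FREE]
end

costs = {}
{ small_assoc: Record.new(10), big_assoc: Record.new(10_000) }.each do |name, rec|
  before = live_objects
  rec.load_association
  costs[name] = live_objects - before
end
puts costs  # the heavy association shows a far larger delta
```

For finer detail, `ObjectSpace.memsize_of` (via `require 'objspace'`) can report byte sizes per object rather than raw counts.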