These are chat archives for ManageIQ/manageiq/performance

6th
Oct 2015
Matthew Draper
@matthewd
Oct 06 2015 15:28
So, can I help with this memory situation?
@akrzos reading through https://bugzilla.redhat.com/show_bug.cgi?id=1267697.. do we have absolute figures? "total memory used during/after refresh" seems more important than "additional memory used during/after refresh", but these numbers all sound like the latter.
Alex Krzos
@akrzos
Oct 06 2015 15:33
@matthewd I have them all
@matthewd The before is a snapshot of how much memory the rails console uses, which has changed dramatically between releases
I wrote the difference in the BZ so we could understand how much more the refresh code (or whatever code we run inside the benchmark) changes memory usage
Alex Krzos
@akrzos
Oct 06 2015 15:39
So for instance with the vmware-large provider, here are the measurements I collected that the BZ was written from:
2015-09-30 16:06:59,346 [I] Iteration: 0, Timing: 462.732221692, Process: 21526 (utils/perf.py:293)
2015-09-30 16:06:59,349 [I] RSS Memory start: 175947776, end: 1485434880, change: 1309487104 (utils/perf.py:295)
2015-09-30 16:06:59,352 [I] GC Count start: 35, end: 171, change: 136 (utils/perf.py:297)
2015-09-30 16:06:59,355 [I] RSS Mem Total(Console + Benchmark) Used: 1416.62109375 MiB (utils/perf.py:299)
2015-09-30 16:06:59,358 [I] RSS Mem Change(Benchmark) Used: 1248.82421875 MiB (utils/perf.py:300)
So total was 1416MiB, start was 167MiB, change was 1248MiB
I am in the middle of adjusting the framework to parse the start/end values and not just the change from now on
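A small sketch (generic Ruby, not the perf.py framework above) of how the MiB figures in those log lines derive from the raw RSS byte counts:

```ruby
# Derive the logged MiB values from the raw RSS byte counts above.
MIB = 1024.0 * 1024

rss_start = 175_947_776     # "RSS Memory start"
rss_end   = 1_485_434_880   # "RSS Memory end"

total_mib  = rss_end / MIB                # Console + Benchmark
change_mib = (rss_end - rss_start) / MIB  # Benchmark only

puts format("total: %.8f MiB, change: %.8f MiB", total_mib, change_mib)
```

This reproduces the 1416.62109375 / 1248.82421875 MiB figures in the log exactly.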
Matthew Draper
@matthewd
Oct 06 2015 15:42
Okay, thanks… and I guess we can assume the start value is about the same for all of them
Alex Krzos
@akrzos
Oct 06 2015 15:42
They should be. I ran another experiment for @kbrock to see how variable the rails console/runner memory usage is
Matthew Draper
@matthewd
Oct 06 2015 15:44
(my only concern about measuring just the diff, is that if we made some change that lowered startup memory by 10MB, say, but 5MB of that got lazy-loaded during the first run, we would appear to be doing worse, when we'd actually improved)
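Toy numbers (purely illustrative) for that concern: tracking only the diff can report a regression for a change that actually lowered total memory.

```ruby
# Old build vs a hypothetical improved build (all values in MiB).
before = { start: 170, end: 1420 }
# New build: startup is 10 MiB lighter, but 5 MiB of that now
# gets lazy-loaded during the first run instead.
after  = { start: 160, end: 1415 }

diff_before = before[:end] - before[:start]  # 1250
diff_after  = after[:end]  - after[:start]   # 1255 -> looks "worse" by diff
total_delta = after[:end]  - before[:end]    # -5   -> actually improved

puts "diff went #{diff_before} -> #{diff_after}, total changed by #{total_delta}"
```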
Alex Krzos
@akrzos
Oct 06 2015 15:44
You can also see the growth in runner/console memory between 5.3/5.4/5.5 with that
Joe Rafaniello
@jrafanie
Oct 06 2015 16:01
@akrzos Sorry, I forget, did you confirm that disabling the 2.2 generational GC via export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9 brought the rails console/runner memory totals closer to 5.4 numbers? It went from ~170 MB on master to ~135 MB with that exported for me
not that it's important, but it helps point out that most of the change in memory footprint is coming from the ruby generational GC
Alex Krzos
@akrzos
Oct 06 2015 16:02
@jrafanie I believe we ran the benchmark on vmware-small and didn't see much of any difference, but I'll re-run with that env variable right now on the 5.5
Joe Rafaniello
@jrafanie
Oct 06 2015 16:07
We can possibly delay load some of the translation files, possibly fix fog from loading the entire world on require, etc. That will decrease the console/runner/base worker memory footprint, but we'll still have the issue where high object allocation storms cause ruby 2.2 to keep asking for memory, which it never returns to the OS because full GCs aren't happening as often as before with 2.0/1.9.3... that's the tradeoff we're expecting with ruby 2.2

(my only concern about measuring just the diff, is that if we made some change that lowered startup memory by 10MB, say, but 5MB of that got lazy-loaded during the first run, we would appear to be doing worse, when we'd actually improved)

Totally agree... I think the days of disabling eager load / lazy loading are numbered

If we use processes, we need to fork and eager load nearly everything
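A standalone sketch of that allocation-storm effect (generic Ruby; exact numbers vary by version and machine): a burst of allocations grows the heap, and the pages grabbed from the OS are generally kept even after a full GC frees the objects.

```ruby
gc_runs_before = GC.count
pages_before   = GC.stat(:total_allocated_pages)

1_000_000.times { Array.new(10) }  # allocation storm: short-lived garbage
GC.start                           # force a full (major) GC to free it all

# The objects are gone, but pages already allocated from the OS typically
# remain with the process, so RSS stays near its peak.
puts "GC runs during storm: #{GC.count - gc_runs_before}"
puts "pages allocated:      #{GC.stat(:total_allocated_pages) - pages_before}"
```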
Matthew Draper
@matthewd
Oct 06 2015 16:13
Do we have a heap dump from somewhere deep inside a refresh?
Alex Krzos
@akrzos
Oct 06 2015 16:15
@jrafanie gotcha on the measurements, I will be tracking both start/end/diff for future measurements of RSS and virt memory
The start/end is there right now, just gleaned from my test framework's logs, but going forward I'm publishing it in a cleaner way
I would agree that we could blame the new GC if we saw the behavior reduced to near 5.4 measurements with that GC tuning
I'll try against a larger provider as well
looks like my 5.5.0.1 spare vm is dead, spinning up a 5.5.0.3 to try export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9
Dennis Metzger
@dmetzger57
Oct 06 2015 16:20
i don’t think we know enough about tweaking the 5.5 GC that we can rule it out because we’ve tried one configuration tweak. there are a large number of parameters, so the GC in my mind is still the big gorilla here. just my $.02
Alex Krzos
@akrzos
Oct 06 2015 16:22
Agreed. I'm also adding GC.stat before/after snapshots to my benchmarks; re-running with all those measurements gathered will help if we start adjusting all of those GC variables
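One generic way to take before/after GC.stat snapshots around a benchmarked block (a sketch, not the actual benchmark framework code):

```ruby
# Run a block and return its result plus the change in every numeric
# GC.stat counter across the block.
def with_gc_snapshot
  before = GC.stat
  result = yield
  after  = GC.stat
  delta  = after.map { |k, v| [k, v - before[k]] if v.is_a?(Integer) && before[k] }
                .compact.to_h
  [result, delta]
end

_, delta = with_gc_snapshot { 100_000.times { {} } }
puts delta[:total_allocated_objects]  # at least the 100k hashes we allocated
```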
Joe Rafaniello
@jrafanie
Oct 06 2015 16:23

I would agree that we could blame the new GC if we saw the behavior reduced to near 5.4 measurements with that GC tuning

If they're close to 5.4 measurements (for the same work) within a threshold of tens of MBs to maybe 150 MB, I agree...

Do we have a heap dump from somewhere deep inside a refresh?

@matthewd I don't think we've requested a heap dump from @akrzos AFAIK, although he has been playing with various things so maybe he's done a heap dump for "fun"

Alex Krzos
@akrzos
Oct 06 2015 16:27
I can work on getting a heap dump as well, preference on size of provider?
Joe Rafaniello
@jrafanie
Oct 06 2015 16:27

i don’t think we know enough about tweaking the 5.5 GC that we can rule it out because we’ve tried one configuration tweak. there are a large number of parameters, so the GC in my mind is still the big gorilla here. just my $.02

Yes, clearly this is one we've got to consider as an option, as the default initial heap allocation and growth are tuned for quick ruby scripts

Dennis Metzger
@dmetzger57
Oct 06 2015 16:27
@akrzos does your PTO start tomorrow or Thursday?
Alex Krzos
@akrzos
Oct 06 2015 16:28
@dmetzger57 tomorrow
Matthew Draper
@matthewd
Oct 06 2015 16:28
@akrzos even small is probably sufficient to show anything interesting, I think
Joe Rafaniello
@jrafanie
Oct 06 2015 16:29
@akrzos do you have a list of the memory thresholds you had to tweak to allow your tests to run without the workers being killed due to memory usage?
Dennis Metzger
@dmetzger57
Oct 06 2015 16:29
@akrzos can we get access to your environments (small/medium/large/immense) to test in while you're away?
Matthew Draper
@matthewd
Oct 06 2015 16:30
I guess we'd need to find a suitable place to actually do the dump, though… I think we want it late in the refresh — but before it's finished
I suppose GC.stat should confirm that
Alex Krzos
@akrzos
Oct 06 2015 16:33
@dmetzger57 absolutely, why don't I put together an email explaining access
@dmetzger57 for testing purposes, will you guys be connecting actual appliances or running code from your local machines? If it's actual appliances, I'd have to make sure you guys can access my RHEV environment, which hosts all the appliances
Alex Krzos
@akrzos
Oct 06 2015 16:43
Ok no noticeable difference with export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9 for console/runner, if anything it's more variable now
I'll check with a console to see what GC.stat says
to make sure export is working
or that the env is picking it up
Compare Columns K/L vs M/N for above in the gdoc comparing console/runner memory usage
Matthew Draper
@matthewd
Oct 06 2015 16:44
No noticeable difference compared to the other 5.5 runs? Or compared to 5.4?
diffs of diffs :confused:
Alex Krzos
@akrzos
Oct 06 2015 16:46
5.5 console with export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9:
[root@localhost vmdb]# bundle exec bin/rails c
Loading production environment (Rails 4.2.4)
irb(main):001:0> GC.stat
=> {:count=>65, :heap_allocated_pages=>2443, :heap_sorted_length=>2460, :heap_allocatable_pages=>0, :heap_available_slots=>995791, :heap_live_slots=>882087, :heap_free_slots=>113704, :heap_final_slots=>0, :heap_marked_slots=>214227, :heap_swept_slots=>420637, :heap_eden_pages=>2443, :heap_tomb_pages=>0, :total_allocated_pages=>2849, :total_freed_pages=>406, :total_allocated_objects=>6926159, :total_freed_objects=>6044072, :malloc_increase_bytes=>1167056, :malloc_increase_bytes_limit=>16777216, :minor_gc_count=>1, :major_gc_count=>64, :remembered_wb_unprotected_objects=>9625, :remembered_wb_unprotected_objects_limit=>19512, :old_objects=>194774, :old_objects_limit=>475983, :oldmalloc_increase_bytes=>1167504, :oldmalloc_increase_bytes_limit=>16777216}
note: :minor_gc_count=>1, :major_gc_count=>64
now let me unset
unset:
[root@localhost vmdb]# unset RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR
[root@localhost vmdb]# bundle exec bin/rails c
Loading production environment (Rails 4.2.4)
irb(main):001:0> GC.stat
=> {:count=>36, :heap_allocated_pages=>2126, :heap_sorted_length=>2460, :heap_allocatable_pages=>0, :heap_available_slots=>866572, :heap_live_slots=>866443, :heap_free_slots=>129, :heap_final_slots=>0, :heap_marked_slots=>571944, :heap_swept_slots=>199145, :heap_eden_pages=>2126, :heap_tomb_pages=>0, :total_allocated_pages=>2370, :total_freed_pages=>244, :total_allocated_objects=>6926216, :total_freed_objects=>6059773, :malloc_increase_bytes=>190704, :malloc_increase_bytes_limit=>27416782, :minor_gc_count=>26, :major_gc_count=>10, :remembered_wb_unprotected_objects=>21687, :remembered_wb_unprotected_objects_limit=>40076, :old_objects=>531013, :old_objects_limit=>895436, :oldmalloc_increase_bytes=>191152, :oldmalloc_increase_bytes_limit=>27318891}
note: :minor_gc_count=>26, :major_gc_count=>10,
so tunable is "working"
just not making a difference in any memory for console or runner
now if we want to add a larger workload with the same tuning I'm all for it
say, do a refresh of the medium or large sized provider with the GC tuning and measure memory/timing/GC.stat to see the difference
but given the above with the console/runner, it's suspicious that on an appliance I see absolutely no difference in memory utilization when forcing the GC to do full GC runs
or "major" gc runs
Matthew Draper
@matthewd
Oct 06 2015 16:53
I'm not sure I follow the theory that that variable is a magic "make GC do what we want" setting
If we really want a comparison, wouldn't we be better doing a run of master with ruby 2.0?
Joe Rafaniello
@jrafanie
Oct 06 2015 16:55
Interesting @akrzos RE: magic "make GC do what we want" results
Alex Krzos
@akrzos
Oct 06 2015 16:56
Interesting looking at the difference in number of allocated objects
Joe Rafaniello
@jrafanie
Oct 06 2015 16:56
I don't know that we could easily get master to work with 2.0.0 @matthewd, I think your heap dump suggestion would be a better use of @akrzos limited time
Alex Krzos
@akrzos
Oct 06 2015 16:56
oops, that was pages vs objects
not that significant
Matthew Draper
@matthewd
Oct 06 2015 16:57
@jrafanie weren't people still happily using 2.0 a couple of days ago?
Jason Frey
@Fryguy
Oct 06 2015 16:57
yeah, wouldn't one just have to remove the Ruby check in the Gemfile?
Joe Rafaniello
@jrafanie
Oct 06 2015 16:58
that and install it from source/rpm, that's something we can do easily... I think @kbrock tried the opposite the other day... 2.2 on 5.4
Jason Frey
@Fryguy
Oct 06 2015 16:58
oh i see
is this not an MIQ appliance?
(I ask because I'm concerned that SCL Ruby is not the same as "real" Ruby)
Joe Rafaniello
@jrafanie
Oct 06 2015 17:00
I'll do the 2.0 on master rails console/runner test here... no need to have @akrzos do that... I'd rather we get a good heap dump and instructions on how to use his environment ;-)
Alex Krzos
@akrzos
Oct 06 2015 17:01
Everything I test on is an appliance
Joe Rafaniello
@jrafanie
Oct 06 2015 17:01
upstream appliance or 5.5?
Alex Krzos
@akrzos
Oct 06 2015 17:01
depends, usually 5.5 or 5.4 or 5.3
not really 5.3 anymore
too old
Joe Rafaniello
@jrafanie
Oct 06 2015 17:02
but not upstream/manageiq appliance with ruby built from source
right?
Alex Krzos
@akrzos
Oct 06 2015 17:03
built from our release team
not from source
Joe Rafaniello
@jrafanie
Oct 06 2015 17:05
thanks... that's what @Fryguy was implying... ruby from ruby-lang.org or from RPM
Oleg Barenboim
@chessbyte
Oct 06 2015 17:05
@Fryguy if it were upstream appliances, I am hoping @akrzos would reference them by their version names
Matthew Draper
@matthewd
Oct 06 2015 17:05
I do still think life would be simpler, for testing theories etc, with an upstream appliance
Alex Krzos
@akrzos
Oct 06 2015 17:07
@chessbyte on my upstream appliances it can be difficult to make sure I am as close to dev, since VERSION == "master" in the vmdb directory, so I usually grab a new master and mark it by date in my environment
also I leverage QE's framework, so staying as close to dev on upstream can mean that breaks and I might have to try and find a fix before they will find the issue, though usually I can ping them on IRC and get a fix fast
Oleg Barenboim
@chessbyte
Oct 06 2015 17:08
@akrzos we probably should be clear on terminology for all these appliances so that everyone is on the same page
Dennis Metzger
@dmetzger57
Oct 06 2015 17:13
@akrzos i believe we can run test / modified code on your existing appliances and not have to deploy new appliances in your environment. So ssh and http access to the appliances should get us what we need (and knowing the lay of the land). If anyone knows / thinks differently, please chat up :smile:
Alex Krzos
@akrzos
Oct 06 2015 17:13
All miq master testing I've done recently has only been to test @dmetzger57's patch, and all of the memory utilization benchmarking thus far has been 5.5 compared to 5.4
Joe Rafaniello
@jrafanie
Oct 06 2015 17:23
FYI, preliminary rails runner testing with ruby 2.0 on master indicates it's pretty close to 2.2 memory usage (on osx)
for i in `seq 1 10`; do bundle exec bin/rails r 'require "miq-process"; puts RUBY_VERSION.to_s + " " + MiqProcess.processInfo()[:memory_usage].to_s' | tee -a runner_2_0_0_master_memory.out; done

2.0.0 180170752
2.0.0 176508928
2.0.0 165875712
2.0.0 181821440
2.0.0 164585472
2.0.0 178896896
2.0.0 183861248
2.0.0 180228096
2.0.0 167464960
2.0.0 174133248
for i in `seq 1 10`; do bundle exec bin/rails r 'require "miq-process"; puts RUBY_VERSION.to_s + " " + MiqProcess.processInfo()[:memory_usage].to_s' | tee -a runner_2_2_3_master_memory.out; done

2.2.3 184647680
2.2.3 176271360
2.2.3 172199936
2.2.3 176349184
2.2.3 174579712
2.2.3 179593216
2.2.3 170369024
2.2.3 178270208
2.2.3 172056576
2.2.3 167092224
Jason Frey
@Fryguy
Oct 06 2015 17:25
I'm just wondering if we have the same memory "issues" on an upstream appliance
or if this is downstream specific
it may dictate how we go about solving it, and/or determining the next place to look
Alex Krzos
@akrzos
Oct 06 2015 17:26
hmm, perhaps i should set up a 3rd appliance (master) and connect it to the same environment as the 5.4/5.5 appliances and track worker memory utilization in addition
Matthew Draper
@matthewd
Oct 06 2015 17:27
@jrafanie isn't that entirely against our expectations? :confused:
Jason Frey
@Fryguy
Oct 06 2015 17:28
yeah, that's not what i expected
Dennis Metzger
@dmetzger57
Oct 06 2015 17:28
base sizes are similar, what we don’t know is how GC under our load behaves between the versions, true?
Jason Frey
@Fryguy
Oct 06 2015 17:28
5.4 is Ruby 2.0, right?
Matthew Draper
@matthewd
Oct 06 2015 17:30
@dmetzger57 false (as I understood it)… it was my understanding that 5.5's base size was much higher than 5.4's, and we were just ignoring that as a write-off
Joe Rafaniello
@jrafanie
Oct 06 2015 17:30
@matthewd yes, I was under the impression that some of the increased memory usage of the base rails app, runner, was because of the generational GC but apparently not much at all
Alex Krzos
@akrzos
Oct 06 2015 17:32
@jrafanie Can you verify you see a similar number of major/minor GC counts (Ruby 2.2) vs my numbers above?
perhaps try the export too
didn't you see memory utilization drop with export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9
Joe Rafaniello
@jrafanie
Oct 06 2015 17:34
:minor_gc_count=>26, :major_gc_count=>9 Ruby 2.2.3 with no magic limit factor
Yes, export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9, gives me much different numbers for ruby 2.2.3
2.2.3 147881984
2.2.3 148910080
2.2.3 150216704
Alex Krzos
@akrzos
Oct 06 2015 17:35
any way you can switch ruby to 2.2.2p95?
Joe Rafaniello
@jrafanie
Oct 06 2015 17:36
i can install it from source
I always forget to specify --disable-install-doc (no RDoc) when I install ruby from source
Matthew Draper
@matthewd
Oct 06 2015 17:40
2.2.2 seems to be about 5MB less than 2.2.3
Joe Rafaniello
@jrafanie
Oct 06 2015 17:41
@akrzos :minor_gc_count=>1, :major_gc_count=>170 Ruby 2.2.3 with export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9
Matthew Draper
@matthewd
Oct 06 2015 17:43
.. and now they seem the same, so maybe it was just sun spots
Joe Rafaniello
@jrafanie
Oct 06 2015 17:43
@matthewd GC is toying with you...
Alex Krzos
@akrzos
Oct 06 2015 17:45
@jrafanie looks like a lot more major gc than me however that is OS X ( w/ 2.2.3) vs RHEL7(w/ 2.2.2) so...
sun spots
performance hates them
Joe Rafaniello
@jrafanie
Oct 06 2015 17:45
2.2.2 172933120
2.2.2 172982272
2.2.2 182046720
2.2.2 169934848
2.2.2 181424128
2.2.2 174379008
2.2.2 177934336
2.2.2 175190016
2.2.2 looks close enough to 2.2.3 and 2.0.0 on master to me
Alex Krzos
@akrzos
Oct 06 2015 17:45
it is finally sunny in Raleigh, maybe that's why results are different.. :smile:
Matthew Draper
@matthewd
Oct 06 2015 17:46
Throwing a ManageIQ::Providers::Vmware::InfraManager::Vm in there is good for +40MB
Joe Rafaniello
@jrafanie
Oct 06 2015 17:47
Good to know @matthewd for forking workers ;-)
Jason Frey
@Fryguy
Oct 06 2015 17:51
The VMware manager is probably loading the VimMappingRegistry
We delay load that though, but I believe about 80% of it gets loaded eventually as you prime the broker
Matthew Draper
@matthewd
Oct 06 2015 17:52
@Fryguy it's loading all the managers, for a start
.. and all their respective nested classes :neutral_face:
Joe Rafaniello
@jrafanie
Oct 06 2015 17:54
FYI, it's not fog (fog (~> 1.29.0) -> fog (~> 2.0.0.pre.0)): just compared 5.4.z to master requiring fog, they're within a few MB of each other
Jason Frey
@Fryguy
Oct 06 2015 17:55
yeah, but the manager should delay load its corresponding bits...or at least I thought it did
even so, VimMappingRegistry is huge when fully loaded, but that too should be delayed
Oleg Barenboim
@chessbyte
Oct 06 2015 17:55
agree that we should delay load details about a provider
Matthew Draper
@matthewd
Oct 06 2015 17:56
.. until we switch to forking
Jason Frey
@Fryguy
Oct 06 2015 17:57
once we have forking we have 2 options...eager load EVERYTHING, or eager load Rails, and only preload in the server right before we fork for a worker. The latter requires more investigation but would give us the best of both worlds
Joe Rafaniello
@jrafanie
Oct 06 2015 17:57
FYI, this is what 5.4.z with 2.0.0 rails runner looks like on osx
2.0.0 143437824
2.0.0 142155776
2.0.0 137371648
2.0.0 140697600
2.0.0 140161024
2.0.0 132935680
2.0.0 144216064
2.0.0 132657152
2.0.0 139538432
2.0.0 136904704
Jason Frey
@Fryguy
Oct 06 2015 17:58
right now, it's preferable to delay load, but that makes undoing that later a lot harder
the two directions are orthogonal
Joe Rafaniello
@jrafanie
Oct 06 2015 17:59
yeah, and unless we really delay load for specific worker types, most workers may eventually load it anyway
(in general)
Jason Frey
@Fryguy
Oct 06 2015 17:59
yeah that's where moving it to loading in the server is beneficial
or rather, loading EVERYTHING up front might be beneficial
down side to loading everything up front is dev/test modes take longer to start
Oleg Barenboim
@chessbyte
Oct 06 2015 18:00
yeah, but loading VIM stuff that only a few workers actually need/use is senseless for the 90% of workers that don't need it
Matthew Draper
@matthewd
Oct 06 2015 18:00
Traditionally one eager loads in production only
Oleg Barenboim
@chessbyte
Oct 06 2015 18:01
I guess I don't understand this eager loading the world
Joe Rafaniello
@jrafanie
Oct 06 2015 18:01
So, we have roughly 30 MB more memory in rails runner on master; while interesting, that's not orders of magnitude greater than before... whereas the memory usage during refresh and possibly other workers is much worse
Matthew Draper
@matthewd
Oct 06 2015 18:02
@jrafanie are you in a position to try a (mini) refresh, to check master + 2.0 ?
Jason Frey
@Fryguy
Oct 06 2015 18:03
@chessbyte it's all about shared memory
Oleg Barenboim
@chessbyte
Oct 06 2015 18:03
@matthewd @jrafanie @Fryguy eager-loading the world seems so counter-intuitive to everything I ever learned in Computer Science, that I question the premise
Joe Rafaniello
@jrafanie
Oct 06 2015 18:03
@matthewd not comparable to @akrzos's environment
Matthew Draper
@matthewd
Oct 06 2015 18:03
With the above, it's sounding like we might be able to stop distrusting the GC, and move on to worrying about why we're allocating a bunch more stuff… but we should confirm first
Jason Frey
@Fryguy
Oct 06 2015 18:03
@chessbyte forking leverages shared memory
so if you load in the server, the workers get the stuff without having to load it themselves
Matthew Draper
@matthewd
Oct 06 2015 18:04
Doesn't need to be, if you have something you can run a master + 2.0 and a master + 2.2 against, I don't think
Jason Frey
@Fryguy
Oct 06 2015 18:04
so, it's 1 process holding the memory versus N processes
of course, if you never use VMWare, then loading it at all it mostly pointless
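The shared-memory point is easy to demonstrate with plain Process.fork (POSIX-only; a minimal sketch, not ManageIQ code):

```ruby
# Data loaded in the parent BEFORE forking...
BIG = Array.new(100_000) { |i| i.to_s }

pid = fork do
  # ...is visible in the child with no re-loading; the physical pages are
  # shared copy-on-write until one side writes to them.
  exit!(BIG.size == 100_000 ? 0 : 1)
end
Process.wait(pid)
puts $?.success?  # true: the child saw the parent's preloaded data
```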
Joe Rafaniello
@jrafanie
Oct 06 2015 18:04
yeah, I can do that @matthewd, let me grab a late bite to eat, then I'll try it
Matthew Draper
@matthewd
Oct 06 2015 18:04
I guess it depends how large the environment needs to be before it shows measurable memory usage
Oleg Barenboim
@chessbyte
Oct 06 2015 18:04
yes, and if this appliance is only configured to manage Amazon, what are the benefits of loading details of every other provider??
Jason Frey
@Fryguy
Oct 06 2015 18:05
none
but there is development overhead in figuring that out
versus maybe taking a 10MB hit on only 1 process
Oleg Barenboim
@chessbyte
Oct 06 2015 18:05
and as we go from 6 providers to 26?
Jason Frey
@Fryguy
Oct 06 2015 18:05
yes, definitely
Oleg Barenboim
@chessbyte
Oct 06 2015 18:05
I understand pre-loading common things maybe
Jason Frey
@Fryguy
Oct 06 2015 18:06
which is why preloading in the server before forking for a particular worker is the most optimal approach
so basically, don't preload, say, Amazon, until we are about to launch an amazon worker...then preload amazon stuff, then fork
Oleg Barenboim
@chessbyte
Oct 06 2015 18:06
but some appliances have no UI nor Web Service Workers - why load the UI stuff for them?
Jason Frey
@Fryguy
Oct 06 2015 18:07
because it's likely very little overhead for a single process for UI/Web Service
and to enable you tick a flag in environment.rb (more or less), so dev effort is nothing
Oleg Barenboim
@chessbyte
Oct 06 2015 18:07
I would like to understand that better before we go and pre-load the world and run out of memory
as we are already doing for some reason
Jason Frey
@Fryguy
Oct 06 2015 18:08
I'm curious what a fully-preloaded server entails memory-wise
Matthew Draper
@matthewd
Oct 06 2015 18:08
We're doing it now because we have 10000 processes, each loading a decent chunk of the code, and not sharing any of it
Oleg Barenboim
@chessbyte
Oct 06 2015 18:08
kind of odd, because I had thought that going up in Ruby versions would improve memory management
Alex Krzos
@akrzos
Oct 06 2015 18:08
@jrafanie I can send you environment details to try a refresh against
Matthew Draper
@matthewd
Oct 06 2015 18:08
(NB, slight exaggeration)
Oleg Barenboim
@chessbyte
Oct 06 2015 18:08
@matthewd we only have 10-12 processes per appliance (typically)
Matthew Draper
@matthewd
Oct 06 2015 18:09
Right… 10-12 * 50% of the code is a lot more than 1 * 100%
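Back-of-envelope version of that comparison (all numbers illustrative):

```ruby
code_mib = 200   # suppose the fully loaded app is ~200 MiB of code/data
workers  = 12
fraction = 0.5   # each worker lazily loads ~half of it on its own

unshared = workers * code_mib * fraction  # 1200 MiB duplicated across processes
shared   = 1 * code_mib                   # 200 MiB loaded once, CoW-shared

puts "unshared: #{unshared.to_i} MiB vs shared: #{shared} MiB"
```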
Oleg Barenboim
@chessbyte
Oct 06 2015 18:09
@matthewd if we are going to address performance, hyperbole does not help
Jason Frey
@Fryguy
Oct 06 2015 18:09
forking workers (without any preloading) is saving 50MB already
from just Ruby source, most likely
50 MB per process
Oleg Barenboim
@chessbyte
Oct 06 2015 18:10
that is interesting - but if I recall correctly, Rich's VIM stuff is VERY expensive memory-wise
Joe Rafaniello
@jrafanie
Oct 06 2015 18:10
@matthewd I think we're closer to 20-25 workers normally
Jason Frey
@Fryguy
Oct 06 2015 18:10
oh i thought it was 50..my bad
Oleg Barenboim
@chessbyte
Oct 06 2015 18:11
@jrafanie that's good to know - would be good to start collecting this information from actual customer logs
Joe Rafaniello
@jrafanie
Oct 06 2015 18:12
that's totally a guess based on prior QE reports... I think it's definitely 20-25 plus or minus 5
i have a bug report that might have more details, let me dig it up
Oleg Barenboim
@chessbyte
Oct 06 2015 18:13
what we actually need, gang, is a way to self-tune or minimally self-recommend our appliance
because a CF Admin has no clue, really
Jason Frey
@Fryguy
Oct 06 2015 18:15
I envision that the way it will work in the future is, in production, we enable Rails eager_load, which loads all models, controllers, etc in the server...in addition worker forking will have a method called preload. The preload method will require 'manageiq-providers-vmware' gem (for example) in the server along with any other pieces needed. Requiring the gem will preload the whole gem and all its dependencies.
Thus, we don't load heavy provider stuff until the last possible minute, but when we do need it, it's in the server for forking purposes
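A rough sketch of that preload-then-fork flow (class and method names are hypothetical, not the actual ManageIQ worker API):

```ruby
class DummyWorker
  def self.preload
    # in the real flow: require "manageiq-providers-vmware" (for example)
    # here, in the server process, right before forking
  end

  def run
    # worker main loop would go here
  end
end

def start_worker(worker_class)
  worker_class.preload                 # load provider code in the server...
  pid = fork { worker_class.new.run }  # ...then fork; pages are CoW-shared
  Process.wait(pid)
end

start_worker(DummyWorker)
puts $?.success?
```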
Matthew Draper
@matthewd
Oct 06 2015 18:15
:+1:
(Though I'd like to think we can eventually end up with some threading, to address some of the last-minute sharing, too… but that's going to depend on our e.g. VIM stuff being thread-safe.)
Joe Rafaniello
@jrafanie
Oct 06 2015 18:18
Right @Fryguy, and we'd only preload providers based on the worker type's preload method so workers would know what types of code they need... so we don't preload provider stuff for UI/webservice only appliance...
and we'd use the pre_fork mechanism to tell the worker type's we're about to start to preload in the server process before we fork
well, that's the idea at least
Jason Frey
@Fryguy
Oct 06 2015 18:40
@matthewd On the threading, are you thinking of sidekiq-like workers?
Matthew Draper
@matthewd
Oct 06 2015 18:45
@Fryguy quite possibly. Especially as we expand the set of providers, it seems unfortunate to be spinning up separate sets of processes for each one.
Jason Frey
@Fryguy
Oct 06 2015 18:45
My big concern there is that ultimately, at least for providers, we won't own them and threading is hard
Matthew Draper
@matthewd
Oct 06 2015 18:46
But depending on how easy it seems to get there, we also have the option to just introduce some more minor sharing to try to cut down that 20
Threading's only hard if you're sharing state
Jason Frey
@Fryguy
Oct 06 2015 18:46
My big concern there is that ultimately, at least for providers, we won't own them and threading is hard... Wouldn't want one provider to bring down the others... Process isolation is really nice for that
Matthew Draper
@matthewd
Oct 06 2015 18:46
(thus, the raw provider lib needs to be thread safe)
Jason Frey
@Fryguy
Oct 06 2015 18:47
With my experience on the bot, I had to wrap every external service "just in case", which is kind of annoying
Matthew Draper
@matthewd
Oct 06 2015 18:47
If a process that has write access to the database is dying, you're probably not in a good situation anyway ¯\_(ツ)_/¯
Jason Frey
@Fryguy
Oct 06 2015 18:48
I'm more concerned about something doing, say
...a blocking call and locking up a worker
Matthew Draper
@matthewd
Oct 06 2015 18:48
I don't see why… again, unless said service is using global state
As in a C bug?
Jason Frey
@Fryguy
Oct 06 2015 18:49
Could be a Ruby leak in a worker...one provider could bring down the shared sidekiq pool
At least with processes a system monitor could whack the process... Not sure what you'd do with single process
Matthew Draper
@matthewd
Oct 06 2015 18:50
You're still monitoring the process, and restarting it if it goes away
Jason Frey
@Fryguy
Oct 06 2015 18:51
This is all theoretical and devils advocate... I'm actually interested in thread based for certain operations, particularly core stuff, like reporting
Keenan Brock
@kbrock
Oct 06 2015 18:51
longer term thought: I wonder if splitting up our workers to not all depend upon each other may allow us to not load everything into every workspace
right now we have 1 master process, and fork from there - so all our code is together.
Jason Frey
@Fryguy
Oct 06 2015 18:52
How do you mean, @kbrock ?
Oh i see
Keenan Brock
@kbrock
Oct 06 2015 18:52
what if each worker were its own gem / own requirements?
not for every provider
Matthew Draper
@matthewd
Oct 06 2015 18:52
@kbrock isn't that exactly how it works at the moment?
(due to lazy loading)
Keenan Brock
@kbrock
Oct 06 2015 18:53
but the base process for all of them are all upon the same code base
our base for rails is pretty big
I'll need to think more on this. not a short term solution either.
Matthew Draper
@matthewd
Oct 06 2015 18:53
:-1: splitting the rails app
Keenan Brock
@kbrock
Oct 06 2015 18:54
I was looking at tests, and running the controller tests separate from the model tests may give us a win
Jason Frey
@Fryguy
Oct 06 2015 18:55
Well i wouldn't mind the UI being an engine
Keenan Brock
@kbrock
Oct 06 2015 18:55
@matthewd - is using engines in a single repo the same as "splitting up the rails app?"
Jason Frey
@Fryguy
Oct 06 2015 18:55
Curious how that would affect code climate
Keenan Brock
@kbrock
Oct 06 2015 18:55
lol
yay @Fryguy +1 for similar thoughts
Jason Frey
@Fryguy
Oct 06 2015 18:55
OK, gotta run... Be back in a long while
Matthew Draper
@matthewd
Oct 06 2015 18:56
@kbrock yes.
Keenan Brock
@kbrock
Oct 06 2015 18:57
@matthewd everything uses (race car) engines these days. Either that or ball bearings
Keenan Brock
@kbrock
Oct 06 2015 19:03
@matthewd and to confirm - you are discouraging using engines? (Even if they all live in a single git repo.)
Matthew Draper
@matthewd
Oct 06 2015 19:04
@kbrock in principle, yes, I'm discouraging using engines (plural) to spread out "the app"
I'm open to being convinced there are very strong lines of demarcation around a particular thing, or something… but as a rule, I really think it'd just create a dependency mess
.. noting this is in contrast to splitting the providers into engines, which I'm obviously in favour of
The difference there is in the lack of dependencies… the deps are all on and between the in-app base classes, and the provider engine is merely providing a concrete subclass
Keenan Brock
@kbrock
Oct 06 2015 19:14
wish the split up of the providers didn't use STI - that sure adds a layer of complexity with our loaders and all.
but I'm always harshing on STI, whether warranted or not.
Matthew Draper
@matthewd
Oct 06 2015 19:16
The loader is only necessary because we do MiddleClass.all
(where BottomClass < MiddleClass < TopClass)
And STI or not, we'd need some way to do that sort of filtering
Well, that and the fact we don't eager load
Keenan Brock
@kbrock
Oct 06 2015 19:18
k
Alex Krzos
@akrzos
Oct 06 2015 20:08
@matthewd Looking at the heap dumping on Sam Saffron's page (http://samsaffron.com/archive/2015/03/31/debugging-memory-leaks-in-ruby) Do you have a suggested start/stop or start/dump point for this, or should I simply attempt it in the rails console?
Matthew Draper
@matthewd
Oct 06 2015 20:09
I think we'd want to start tracing allocations at the top of environment.rb, so it covers everything
As for the stop.. that's trickier
I think there would be useful things to learn from a dump after a refresh had completed
But I think the most informative would be for us to find a way to grab a dump while it's in the middle of processing
.. maybe by hacking an if name_of_this_object == 'some particular thing'; dump now; end somewhere
Alex Krzos
@akrzos
Oct 06 2015 20:11
so could I simply wrap an EmsRefresh.refresh of a provider with start tracing and then dump when it completes?
I gotta run, but I'll give that a shot tonight so we should have some results for that on 5.5.0.3
Matthew Draper
@matthewd
Oct 06 2015 20:12
Put the start into the top of environment.rb
Alex Krzos
@akrzos
Oct 06 2015 20:12
gotcha, I'll try that
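A minimal sketch of the plan above (the dump path and the workload are stand-ins): start allocation tracing as early as possible - top of config/environment.rb per @matthewd - so every later allocation records its file and line, then dump the heap once the measured work (e.g. EmsRefresh.refresh) completes.

```ruby
require "objspace"

# Goes at the top of config/environment.rb so it covers everything.
ObjectSpace.trace_object_allocations_start

# ... app boots, then the refresh runs; a throwaway allocation
# stands in for the real workload here.
workload = Array.new(1_000) { "payload" }

# Dump the heap: one JSON object per line, including the file/line
# each traced object was allocated at.
File.open("/tmp/heap.dump", "w") do |f|
  ObjectSpace.dump_all(output: f)
end

puts ObjectSpace.allocation_sourceline(workload)  # traced allocation site
```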
Jason Frey
@Fryguy
Oct 06 2015 20:14
What about tapping a signal
Trapping
Matthew Draper
@matthewd
Oct 06 2015 20:16
@Fryguy hmm.. perhaps. Though that kinda shifts the problem to observing it and sending the signal at an appropriate time.
Jason Frey
@Fryguy
Oct 06 2015 20:18
Yeah, depends what kind of measurement we want
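A sketch of @Fryguy's signal idea (the choice of USR2 and the dump path are assumptions): trap a signal and write a heap dump whenever it arrives, so a dump can be triggered mid-refresh from outside the process with kill -USR2 <pid>.

```ruby
require "objspace"

# On USR2, write a heap dump for this process. Keep the handler small:
# Ruby trap context forbids some operations (e.g. Mutex#lock).
Signal.trap("USR2") do
  File.open("/tmp/heap-#{Process.pid}.dump", "w") do |f|
    ObjectSpace.dump_all(output: f)
  end
end

# From another terminal, at the moment of interest:
#   kill -USR2 <pid>
```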
Joe Rafaniello
@jrafanie
Oct 06 2015 20:43
@matthewd I finally got our tiny vmware environment refreshed with my mac on master for both 2.2.3 and 2.0.0
getting the second run with 2.2.3 for comparison
Joe Rafaniello
@jrafanie
Oct 06 2015 20:51
Console memory BEFORE/AFTER refresh
Ruby  RES       VIRT
2.0.0 195731456 2727268352
2.0.0 378908672 2933612544

Ruby  RES       VIRT
2.0.0 196034560 2734608384
2.0.0 400244736 2975555584

Ruby  RES       VIRT
2.2.3 220778496 2752442368
2.2.3 414134272 3044147200

Ruby  RES       VIRT
2.2.3 210280448 2751393792
2.2.3 401678336 3040940032
Going to try @akrzos's environment now, those numbers are pretty close
For reference...
2.2.3: Refreshing all targets...Completed in 49.056967s
Joe Rafaniello
@jrafanie
Oct 06 2015 20:56
2.0.0: Refreshing all targets...Completed in 63.526953s
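One way the RES/VIRT pairs above could be captured from inside the console (an assumption about the exact method; ps -o rss=,vsz= reports KiB on both Linux and macOS):

```ruby
# Snapshot this process's resident and virtual memory via ps.
def mem_snapshot
  rss_kb, vsz_kb = `ps -o rss=,vsz= -p #{Process.pid}`.split.map(&:to_i)
  { res: rss_kb * 1024, virt: vsz_kb * 1024 }  # bytes
end

before = mem_snapshot
# ... run the refresh being measured ...
after = mem_snapshot
printf("RES %d -> %d (%+.1f MiB)\n",
       before[:res], after[:res], (after[:res] - before[:res]) / 1048576.0)
```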
Joe Rafaniello
@jrafanie
Oct 06 2015 21:16
FYI, using @akrzos's medium environment, it looks promising: I'm seeing a disparity in the memory usage on 2.0.0 vs. 2.2.3 on master on my mac
numbers coming...
Joe Rafaniello
@jrafanie
Oct 06 2015 21:21
@matthewd ^ check out these numbers...
bin/rails console memory BEFORE/AFTER refresh
Ruby  RES       VIRT
2.0.0 197672960 2742734848
2.0.0 578842624 3168772096

Ruby  RES       VIRT
2.0.0 200744960 2729365504
2.0.0 587640832 3169820672

Ruby  RES       VIRT
2.2.3 212127744 2751393792
2.2.3 809127936 3832860672

Ruby  RES       VIRT
2.2.3 217649152 2730422272
2.2.3 811941888 3810775040

Ruby  RES       VIRT
2.2.3 224444416 2753490944
2.2.3 841822208 3829891072
note, we're trading memory for this:
2.0.0: Refreshing all targets...Completed in 129.948788s
2.0.0: Refreshing all targets...Completed in 123.5024s
2.2.3: Refreshing all targets...Completed in 97.633735s
2.2.3: Refreshing all targets...Completed in 84.944437s
2.2.3: Refreshing all targets...Completed in 84.18306s
Note, I blew away my database and did the necessary steps to create the provider with creds between each run
I'll try tracing object allocations during refresh now
Jason Frey
@Fryguy
Oct 06 2015 21:24
what are the multiple Rubys
like you have 2.2.3 3 times
is that just 3 runs?
Joe Rafaniello
@jrafanie
Oct 06 2015 21:25
yes
We start at about 210 MB in rails console on 2.2.3, go up to 800+ MB (if I did the math right)
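The math checks out; a quick conversion of the largest 2.2.3 RES value above shows decimal MB vs binary MiB accounts for any small discrepancy:

```ruby
# Convert the raw ps byte count to both unit conventions.
res_bytes = 809_127_936
puts res_bytes / 1_000_000.0  # => 809.127936    (decimal MB)
puts res_bytes / 1_048_576.0  # => 771.64453125  (binary MiB)
```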
Jason Frey
@Fryguy
Oct 06 2015 21:26
so that last chart is saying 2.0.0 hits ~580 MB and 2.2.3 hits ~815 MB ?
Joe Rafaniello
@jrafanie
Oct 06 2015 21:26
on 2.0.0, we start at roughly 200 MB and go up to 580 MB or so after refresh
yes
Jason Frey
@Fryguy
Oct 06 2015 21:27
I mean, I expect a memory bump, but that just feels crazy high
Joe Rafaniello
@jrafanie
Oct 06 2015 21:27
I did multiple runs to make sure it was consistent
take a look at my test of our 252.14 environment earlier... they were nearly identical memory numbers for 2.0.0 and 2.2.3
Jason Frey
@Fryguy
Oct 06 2015 21:27
can you do a ObjectSpace dump on both versions?
and we can drop them into @tenderlove's tool
Joe Rafaniello
@jrafanie
Oct 06 2015 21:28
yeah
Jason Frey
@Fryguy
Oct 06 2015 21:28
curious about the object proportions
Joe Rafaniello
@jrafanie
Oct 06 2015 21:28
well, I can do 2.2.3, can't do 2.0.0
Jason Frey
@Fryguy
Oct 06 2015 21:28
no?
damn
Joe Rafaniello
@jrafanie
Oct 06 2015 21:28
dump_all was introduced in 2.1
Jason Frey
@Fryguy
Oct 06 2015 21:28
then it's not useful
can you try Ruby 2.1?
Joe Rafaniello
@jrafanie
Oct 06 2015 21:29
sure
just need to bundle a bunch of gems
Jason Frey
@Fryguy
Oct 06 2015 21:29
:)
Joe Rafaniello
@jrafanie
Oct 06 2015 21:30
there's a chance that there are specific types of CIs we're adding in the medium environment that we aren't adding in the 252.14 environment that's causing the higher memory usage
Jason Frey
@Fryguy
Oct 06 2015 21:30

curious about the object proportions

I'm wondering if the bump in memory is proportional across object types, or if a particular object is growing more

if we're apples to apples, then it shouldn't matter, right?
Joe Rafaniello
@jrafanie
Oct 06 2015 21:30
remember the metadata we used to get that killed us? was it vnic info?
Jason Frey
@Fryguy
Oct 06 2015 21:31
HostStorage data
Joe Rafaniello
@jrafanie
Oct 06 2015 21:31
right
storages that a host could see, it had billions of things attached to it
ok, running with 2.1.7 now
weird
Ruby  RES       VIRT
2.1.7 248877056 2786353152
2.1.7 743108608 3535527936
Joe Rafaniello
@jrafanie
Oct 06 2015 21:37
for reference, 2.0.0 was doing 200 -> 580 MB, 2.2.3 was doing 210 -> 800+ MB
doing second 2.1.7 run to see if it's similar
ok, updated numbers, I'll try dumping the heap now
2.0.0
197672960 2742734848
578842624 3168772096

200744960 2729365504
587640832 3169820672
2.1.7
248877056 2786353152
743108608 3535527936

252518400 2782158848
733106176 3537625088
2.2.3
212127744 2751393792
809127936 3832860672

217649152 2730422272
811941888 3810775040

224444416 2753490944
841822208 3829891072
2.0.0: Refreshing all targets...Completed in 129.948788s
2.0.0: Refreshing all targets...Completed in 123.5024s
2.1.7: Refreshing all targets...Completed in 84.475841s
2.1.7: Refreshing all targets...Completed in 85.133016s
2.2.3: Refreshing all targets...Completed in 97.633735s
2.2.3: Refreshing all targets...Completed in 84.944437s
2.2.3: Refreshing all targets...Completed in 84.18306s
Jason Frey
@Fryguy
Oct 06 2015 21:41
2.1 looks better...I say we downgrade ;)
Joe Rafaniello
@jrafanie
Oct 06 2015 21:41
lol, yeah
higher initial memory, though
So, shall I dump the heap after the refresh?
Jason Frey
@Fryguy
Oct 06 2015 21:42
I'm not sure...it's not too far off from 2.2
I was hoping if it was closer to the 500MB we could see a difference
Joe Rafaniello
@jrafanie
Oct 06 2015 21:51
it might take a while to get a dump, segfaulted the first try
Joe Rafaniello
@jrafanie
Oct 06 2015 22:00
Ok, so I created a dump for 2.2.3, but I also grabbed the top allocations by line number... m is the estimated memory at that line number: the sum of memsize_of(object) for all objects allocated there
this is just refresh...
m:  76,731,422 | c: 417,321 | :
m:  33,686,400 | c: 145,200 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/VimService.rb:1197
m:  19,152,912 | c:  39,480 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/VimService.rb:1214
m:  14,514,128 | c:  60,494 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2257
m:  12,414,448 | c:  22,312 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:457
m:  12,358,143 | c:   8,082 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/autosave_association.rb:152
m:  10,423,217 | c:   7,113 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/core_ext/hash/transform_values.rb:9
m:  10,392,235 | c:   7,086 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute_methods/dirty.rb:165
m:   9,891,437 | c: 139,954 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/connection_adapters/postgresql/database_statements.rb:168
m:   7,738,585 | c: 105,200 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute.rb:5
m:   6,948,794 | c:  96,501 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute.rb:9
m:   6,920,706 | c:  15,484 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:274
m:   6,607,856 | c:  26,444 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2345
m:   6,600,622 | c: 157,203 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute_set/builder.rb:79
m:   6,326,738 | c:   6,299 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/result.rb:116
m:   6,314,400 | c:  14,100 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2252
m:   5,103,171 | c: 126,768 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/VimService.rb:1230
m:   4,647,376 | c:  10,678 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activemodel-4.2.4/lib/active_model/attribute_methods.rb:383
m:   3,457,681 | c:  86,227 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2239
m:   3,274,158 | c:   7,090 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/hash_with_indifferent_access.rb:84
m:   2,520,564 | c:  14,622 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/core.rb:547
m:   2,416,000 | c:   2,000 | /Users/joerafaniello/Code/manageiq/app/models/manageiq/providers/vmware/infra_manager/refresh_parser.rb:725
m:   2,128,216 | c:  43,465 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/type/string.rb:35
m:   2,089,695 | c:  19,847 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/associations.rb:162
m:   2,077,488 | c:  14,619 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/core.rb:554
m:   1,887,033 | c:   6,594 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/relation.rb:34
m:   1,849,581 | c:   1,015 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/core_ext/hash/keys.rb:10
m:   1,812,843 | c:   7,899 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/relation.rb:23
m:   1,739,508 | c:  43,329 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/result.rb:110
m:   1,686,192 | c:  11,222 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/VimService.rb:1222
m:   1,643,652 | c:   7,085 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activemodel-4.2.4/lib/active_model/dirty.rb:199
m:   1,507,186 | c:  37,516 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/type/string.rb:17
m:   1,487,618 | c:  18,535 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/li
c is how many objects were created at that line number
I think the : is anything created in C code
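A sketch of how the m/c table can be derived from a heap dump (the dump path is an assumption): every line of the dump is one JSON object, so group by "file:line", summing memsize into m and counting objects into c. Objects with no file were allocated in C code, which gives the bare ":" row.

```ruby
require "json"

# Aggregate a heap dump into allocation sites, largest memory first.
def allocation_sites(dump_path)
  sites = Hash.new { |h, k| h[k] = { m: 0, c: 0 } }
  File.foreach(dump_path) do |line|
    obj = JSON.parse(line)
    # Objects allocated in C code have no file/line, so their key is ":".
    key = "#{obj['file']}:#{obj['line']}"
    sites[key][:m] += obj["memsize"].to_i
    sites[key][:c] += 1
  end
  sites.sort_by { |_, v| -v[:m] }
end

if File.exist?("/tmp/heap.dump")
  allocation_sites("/tmp/heap.dump").first(30).each do |site, v|
    printf("m: %12d | c: %8d | %s\n", v[:m], v[:c], site)
  end
end
```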
Jason Frey
@Fryguy
Oct 06 2015 22:02
shocker: VMwareWebService/VimService.rb
Joe Rafaniello
@jrafanie
Oct 06 2015 22:02
This below is sorted by just the number of allocations at each line number
m:  76,731,422 | c: 417,321 | :
m:   6,600,622 | c: 157,203 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute_set/builder.rb:79
m:  33,686,400 | c: 145,200 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/VimService.rb:1197
m:   9,891,437 | c: 139,954 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/connection_adapters/postgresql/database_statements.rb:168
m:   5,103,171 | c: 126,768 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/VimService.rb:1230
m:   7,738,585 | c: 105,200 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute.rb:5
m:   6,948,794 | c:  96,501 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute.rb:9
m:   3,457,681 | c:  86,227 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2239
m:  14,514,128 | c:  60,494 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2257
m:   2,128,216 | c:  43,465 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/type/string.rb:35
m:   1,739,508 | c:  43,329 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/result.rb:110
m:  19,152,912 | c:  39,480 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/VimService.rb:1214
m:   1,507,186 | c:  37,516 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/type/string.rb:17
m:   1,480,287 | c:  36,963 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/hash_with_indifferent_access.rb:273
m:   1,361,216 | c:  34,011 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/core_ext/hash/keys.rb:12
m:   1,127,041 | c:  27,961 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2050
m:   6,607,856 | c:  26,444 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2345
m:  12,414,448 | c:  22,312 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:457
m:   2,089,695 | c:  19,847 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/associations.rb:162
m:     752,832 | c:  18,706 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/arel-6.0.3/lib/arel/table.rb:100
m:   1,487,618 | c:  18,535 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/attribute_methods.rb:359
m:   6,920,706 | c:  15,484 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activesupport-4.2.4/lib/active_support/dependencies.rb:274
m:     619,382 | c:  15,445 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/arel-6.0.3/lib/arel/predications.rb:16
m:   1,346,734 | c:  15,151 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/dynamic_matchers.rb:37
m:     767,115 | c:  15,060 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/core.rb:518
m:     639,253 | c:  14,934 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/psych-2.0.15/lib/psych.rb:376
m:   2,520,564 | c:  14,622 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/core.rb:547
m:     724,438 | c:  14,619 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/core.rb:546
m:   2,077,488 | c:  14,619 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/core.rb:554
m:     575,253 | c:  14,278 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/connection_adapters/abstract_adapter.rb:269
m:     575,761 | c:  14,261 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/activerecord-4.2.4/lib/active_record/associations/association_scope.rb:15
m:   6,314,400 | c:  14,100 | /Users/joerafaniello/Code/manageiq/gems/pending/VMwareWebService/MiqVimInventory.rb:2252
m:     530,923 | c:  13,250 | /Users/joerafaniello/.gem/ruby/2.2.3/gems/arel-6.0.3/lib/arel/table.rb:18
m:   1,159,760 | c:  13,178 | /Users/joeraf
Keenan Brock
@kbrock
Oct 06 2015 22:40
@jrafanie I thought the code I gave you would shorten the file names too. hmm
Joe Rafaniello
@jrafanie
Oct 06 2015 22:40
yeah, I didn't like that because I couldn't easily open them locally
command + click on the path in xterm opens the file in my editor
Keenan Brock
@kbrock
Oct 06 2015 22:42
export EDITOR=subl?
Joe Rafaniello
@jrafanie
Oct 06 2015 22:45
yup, laziness wins
command + click the FQ path opens it in sublime
Keenan Brock
@kbrock
Oct 06 2015 23:06
it tells you where, does it tell you what? are these mostly strings?
did we increase "default value" for some of these objects? that first line seems very high.
do we have a similar display for refreshing 5.4?
Keenan Brock
@kbrock
Oct 06 2015 23:13
VimString - huh. I'm assuming this is the same in 5.5 and 5.4
sure feels like json would be a big win. though changing that right now would probably be cheating.