These are chat archives for ManageIQ/manageiq/performance

15th
Oct 2015
Dennis Metzger
@dmetzger57
Oct 15 2015 01:54
A quick look at RSS sizes for processes running on idle 5.4 and Master appliances (see https://docs.google.com/a/redhat.com/spreadsheets/d/11qbzT_3g0pyOohqipYZ6p_9psj4FTHvO2f7fbPyTFFU/edit?usp=sharing) shows about 1 GB of growth in the Workers/Ruby/Postgres processes. These appliances were deployed but never had a provider added. All the per-process sizes are in the sheet.
Matthew Draper
@matthewd
Oct 15 2015 08:30
Okay, so evm_server growing by 100MB isn't good
But do we also have two more workers in play? What's up with that?
Dennis Metzger
@dmetzger57
Oct 15 2015 10:54
Don't know, saw that and made a note to walk the config this morning
Jason Frey
@Fryguy
Oct 15 2015 12:09
Automate worker is a new worker type and on by default
Totally forgot about that
There should only be one though, I thought. @gmcculloug?
Dennis Metzger
@dmetzger57
Oct 15 2015 12:11
nice, i just found that, came to Gitter and the answer is already here :smile:
Greg McCullough
@gmcculloug
Oct 15 2015 14:25
we created two to match the number of workers we had before, when it was using the priority worker. we could default to one if needed; since it is exposed in the UI it can easily be adjusted.
Matthew Draper
@matthewd
Oct 15 2015 14:30
I don't know about such things, but from the above, I'm reading "we have two high priority workers"
Maybe without automate going through it, there's enough reduction in expected-load that we could reduce that to one?
Jason Frey
@Fryguy
Oct 15 2015 14:35
I think the problem was that automate tends to be more long-running, whereas we try to reserve priority workers for faster things, particularly for the UI
so if automate was running, then the UI would slow down
we still want 2 priority, I think
technically we have 2 priority and 2 generic, but the generic will run priority work first if it sees it, so it's more like 4 priority workers
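For illustration, a minimal sketch of the dispatch idea described above (this is not MiqQueue's actual API or priority scheme; names and values are made up): a generic worker always takes the highest-priority message available, so priority work is never stuck behind generic work.

```ruby
# Illustrative only -- not the real MiqQueue dispatcher.
def next_message(queue)
  # lower :priority wins; oldest message first within the same priority
  queue.min_by { |msg| [msg[:priority], msg[:enqueued_at]] }
end

queue = [
  { id: 1, priority: 100, enqueued_at: Time.now - 60 }, # generic work, older
  { id: 2, priority: 20,  enqueued_at: Time.now },      # priority work, newer
]
next_message(queue) # => the priority message, even though the generic one is older
```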
matthewd @matthewd mumbles about threads ;)
Jason Frey
@Fryguy
Oct 15 2015 14:55
:D
Did you see mperham's post on Sidekiq 4?
Joe Rafaniello
@jrafanie
Oct 15 2015 15:12
@dmetzger57 did you have any more data on the per ruby process memory growth?
I collected some "idle" worker CSVs and used gruff to graph them... ETOOLAZYTOGOOGLESPREADSHEET
schedule worker grows at first, but then finally reaches a peak
generic and priority workers on the other hand are still growing...
Matthew Draper
@matthewd
Oct 15 2015 15:13
Slowly?
Joe Rafaniello
@jrafanie
Oct 15 2015 15:14
miq_generic_worker memory_usage.png
Matthew Draper
@matthewd
Oct 15 2015 15:14
They're going to be constantly allocating various strings, so slow growth is expected… and sufficiently-slow growth will mean it takes a while to hit a GC...
Jason Frey
@Fryguy
Oct 15 2015 15:15
do you know what happens at those jumps?
I assume it processes a queue item?
Joe Rafaniello
@jrafanie
Oct 15 2015 15:15
miq_generic_worker old_objects.png
miq_generic_worker major_gc_count minor_gc_count.png
miq_generic_worker heap_live_slots heap_free_slots.png
Matthew Draper
@matthewd
Oct 15 2015 15:15
Okay, I can't explain a 10 MB leap
Jason Frey
@Fryguy
Oct 15 2015 15:15
also for the first chart, can you chart the deltas? It's easier to see spikes that way
(the GC count chart... GC count is hard to read like that because the number is cumulative)
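A small sketch of the delta idea (the file and column names are guesses at what the worker CSVs contain, and the gruff calls are just one way to plot it):

```ruby
require "csv"
require "gruff"

# File/column names are assumptions -- adjust to the actual worker CSVs.
counts = CSV.read("miq_generic_worker.csv", headers: true)
            .map { |row| row["minor_gc_count"].to_i }

# Turn the cumulative counter into per-sample deltas so spikes stand out.
deltas = counts.each_cons(2).map { |prev, curr| curr - prev }

g = Gruff::Line.new
g.title = "minor GC per sample"
g.minimum_value = 0 # gruff won't anchor the y axis at zero on its own
g.data(:minor_gc_delta, deltas)
g.write("minor_gc_delta.png")
```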
Joe Rafaniello
@jrafanie
Oct 15 2015 15:16
should I push it to your analyzer so you can do a PR???
LOL
Jason Frey
@Fryguy
Oct 15 2015 15:17
:)
YES
those heap spikes are interesting
like the really sharp spikes
Joe Rafaniello
@jrafanie
Oct 15 2015 15:18
i lol'd that gruff does not start the y axis at zero, so keep that in mind...
Jason Frey
@Fryguy
Oct 15 2015 15:19
oh yeah...i remember that
Joe Rafaniello
@jrafanie
Oct 15 2015 15:25
yeah, i'll clean it up after I dig into the jumps
just refetched the csvs with a few more hours worth of data
Jason Frey
@Fryguy
Oct 15 2015 15:25
nice
Joe Rafaniello
@jrafanie
Oct 15 2015 15:26
miq_generic_worker major_gc_count minor_gc_count.png
miq_generic_worker memory_usage.png
miq_generic_worker heap_live_slots heap_free_slots.png
miq_generic_worker old_objects.png
compare that to the schedule worker (the only other worker that has anything to do)... schedule worker wakes up and puts work on the queue for the generic worker to do
miq_schedule_worker major_gc_count minor_gc_count.png
miq_schedule_worker old_objects.png
miq_schedule_worker heap_live_slots heap_free_slots.png
miq_schedule_worker memory_usage.png
Joe Rafaniello
@jrafanie
Oct 15 2015 15:34
those spikes from the generic worker are the VmdbDatabase.capture_metrics_timer @Fryguy
Dennis Metzger
@dmetzger57
Oct 15 2015 15:42
Here's a sheet that shows the RSS per worker in 5.4 and Master, before a provider is added and after. So base growth between releases and worker growth after adding a provider. https://docs.google.com/a/redhat.com/spreadsheets/d/1lKzXA2uWq8gGj8C_e-D50-W0pVKaSr9SnGgI9kl4yCs/edit?usp=sharing
Joe Rafaniello
@jrafanie
Oct 15 2015 15:45
@dmetzger57 how long is the worker running in the pre-rss and post-rss before you capture the numbers?
And this is with vanilla ruby 2.2.3 (no GC tuning), master using rails 4.2.3, right?
Dennis Metzger
@dmetzger57
Oct 15 2015 15:46
about 2 minutes before addition and 2 minutes after.
yes, master using rails 4.2.3, no GC tuning
Matthew Draper
@matthewd
Oct 15 2015 15:55
So worker changes are -1 EventHandler, +1 GenericWorker, +2 AutomateWorker
Joe Rafaniello
@jrafanie
Oct 15 2015 16:17
So, what's the game plan? After running for several hours, I'm not convinced there's a leak after looking at my graphs
it seems like the additional workers in master over 5.4, plus the "aggressively allocate more memory than needed" ruby 2.2.3 GC, are the biggest differences
we found 20 or so MB of new gems/things we load in master/5.5 for all workers that we didn't previously
tldr: I don't see any big wins remaining from the test scenarios we are currently running
tune the GC, dial back the automate worker count to 1
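For reference, these are the standard MRI 2.2 knobs involved in "tune the GC" (the values below are purely illustrative and would need to come out of appliance testing), plus a quick way to watch the effect from inside a worker:

```ruby
# Illustrative values only -- real numbers need to be measured on an appliance.
# Set in the environment the workers are started from:
#
#   RUBY_GC_HEAP_GROWTH_FACTOR=1.1        # grow the heap more gently than the default
#   RUBY_GC_HEAP_GROWTH_MAX_SLOTS=300000  # cap how many slots a single growth step adds
#   RUBY_GC_HEAP_INIT_SLOTS=600000        # start near the expected working set
#   RUBY_GC_OLDMALLOC_LIMIT_MAX=134217728 # bound malloc growth before a major GC
#
# Quick check from inside a worker that the heap isn't ballooning:
keys = [:heap_live_slots, :heap_free_slots, :old_objects,
        :major_gc_count, :minor_gc_count]
p GC.stat.select { |k, _| keys.include?(k) }
```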
Matthew Draper
@matthewd
Oct 15 2015 16:22
Is it accurate that we've gone up to 3 generic workers?
Joe Rafaniello
@jrafanie
Oct 15 2015 16:22
@dmetzger57 why were you running 3 generic workers?
I see:
      :generic_worker:
        :count: 2
in fact, if we're no longer doing automate stuff in the generic worker, we have less need for generic workers on production systems
At this point, I'd like to see QE/Alex/others play with 4.2-stable + GC tuning on 2.2.3 with master/5.5 to see if we can't run with 6 GB of memory and if we need to alter the memory_threshold values in the vmdb.tmpl.yml
Dennis Metzger
@dmetzger57
Oct 15 2015 16:45
@jrafanie my config says 2 Generic Workers. I may have mis-connected the PID to a worker name in the log ¯\_(ツ)_/¯
i'll look at the mapping again
This is why the sheet says 3:
Dennis Metzger
@dmetzger57
Oct 15 2015 16:50
INFO -- : MIQ(MiqGenericWorker::Runner#sync_config) ID [74], PID [3088]
INFO -- : MIQ(MiqGenericWorker::Runner#sync_config) ID [75], PID [3091]
INFO -- : MIQ(MiqGenericWorker::Runner#sync_config) ID [87], PID [3094]
Dennis Metzger
@dmetzger57
Oct 15 2015 17:02
going to start with a new appliance instance and count the Worker started log messages
Dennis Metzger
@dmetzger57
Oct 15 2015 17:15
ok, went back to the appliance and PID 3088 is actually MiqEventHandler, now the types / PIDs match what evm:status shows. One cell updated in the spreadsheet
so besides the growth of each worker, there is growth from the 2 new automate workers
Dennis Metzger
@dmetzger57
Oct 15 2015 17:41
So the growth breakdown with respect to workers as seen in my environment (5.4 vs Master):
Prior to the addition of a provider, the set of workers is 865,960 larger, which includes 349,936 bytes added by the new automate workers.
After adding a provider, the set of workers is 1,174,388 larger, which includes 349,936 bytes added by the two new automate workers.
Joe Rafaniello
@jrafanie
Oct 15 2015 18:34
@Fryguy I think there might be some potential in the broker to really drop memory... similar to the refresh worker a la ManageIQ/manageiq#4894
i want to test your PR and compare to this
manageiq_providers_vmware_infra_manager_refresh_worker_26416 heap_live_slots heap_free_slots.png
Can you tell when the refresh finishes?
Check out the broker:
miq_vim_broker_worker_26423 heap_live_slots heap_free_slots.png
6+ million slots against a single large vmware environment
Note, that's with 2.2.3, rails 4.2 stable (not much difference due to this) + GC tuning (helps a lot, but if objects never go out of scope they don't get collected until the end)
Joe Rafaniello
@jrafanie
Oct 15 2015 18:39
I'm going to zero in more on the broker to figure out where the free objects jump
Joe Rafaniello
@jrafanie
Oct 15 2015 19:08
@Fryguy I'll have graphs before/after ManageIQ/manageiq#4894 in 20 minutes
Curious if we free any objects earlier
Joe Rafaniello
@jrafanie
Oct 15 2015 19:37
@Fryguy before 4894...
manageiq_providers_vmware_infra_manager_refresh_worker_26416 memory_usage.png
manageiq_providers_vmware_infra_manager_refresh_worker_26416 heap_live_slots heap_free_slots.png
after that pr, same ems...
manageiq_providers_vmware_infra_manager_refresh_worker_31577 memory_usage.png
manageiq_providers_vmware_infra_manager_refresh_worker_31577 heap_live_slots heap_free_slots.png
Jason Frey
@Fryguy
Oct 15 2015 19:39
different scales, but I think that's better?
Joe Rafaniello
@jrafanie
Oct 15 2015 19:39
yes, sorry, graphing is hard
yes, much better
Jason Frey
@Fryguy
Oct 15 2015 19:40
ok...that's with the broker, right?
Joe Rafaniello
@jrafanie
Oct 15 2015 19:40
yes
Jason Frey
@Fryguy
Oct 15 2015 19:40
did you happen to see if there are still any VimHash objects in the refresh between parse and save?
cause that was my magic bullet
if they're gone, then I know it removed all the old data
Joe Rafaniello
@jrafanie
Oct 15 2015 19:41
No, I'm not dumping heaps or looking at classes in there
Jason Frey
@Fryguy
Oct 15 2015 19:41
ok
so is that 120MB less?
or so?
Joe Rafaniello
@jrafanie
Oct 15 2015 19:42
yeah, roughly
We should definitely issue a GC.start at the end of it
Jason Frey
@Fryguy
Oct 15 2015 19:42
manually?
Joe Rafaniello
@jrafanie
Oct 15 2015 19:43
yeah, see the second (improved) graphs; it took a while before another full GC occurred
So, as long as your PR didn't break anything, I think it's good... we can certainly dump heaps and try to verify all VimHashes get collected sooner but it's already an improvement
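Roughly what the "GC.start at the end of it" idea looks like; the method names here are stand-ins, not the actual EmsRefresh code:

```ruby
# Hypothetical shape only -- fetch_inventory/parse_inventory/save_inventory
# are stand-ins for the real refresh steps.
def refresh(ems)
  inventory = fetch_inventory(ems)   # the big VimHash graph from the provider
  hashes    = parse_inventory(inventory)
  inventory = nil                    # drop the raw data before the heavy save
  save_inventory(ems, hashes)
ensure
  # Force a major GC when the run is done so the freed memory comes back now
  # rather than whenever the next full GC happens to fire.
  GC.start
end
```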
Jason Frey
@Fryguy
Oct 15 2015 19:44
yeah, I'll unWIP
Joe Rafaniello
@jrafanie
Oct 15 2015 19:46
note, that large environment has 3000 vms, 100 hosts, 121 storages
be back in a bit
Joe Rafaniello
@jrafanie
Oct 15 2015 20:15
@Fryguy I found the broker location where we hold onto objects too long
or at least longer than required
i think we have to process the updateObject calls in batches so we can set updateSet = nil earlier
Joe Rafaniello
@jrafanie
Oct 15 2015 20:26
in this single 3000 vm ems, I think we create 2 to 2.5 million live objects in that area of code
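A sketch of the batching idea (the accessor and helper names are assumptions, not the real MiqVimBroker code): consume the update set in slices so that already-processed chunks become collectable while later ones are still being handled, instead of keeping everything alive until the end.

```ruby
BATCH_SIZE = 250 # illustrative

# objectSet / handle_object_update are hypothetical names here.
def process_update_set(update_set)
  object_updates = update_set.objectSet.to_a
  update_set = nil # nothing below needs the full set, so let it be collected

  until object_updates.empty?
    # shift the next batch off the array so processed entries are no longer
    # referenced and can be reclaimed while the remaining batches are handled
    object_updates.shift(BATCH_SIZE).each do |object_update|
      handle_object_update(object_update)
    end
  end
end
```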
Jason Frey
@Fryguy
Oct 15 2015 20:28
but that's in the broker side, right?
which sounds like it would be a nice fix
funny part is that updateSet = nil is pointless
because the next line does a return, and updateSet goes out of scope anyway
Joe Rafaniello
@jrafanie
Oct 15 2015 20:30
hehe
yes, broker
wrong graph...
let me make small graphs, gitter is so slow...
miq_vim_broker_worker_4285 heap_live_slots heap_free_slots.png
Jason Frey
@Fryguy
Oct 15 2015 20:33
can you stack it? ;)
To me this graph makes so much more sense stacked, and you also get the "total" automatically
Joe Rafaniello
@jrafanie
Oct 15 2015 20:34
it doesn't matter for what i'm saying...
Jason Frey
@Fryguy
Oct 15 2015 20:35
the number of live slots there in that spike in the middle is that updateSet line?
Joe Rafaniello
@jrafanie
Oct 15 2015 20:35
the first peak, from around 4.3 M to the GC is all in this code path
Jason Frey
@Fryguy
Oct 15 2015 20:35
ah ok...first peak
Joe Rafaniello
@jrafanie
Oct 15 2015 20:35
looking at second peak now
Jason Frey
@Fryguy
Oct 15 2015 20:35
yeah...i think batching makes a lot of sense
having a lot of gem building problems today :(
An error occurred while installing nokogiri (1.6.6.2), and Bundler cannot continue.
Make sure that `gem install nokogiri -v '1.6.6.2'` succeeds before bundling.
Jason Frey
@Fryguy
Oct 15 2015 20:48
:( sparklemotion/nokogiri#1345
I thought I had already done the xcode cli tools installation...guess I have to do it again
Jason Frey
@Fryguy
Oct 15 2015 20:54
oh yay! it worked
Jason Frey
@Fryguy
Oct 15 2015 20:59
that's called every time we get a property
Joe Rafaniello
@jrafanie
Oct 15 2015 20:59
I am leaning more towards the latter part of the method
unfortunately, it's doing more than one thing at that time
Jason Frey
@Fryguy
Oct 15 2015 21:01
I can't even read that
Joe Rafaniello
@jrafanie
Oct 15 2015 21:01
the status thread is printing the "MiqVimBroker status start" local cache information
Jason Frey
@Fryguy
Oct 15 2015 21:02
and lines 1975/1976 are confusing...is it creating a reference to itself in the Array?
yes, the status thread was found to be very expensive by @akrzos
and it builds the status even if debug mode is disabled
Joe Rafaniello
@jrafanie
Oct 15 2015 21:02
I need to run it again to see which one it is
Jason Frey
@Fryguy
Oct 15 2015 21:02
(I think we thought of wrapping that in $vim_log.debug? to save time there)
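A minimal sketch of that guard (assuming $vim_log is a standard Logger; the status-building call is a stand-in for whatever the broker actually assembles):

```ruby
# Only pay for building the status dump when debug logging is enabled.
# build_broker_status_report is a hypothetical stand-in.
if $vim_log.debug?
  $vim_log.debug("MiqVimBroker status start: #{build_broker_status_report}")
end
```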
Joe Rafaniello
@jrafanie
Oct 15 2015 21:03
it's easy enough to try
Jason Frey
@Fryguy
Oct 15 2015 21:04
btw, note that the code you pasted is literally copy-pasted in the next method
Joe Rafaniello
@jrafanie
Oct 15 2015 21:05
it must be really good code if it's copy and pasted ;-)
Jason Frey
@Fryguy
Oct 15 2015 21:05
:)
ok, I updated ManageIQ/manageiq#4894 to change the code slightly...got rid of the intermediate hash, and instead of deleting from the Hash I just shift it off the Array
keeps the code changes down and it's effectively the same
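The rough shape of that pattern in ManageIQ/manageiq#4894, with illustrative names (not the PR's actual code): the targets are consumed as a queue so each target's parsed data stops being referenced once it has been saved, instead of the whole collection staying alive to the end.

```ruby
# Names are illustrative; save_target stands in for the heavy save step.
def save_all_targets(targets_with_data)
  until targets_with_data.empty?
    # shift removes the pair from the array, so once this iteration's locals
    # are overwritten on the next pass nothing keeps the old data alive, and a
    # GC that fires during a later save can reclaim it
    target, data = targets_with_data.shift
    save_target(target, data)
  end
end
```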
Greg Blomquist
@blomquisg
Oct 15 2015 21:17
interesting!
I like the gc hints
@fryguy so, you nil out data before save_target in case gc runs during saving and has a chance to clean that up?
Joe Rafaniello
@jrafanie
Oct 15 2015 21:20
yes, @blomquisg, exactly
Greg Blomquist
@blomquisg
Oct 15 2015 21:20
just wondering, 'cause now with the shift, it seems that data goes completely out of scope at the end of the loop .. but save_target can still be pretty weighty
yeah, I like
Joe Rafaniello
@jrafanie
Oct 15 2015 21:21
@blomquisg it's even worse with 2.2.3 because when we make data go out of scope, there are very few allocations after that, so no major GCs occur to free those objects
they're in the old generation and won't get GC'd until we do a manual GC.start, do lots of allocations, or the workers do their periodic GC.start
the periodic GC.start every 15 minutes is really bad... we probably need to make that more frequent now that we're using a generational GC
Greg Blomquist
@blomquisg
Oct 15 2015 21:25
man, I haven't heard "old generation" since the last time I touched Java (5 years ago?)
Joe Rafaniello
@jrafanie
Oct 15 2015 21:25
Kids these days
Greg Blomquist
@blomquisg
Oct 15 2015 21:27
haha
Joe Rafaniello
@jrafanie
Oct 15 2015 21:32
@Fryguy second spike was not broker log status
this is without it...
Jason Frey
@Fryguy
Oct 15 2015 21:32
k
Joe Rafaniello
@jrafanie
Oct 15 2015 21:32
miq_vim_broker_worker_9736 heap_live_slots heap_free_slots.png
Jason Frey
@Fryguy
Oct 15 2015 21:32
yeah, save_inventory is weighty enough that it will almost always end up forcing a GC
with data out of the way, it can reclaim all that and use that memory
ok, so about the same
Joe Rafaniello
@jrafanie
Oct 15 2015 21:34
i'm thinking it's in here
Jason Frey
@Fryguy
Oct 15 2015 21:34
I also considered removing the target object, but it won't make a difference because the target is held in the targets Array coming in
Joe Rafaniello
@jrafanie
Oct 15 2015 21:34
[----] I, [2015-10-15T17:16:34.755283 #9736:e6b98c]  INFO -- : MiqVimBroker.getMiqVim: returning new connection for 10.12.20.67_administrator@vsphere.local
[----] I, [2015-10-15T17:16:44.300646 #9736:f64a3c]  INFO -- : MiqVimBroker.getMiqVim: found connection for 10.12.20.67_administrator@vsphere.local
[----] I, [2015-10-15T17:16:44.300857 #9736:f64a3c]  INFO -- : MiqVimInventory(10.12.20.67, administrator@vsphere.local).getMoProp_local: calling retrieveProperties(SessionManager)
[----] I, [2015-10-15T17:16:44.301397 #9736:f64a3c]  INFO -- : HandSoap Request  [17659740]: length: [792], URI: [https://10.12.20.67/sdk], Content-Type: [text/xml;charset=UTF-8], SOAPAction: [RetrieveProperties]
[----] I, [2015-10-15T17:16:44.420211 #9736:f64a3c]  INFO -- : HandSoap Response [17659740]: length: [1030], HTTP-Status: [200], Content-Type: [text/xml; charset=utf-8]
[----] I, [2015-10-15T17:16:44.422483 #9736:f64a3c]  INFO -- : MiqVimInventory(10.12.20.67, administrator@vsphere.local).getMoProp_local: return from retrieveProperties(SessionManager)
[----] I, [2015-10-15T17:16:44.422618 #9736:f64a3c]  INFO -- : MiqVimBroker.getMiqVim: returning existing connection for 10.12.20.67_administrator@vsphere.local
note, getMiqVim takes 10 seconds
Jason Frey
@Fryguy
Oct 15 2015 21:34
yup
hmm
yeah, we can definitely optimize getMoProp
we should pair tomorrow
I'm not even sure what it's building...it seems like it's duping objects all over the place, so maybe there are some wins in that way too
Joe Rafaniello
@jrafanie
Oct 15 2015 21:39
i'm increasing my logging frequency to eliminate what other threads are doing, hopefully
Jason Frey
@Fryguy
Oct 15 2015 21:39
k
Joe Rafaniello
@jrafanie
Oct 15 2015 21:40
I think the monitorUpdatesInitial batching is doable to limit the first spike
Jason Frey
@Fryguy
Oct 15 2015 21:44
master is broken, so my PR is broken :(
I pinged the main repo room
Matthew Draper
@matthewd
Oct 15 2015 21:51
ManageIQ/manageiq#4911 kills a bunch of objects… I don't have any more memory-focused measurements for it, though
Jason Frey
@Fryguy
Oct 15 2015 21:52
Looks good @matthewd ...out of curiosity, the inverses just make it so the objects reference each other instead of creating copies, right?
If so, shouldn't that just be Rails default? Or is it, and it just can't find the inverse in this case?
Matthew Draper
@matthewd
Oct 15 2015 21:54
Yeah, that :)
Jason Frey
@Fryguy
Oct 15 2015 21:54
ah cool
which part ;)
it's the rails default to do that?
Matthew Draper
@matthewd
Oct 15 2015 21:55
I didn't look into exactly why, but I think it comes down to our associations having extra options set (e.g., dealing with 'vms_and_templates' vs 'vm_or_template'), which defeat the guessing thingy
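A minimal example of what :inverse_of buys (the models and association names here are illustrative, not the actual ManageIQ declarations): with the inverse declared, parent and child share one in-memory object instead of the child lazily loading its own copy of the parent.

```ruby
# Illustrative models only.
class Host < ActiveRecord::Base
  has_many :vms, inverse_of: :host
end

class Vm < ActiveRecord::Base
  belongs_to :host, inverse_of: :vms
end

host = Host.includes(:vms).first
vm   = host.vms.first
vm.host.equal?(host) # => true: same object, no extra Host loaded from the DB
```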
Jason Frey
@Fryguy
Oct 15 2015 21:55
oh i see
Matthew Draper
@matthewd
Oct 15 2015 21:57
I haven't done anything much to properly confirm it doesn't break things, but ManageIQ/manageiq#4912 seems like it could buy 10-20 MB per process, depending on how many models they end up actually needing
Jason Frey
@Fryguy
Oct 15 2015 22:03
nice
reviewing that one now
I'm not following the autoload vs require_dependency distinction
Matthew Draper
@matthewd
Oct 15 2015 22:06
It's really two separate changes, but the previous STI loader didn't work with autoload, so I had to shove them together
Jason Frey
@Fryguy
Oct 15 2015 22:06
Wouldn't autoload still trigger the require with the STI thing?
Matthew Draper
@matthewd
Oct 15 2015 22:06
Only if a child is needed
Jason Frey
@Fryguy
Oct 15 2015 22:06

As for the STI loader -- instead of loading all the children as soon as the parent is loaded, we can wait until the moment it matters: when someone (most likely, ActiveRecord) calls #descendants.

NICE! :)

Matthew Draper
@matthewd
Oct 15 2015 22:07
So previously, ManageIQ::Providers::Vmware::InfraManager would immediately load ..::Vm, ..::RefreshWorker, etc (via require_dependency)
Jason Frey
@Fryguy
Oct 15 2015 22:08
So, I think the module name AsConstMissingWithSti needs to change
Matthew Draper
@matthewd
Oct 15 2015 22:08
and ManageIQ::Providers::InfraManager (or even ExtManagementSystem) would trigger the load of the subclasses == the vmware inframanager, along with all its nested crap
Jason Frey
@Fryguy
Oct 15 2015 22:08
because that's not how it works anymore, right?
(sorry...wasn't trying to break your description on how it works)
Matthew Draper
@matthewd
Oct 15 2015 22:09
Right. I was being lazy for the smaller diff. :blush:
Jason Frey
@Fryguy
Oct 15 2015 22:09
that's fine...it can be cleaned separately
Matthew Draper
@matthewd
Oct 15 2015 22:10
Basically, with the previous combo of loaders, touching any of the 'interesting' models would load the whole lot. Obviously, how much this actually gains us depends on just how unnecessary that was.
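A very rough sketch of the lazy-descendants idea (the module, glob, and hookup below are made up, not the actual code in ManageIQ/manageiq#4912): defer require_dependency of the subclasses until something actually asks the STI parent for its #descendants.

```ruby
# Hypothetical sketch -- not the PR's implementation.
module LazySubclassLoading
  def descendants
    @subclass_files_loaded ||= begin
      # assume subclasses live under a path derived from the parent's name,
      # e.g. app/models/manageiq/providers/** for ManageIQ::Providers::...
      pattern = Rails.root.join("app/models", name.underscore, "**", "*.rb").to_s
      Dir[pattern].each { |f| require_dependency f }
      true
    end
    super
  end
end

# e.g. ExtManagementSystem.singleton_class.prepend(LazySubclassLoading)
```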
Jason Frey
@Fryguy
Oct 15 2015 22:11
should we still keep the comment on why this is all needed?
Matthew Draper
@matthewd
Oct 15 2015 22:11
Probably. I almost left it hanging there unattached, but decided that would be even worse.
Jason Frey
@Fryguy
Oct 15 2015 22:12
I'm thinking it should move over to the ArDescendantsWithSti module and the intro paragraph tweaked a little
Matthew Draper
@matthewd
Oct 15 2015 22:12
If we're happy that this seems safe enough as a late-stage change to help things out, I can go ahead and rename/restore some docs.
Jason Frey
@Fryguy
Oct 15 2015 22:12
ok
Matthew Draper
@matthewd
Oct 15 2015 22:13
(Meanwhile, with it out there, @jrafanie can pull it down if we want an overall-progress level in the morning)
Jason Frey
@Fryguy
Oct 15 2015 22:13
:+1:
I'm off...later everyone
Joe Rafaniello
@jrafanie
Oct 15 2015 22:15
actually, I was just about to let it run on top of @Fryguy's other fix, ManageIQ/manageiq#4894
Nice, i'm excited, I think we can chop off more from the broker too... will need to do cap and u workers tomorrow :-(
Matthew Draper
@matthewd
Oct 15 2015 22:16
Actually, come to think of it, my change is really more interesting in the context of @dmetzger57's whole-machine measurement, anyway ¯\_(ツ)_/¯
Joe Rafaniello
@jrafanie
Oct 15 2015 22:17
Well, the refresh workers are probably the ones that navigate back and forth on the tree the most
if I'm understanding the benefit of the change ;-)
Matthew Draper
@matthewd
Oct 15 2015 22:19
They're most likely to end up loading a nontrivial portion of the models anyway, if that's what you mean
So, while they should still have some gain, it'll possibly be the least benefit of the various processes. Maybe.
OTOH, they tend to be singularly one-provider-only, so on that basis, they might see more benefit from a more general worker. Or I might be wrong, and everything stays exactly the same. :P
Joe Rafaniello
@jrafanie
Oct 15 2015 22:21
I agree, I have no idea
Jason Frey
@Fryguy
Oct 15 2015 23:52
FYI, @tenderlove tweeted about two performance enhancements
Both seem to apply to rails master only though