These are chat archives for dropbox/pyston

2nd Jun 2015
Travis Hance
@tjhance
Jun 02 2015 00:07
how many times does it call free
Chris Toshok
@toshok
Jun 02 2015 18:45
i wonder how hard it would be to c++-ify libunwind and switch everything that would normally have used function pointers to using templates
Marius Wachtler
@undingen
Jun 02 2015 20:10
what's the reason?
llvm libunwind uses a c++ implementation imho...
and is now in a separate repo
Chris Toshok
@toshok
Jun 02 2015 20:16
each address space has a vtable of accessors. getting the value of a register or memory location requires a call through a function pointer
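A rough sketch of the idea (made-up names, not the real libunwind types): today every register/memory read goes through a per-address-space vtable of function pointers, whereas a templated accessor parameter would let the compiler resolve and inline the local-address-space case.

```cpp
#include <cstdint>
#include <cstring>

// How the accessors work today: each address space carries a vtable of
// function pointers, so every register/memory read is an indirect call.
struct Accessors {
    int (*access_mem)(uintptr_t addr, uintptr_t* val, void* arg);
    int (*access_reg)(int regnum, uintptr_t* val, void* arg);
};

// The templated alternative: make the address space a compile-time
// parameter so the accessor calls can be inlined instead of going
// through function pointers.
struct LocalAddressSpace {
    static int access_mem(uintptr_t addr, uintptr_t* val, void*) {
        std::memcpy(val, reinterpret_cast<const void*>(addr), sizeof(*val));
        return 0;
    }
    static int access_reg(int /*regnum*/, uintptr_t* val, void*) {
        *val = 0; // placeholder -- a real implementation would read the saved register
        return 0;
    }
};

template <typename AddressSpace>
int readWord(uintptr_t addr, uintptr_t* out) {
    // No indirect call here: the compiler sees AddressSpace::access_mem directly.
    return AddressSpace::access_mem(addr, out, nullptr);
}
```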
Chris Toshok
@toshok
Jun 02 2015 21:03
pyston crashes when run under cachegrind :(
Marius Wachtler
@undingen
Jun 02 2015 21:09
you could try with --smc-check=all, maybe valgrind has problems detecting our self-modifying code
Marius Wachtler
@undingen
Jun 02 2015 21:32
mmh just tried running the babel test with cpython: it takes 26 secs while the pyston release build needs 600 secs... I suspect it's so slow because of the large memory leak
Kevin Modzelewski
@kmod
Jun 02 2015 22:20
:/
do you know what's leaking?
Marius Wachtler
@undingen
Jun 02 2015 22:23
not yet :-(
Kevin Modzelewski
@kmod
Jun 02 2015 22:28
hmm what I've found helps in the past is to look at /proc/$PID/maps to see what section is growing
ie malloc vs the python heap
and then use massif (if it's malloc) or our heap profiling depending on what it is
hopefully that helps :/
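Roughly the kind of check being suggested (a standalone sketch, not Pyston code): read /proc/self/maps, print each mapping's size, and diff between runs to see whether [heap] (malloc) or the anonymous GC mappings are the part that keeps growing.

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Dump every mapping in /proc/self/maps with its size in kB; running this
// periodically shows which region ([heap] for malloc, anonymous mappings
// for the GC heap, etc.) is the one that keeps growing.
int main() {
    std::ifstream maps("/proc/self/maps");
    std::string line;
    while (std::getline(maps, line)) {
        // Lines look like "start-end perms offset dev inode path", addresses in hex.
        size_t dash = line.find('-');
        size_t space = line.find(' ');
        unsigned long start = std::stoul(line.substr(0, dash), nullptr, 16);
        unsigned long end = std::stoul(line.substr(dash + 1, space - dash - 1), nullptr, 16);
        std::cout << (end - start) / 1024 << " kB\t" << line << '\n';
    }
    return 0;
}
```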
Marius Wachtler
@undingen
Jun 02 2015 22:32
good idea, will try. currently I'm using the memory stat output but that one looks reasonable
Marius Wachtler
@undingen
Jun 02 2015 22:59
Am I right that we currently don't free the redzone inside freeGeneratorStack but should free it?
mmh no I'm probably wrong :-D - from the maps output it looks like it deletes both entries.
Kevin Modzelewski
@kmod
Jun 02 2015 23:08
hmm we munmap the entire region including the redzone
I thought that would munmap both maps...
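For context, a simplified sketch of the pattern being discussed (not the actual freeGeneratorStack code, and the sizes are made up): the stack is one mmap'd region whose first page is an mprotect'd redzone, which is why it shows up as two entries in /proc/self/maps, and a single munmap over the whole region releases both.

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <cassert>

// Made-up sizes; the real values in Pyston differ.
constexpr size_t REDZONE_SIZE = 4096;
constexpr size_t STACK_SIZE = 64 * 4096;

void* allocGeneratorStack() {
    void* base = mmap(nullptr, REDZONE_SIZE + STACK_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(base != MAP_FAILED);
    // Making the first page inaccessible splits the region into two
    // /proc/self/maps entries: a PROT_NONE redzone and the usable stack.
    mprotect(base, REDZONE_SIZE, PROT_NONE);
    return base;
}

void freeGeneratorStack(void* base) {
    // One munmap over the whole region removes both entries, redzone included.
    munmap(base, REDZONE_SIZE + STACK_SIZE);
}

int main() {
    void* stack = allocGeneratorStack();
    freeGeneratorStack(stack);
    return 0;
}
```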
Marius Wachtler
@undingen
Jun 02 2015 23:11
It does, sorry for the confusion. But it looks like we fail to free some generators... not sure why but I'm looking into it
we don't always free them when they are not exhausted, i.e. when we don't call freeGeneratorStack explicitly before raising StopIteration
Chris Toshok
@toshok
Jun 02 2015 23:42
they should still get freed if the generator is garbage. we’re just pretty conservative about freeing the stack early
Marius Wachtler
@undingen
Jun 02 2015 23:46
The problem is the s_interpreterMap. If we don't terminate the generator we won't exit the interpreter -> won't remove the entry from the map. The GC visits all s_interpreterMap entries, so we won't notice that the generator is in fact unreachable.
I don't know if this is the cause of the large leak I'm seeing with babel, but this is definitely a leak
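A hypothetical sketch of the leak pattern (simplified names and types, not the real Pyston code): entries are only removed when the interpreter frame exits, and the GC scans every entry as a root, so a generator that is never run to exhaustion keeps itself alive.

```cpp
#include <unordered_map>

struct ASTInterpreter;  // interpreter state for one frame, with refs into the Python heap

// Frame address -> interpreter state; the GC treats every entry as a root.
static std::unordered_map<void*, ASTInterpreter*> s_interpreterMap;

void enterInterpreter(void* frameAddr, ASTInterpreter* interp) {
    s_interpreterMap[frameAddr] = interp;
}

void exitInterpreter(void* frameAddr) {
    // Only reached when the frame actually returns or unwinds. A generator
    // that is suspended and never resumed to exhaustion never gets here,
    // so its entry stays in the map forever...
    s_interpreterMap.erase(frameAddr);
}

void gcVisitRoots() {
    // ...and since the GC visits every entry as a root, the suspended
    // generator's frame (and everything it references) is kept alive even
    // when the generator object itself is unreachable from Python code.
    for (auto& entry : s_interpreterMap) {
        (void)entry;  // visit(entry.second) in the real collector
    }
}
```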
Chris Toshok
@toshok
Jun 02 2015 23:53
ahh, yeah that definitely sounds like a problem
might this have something to do with the __hasnext__ change to generators?
maybe comment out the giveAttr("__hasnext__", … from generator.cpp and see if that fixes the leak?