These are chat archives for dropbox/pyston

Nov 2015
Rudi Chen
Nov 30 2015 02:55
Tests are verified by comparing their output against CPython's. I think expected_cache is just a cache of CPython's output.
So you got that right.
I have some instructions on running profiling against new changes here:
It's hard to write tests for optimizations unless there's a significant difference in speed (otherwise it's flaky) and the change can be isolated.
You can't rely on absolute timing either so you'd need a before/after comparison, but then you need to keep the slow code around.
Some tests do check for optimization though.
Like the test (I think that was the name) which tests that the JIT creates and calls a fast path.
Marius Wachtler
Nov 30 2015 09:48

To test if the rewriter/tracing stuff works we have these statcheck lines, e.g. in test/tests/

# run_args: -n
# statcheck: noninit_count('slowpath_getattr') <= 20

which compares the -s output (stats) with the specified value (-n means LLVM-JIT every function instead of interpreting)
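A minimal sketch of how such a statcheck directive could be evaluated; the stats dict and noninit_count helper here are hypothetical stand-ins for the counters Pyston prints with -s, and the names are illustrative only:

```python
# Hypothetical stats as might be collected from a test run with -s.
stats = {"slowpath_getattr": 7, "slowpath_setattr": 3}

def noninit_count(name):
    # Hypothetical helper: look up a stat counter, defaulting to 0.
    return stats.get(name, 0)

# A statcheck directive as it appears in a test file's header comment.
directive = "# statcheck: noninit_count('slowpath_getattr') <= 20"
expr = directive.split("# statcheck:", 1)[1].strip()

# Evaluate the directive's expression against the collected stats.
passed = eval(expr, {"noninit_count": noninit_count})
print(passed)  # True, since 7 <= 20
```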

Nov 30 2015 09:50
Hi, what about the progress of refcounting and "frame introspection"? Seems the "refcounting" branch was last updated 4 days ago.
Marius Wachtler
Nov 30 2015 09:53
kmod is working on refcounting; I'm working on the frame introspection and signal handling
I'm still trying to reduce the slowdown
Marius Wachtler
Nov 30 2015 11:50
but concerning the signal handling: checking on every bytecode whether a signal handler must be executed is very slow :-( (~10% regression), but I saw PyPy doesn't do it either:
Greg Price
Nov 30 2015 17:12
@undingen I believe CPython only does it every 100 opcodes. See the call to Py_MakePendingCalls in ceval.c.
Marius Wachtler
Nov 30 2015 17:14
But it resets _Py_Ticker to 0 in Py_AddPendingCall, so I think this forces a call to Py_MakePendingCalls
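The pattern under discussion can be sketched as follows. This is a simplified, illustrative model of CPython's ticker mechanism in the ceval loop, not its actual implementation; the function and variable names are stand-ins:

```python
# Sketch of the "check pending calls every N opcodes" pattern: a countdown
# ticker is decremented per opcode, and pending calls (e.g. queued signal
# handlers) only run when it reaches zero, amortizing the check's cost.

CHECK_INTERVAL = 100   # CPython's default check interval
ticker = CHECK_INTERVAL
pending_checks = 0

def make_pending_calls():
    # Stand-in for running queued signal handlers / pending calls.
    global pending_checks
    pending_checks += 1

def run_opcodes(n):
    global ticker
    for _ in range(n):
        ticker -= 1
        if ticker <= 0:
            ticker = CHECK_INTERVAL
            make_pending_calls()
        # ... execute one bytecode instruction here ...

run_opcodes(1000)
print(pending_checks)  # 10: one pending-call check per 100 opcodes
```

Setting the ticker to 0 from outside the loop (as Py_AddPendingCall does with _Py_Ticker) would make the very next opcode boundary trigger the check immediately, rather than waiting out the remainder of the interval.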
Greg Price
Nov 30 2015 18:42
Hmm, indeed.