pip install -U pythran (or the equivalent command with conda) and to use the clang compiler.
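For reference, a minimal sketch of how the compiler choice can be made, assuming the standard `~/.pythranrc` configuration file (the exact section and key names may vary between Pythran versions, so treat this as an illustration):

```
# ~/.pythranrc (sketch): point Pythran at clang instead of the default compiler
[compiler]
CC=clang
CXX=clang++
```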
We have to add warnings to explain this kind of unexpected behavior!
do you have a channel?
why didn't the boost decorator call Numba's jit?
fastmath argument for Transonic decorators, so that it would be possible to use
fastmath=True for the Numba backend. It would be very simple to implement, but it is not our priority in terms of development. Again, PRs welcome.
@boost to the functions used by the boosted functions), add
@njit to all functions and use the corresponding jitted functions for the boosted functions! In the end, you get exactly the Numba function, so there is no overhead.
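The mechanism described above can be sketched in plain Python. This is a toy `boost` decorator standing in for Transonic's, with a dummy `njit` in place of Numba's (both are assumptions for the sketch): because the boosted name is simply rebound to the jitted function itself, there is no wrapper layer and hence no call overhead.

```python
def njit(func):
    """Dummy stand-in for numba.njit: tag the function and return it as-is."""
    func.jitted = True
    return func

def boost(func):
    """Toy boost decorator: delegate entirely to njit.

    The key point: it returns the jitted object itself, not a wrapper,
    so calling the boosted function IS calling the jitted function.
    """
    return njit(func)

@boost
def add(a, b):
    return a + b

print(add(1, 2))   # → 3; no extra indirection between caller and jitted code
print(add.jitted)  # → True; the boosted name refers to the jitted function
```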
In https://numba.pydata.org/numba-doc/dev/user/jitclass.html the jitclass decorator is presented. But the limitations say: "Support for jitclasses are available on CPU only. (Note: Support for GPU devices is planned for a future release.)"
The way I'm using Numba is by defining normal CPU classes/functions and adding the @jit decorator to the functions that I want executed on the GPU. Why would I want to decorate the classes defined for the CPU?
@cuda.jit, not @jit
(it doesn't let me edit the comment)