Graham Markall
@gmarkall
or perhaps a tuple of a dict and then a tuple of 10 integers
(I may have misread the above)
jbachh
@jbachh
Thank you very much. Yeah, I introduced the bug by assigning a tuple of a Dict and a float to landscapes.
jbachh
@jbachh

@gmarkall One other thing: did you hear anything about Numba conflicting with the standard profiler in Spyder?

This is the bottom of the error:

    import landscape_generator as lg
  File "C:/Users/--/Documents/--/landscape_generator.py", line 9, in <module>
    from numba import njit
  File "C:\Users\--\Anaconda3\lib\site-packages\numba\__init__.py", line 14, in <module>
    from numba.core import config
  File "C:\Users\--\Anaconda3\lib\site-packages\numba\core\config.py", line 16, in <module>
    import llvmlite.binding as ll
  File "C:\Users\--\Anaconda3\lib\site-packages\llvmlite\binding\__init__.py", line 4, in <module>
    from .dylib import *
  File "C:\Users\--\Anaconda3\lib\site-packages\llvmlite\binding\dylib.py", line 3, in <module>
    from llvmlite.binding import ffi
  File "C:\Users\--\Anaconda3\lib\site-packages\llvmlite\binding\ffi.py", line 153, in <module>
    raise OSError("Could not load shared object file: {}".format(_lib_name))
OSError: Could not load shared object file: llvmlite.dll

and I should also give the top half:

Traceback (most recent call last):
  File "C:\Users\--\Anaconda3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\--\Anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\--\Anaconda3\lib\cProfile.py", line 196, in <module>
    main()
  File "C:\Users\--\Anaconda3\lib\cProfile.py", line 189, in main
    runctx(code, globs, None, options.outfile, options.sort)
  File "C:\Users\--\Anaconda3\lib\cProfile.py", line 19, in runctx
    return _pyprofile._Utils(Profile).runctx(statement, globals, locals,
  File "C:\Users\--\Anaconda3\lib\profile.py", line 62, in runctx
    prof.runctx(statement, globals, locals)
  File "C:\Users\--\Anaconda3\lib\cProfile.py", line 100, in runctx
    exec(cmd, globals, locals)
  File "C:/Users/--/Documents/--/monte_carlo.py", line 11, in <module>
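For context, the runpy.py and cProfile.py frames in this traceback show the profiler driving the script through the standard-library cProfile module. A self-contained sketch of roughly what that invocation amounts to (the stand-in script and output path are illustrative, not the user's real files):

```shell
# Write a stand-in script, then profile it the way Spyder's profiler does:
# "python -m cProfile" runs the target under runpy + exec, which matches
# the runpy.py / cProfile.py frames in the traceback above.
printf 'print("hello from target")\n' > target_script.py
python -m cProfile -o target_script.prof target_script.py
```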

Andreas Sodeur
@asodeur

Hello, I bumped into something strange involving typed.Dict, parametrized types, and caching. The smallest reproducer I found so far is still pretty clunky. Any opinion on where to start looking? Caching logic or typed.Dict?

The code below crashes when run more than once, i.e. when data is loaded from the cache. Setting cache=False (line 64) or using unique type names (see line 14) always works:

from numba import generated_jit, jit, typed, types
from numba.core.datamodel.models import UniTupleModel
from numba.extending import box, register_model, typeof_impl, unbox


class Parametrized(tuple):
    def __init__(self, tup):
        assert all(isinstance(v, str) for v in tup)  # the actual work is already done by __new__


class ParametrisedType(types.Type):
    """this is essentially UniTuple(unicode_type, n) BUT type name is the same for all n"""
    def __init__(self, value):
        super(ParametrisedType, self).__init__('ParametrisedType')  # <- using a unique name (like f'ParametrisedType({value})') here will not crash
        self.dtype = types.unicode_type
        self.n = len(value)

    @property
    def key(self):
        return self.n

    def __len__(self):
        return self.n


register_model(ParametrisedType)(UniTupleModel)


@typeof_impl.register(Parametrized)
def typeof_unit(val, c):
    return ParametrisedType(val)


@unbox(ParametrisedType)
def unbox_parametrized(typ, obj, context):
    return context.unbox(types.UniTuple(typ.dtype, len(typ)), obj)


@box(ParametrisedType)
def box_parametrized(typ, val, context):
    tup = context.box(types.UniTuple(typ.dtype, len(typ)), val)
    cls = context.pyapi.unserialize(context.pyapi.serialize_object(Parametrized))
    obj = context.pyapi.call_function_objargs(cls, (tup,))
    context.pyapi.decref(tup)
    return obj


@generated_jit
def dict_vs_cache_vs_parametrized(v):
    typ = v

    def objmode_vs_cache_vs_parametrized_impl(v):
        # typed.List shows same behaviour after fix for #6397
        d = typed.Dict.empty(types.unicode_type, typ)
        d['data'] = v

    return objmode_vs_cache_vs_parametrized_impl


def test_dict_vs_cache_vs_parametrized():
    # crashes when run more than once (ie when compiled function is loaded from cache)
    x, y = Parametrized(('horst', 'hanz')), Parametrized(('horst',))

    @jit(nopython=True, cache=True)
    def get_unit_system_data(x, y):  # <- does not crash when typeof(x) == typeof(y)
        dict_vs_cache_vs_parametrized(x)
        dict_vs_cache_vs_parametrized(y)

    for ii in range(50):  # <- sometimes works a few times
        assert get_unit_system_data(x, y) is None


if __name__ == '__main__':
    test_dict_vs_cache_vs_parametrized()
3 replies
Graham Markall
@gmarkall
@jbachh I can't see anything in the issue tracker about it... I wonder how Spyder is launching Python here - it seems that llvmlite.dll may not be on the DLL search path when it runs under its profiler
rohaniitj
@rohaniitj

I have a question about executing Numba code in Spyder, for the following scenario. When I first run my code with @jit(nopython=True, cache=True) on a dataset X, it executes in a reduced time, say 82 seconds. When I re-execute it with the same options, the time drops further to 66 seconds and stays roughly there over multiple re-runs. I assume the first run's 82 seconds included compilation time, and the later runs (66 seconds) reuse the previously compiled code. When I then execute the same code on a dataset Y, the first run takes 123 seconds and repeated runs on the same data take about 102 seconds each. My impression is that the whole code is being compiled again for Y.

Now, my question: if the code is already compiled, I should be able to vary the data between X and Y, with the execution time varying only because of the data itself. But it seems it is compiled in 82 seconds for the first dataset and then recompiled (123 seconds) for the second, rather than reusing the old compiled function (since @jit is applied to the function). Am I wrong somewhere?

21 replies
DTKx
@DTKx
Hey! I have started learning Numba and ran into several type issues. I kept tracing them and found it was related to conversion from np.int64: "Cannot cast array(int64, 1d, C) to int32". Does anyone have an idea of what I could try? This is an example:
@jit(nopython=True,nogil=True)   
def t_corrido_i():
    grafo_e=np.array([[0,1,12],[0,2,12],[0,3,12]],dtype=np.int32)
    tarefas_pj=np.array([ 0,  9, 13, 15, 16])

    requere_nodes=grafo_e[np.where(grafo_e[:,1]==3)][:,0]

    # Alternative for np.isin for numba
    requere_no_outro_proc=np.array([x in set(requere_nodes) for x in tarefas_pj])

    ix_nos_requeridos=np.int32(np.where(requere_no_outro_proc)[0])

t_corrido_i()
1 reply
Graham Markall
@gmarkall
Anyone else seeing a blank preview pane when replying to a thread on Discourse?
5 replies
Jack O'Brien
@Rodot-
Can anyone tell me how to install 0.52 with conda? I'm not seeing it on any of my channels
5 replies
Christopher
@Hellgrammite00_twitter
Trying to figure out a solution for this in Numba. Basically I have a formula that sometimes takes arrays and sometimes just floats. I need to apply a max/min to the final calculation, and I want it to use either the standard Python min/max functions, or something like np.where if it's an array. The obvious way would be to check the type in an if statement, but I'm not sure how, since things like isinstance() don't work in @njit.
4 replies
Michael Kummer
@randompast_twitter
I tried a numba of different ways to speed up some code. Cuda get some help? I ended up implementing the function into cusignal (pending pull request). It's so simple, yet it's ~100x slower with cuda.jit. My cuda.jit versions seem consistent with a lot of examples, but it's likely that I'm missing something obvious. https://gist.github.com/randompast/73b23a7d2560305be8bddb3a2b9f3a53
6 replies
c200chromebook
@c200chromebook
these kinds of recursion shenanigans aren't possible in nopython mode, right?
@cuda.jit(nb.int64(nb.int64), device=True)
def simp_recurse(x):
    if x <= 0:
        return 0
    else:
        return simp_recurse(x-1) + x
1 reply
jbachh
@jbachh

As per my message of Oct 20 16:07, the same error again. This time it is not when running the profiler (that error remains the same), but when using multiprocessing.Pool. I have also tried to run the script from the Anaconda prompt instead of from Spyder: same error. Does anyone have an idea how to fix this?

Traceback (most recent call last):
  File "C:\Users\-\Documents\research-2\research2\monte_carlo.py", line 11, in <module>
    import landscape_generator as lg
  File "C:\Users\-\Documents\research-2\research2\landscape_generator.py", line 9, in <module>
    from numba import njit
  File "C:\Users\-\Anaconda3\lib\site-packages\numba\__init__.py", line 14, in <module>
    from numba.core import config
  File "C:\Users\-\Anaconda3\lib\site-packages\numba\core\config.py", line 16, in <module>
    import llvmlite.binding as ll
  File "C:\Users\-\Anaconda3\lib\site-packages\llvmlite\binding\__init__.py", line 4, in <module>
    from .dylib import *
  File "C:\Users\-\Anaconda3\lib\site-packages\llvmlite\binding\dylib.py", line 3, in <module>
    from llvmlite.binding import ffi
  File "C:\Users\-\Anaconda3\lib\site-packages\llvmlite\binding\ffi.py", line 153, in <module>
    raise OSError("Could not load shared object file: {}".format(_lib_name))
OSError: Could not load shared object file: llvmlite.dll

jbachh
@jbachh
Anyway, it seems that llvmlite only loads when running a script directly from Spyder, not outside it, which makes it not a Numba problem. Nevertheless, if anyone has a suggestion, please let me know.
16 replies
jbachh
@jbachh
Is this a legit bug? After a clean Anaconda reinstall, this code runs fine. But when I un-comment the numba import (without even adding the njit decorator to the function), it gives the error OSError: Could not load shared object file: llvmlite.dll
# from numba import njit
from multiprocessing import Pool


def f(x):
    y = 0
    for _ in range(10000):
        y += x * x
    return y


def main():
    with Pool(processes=2) as pool:
        results = list(pool.imap_unordered(
                f, [1, 2]))
    print(results)


if __name__ == "__main__":
    main()
jbachh
@jbachh
This works fine too:
from numba import njit
from multiprocessing import Pool


@njit
def f(x):
    y = 0
    for _ in range(10000):
        y += x * x
    return y


def main():
    # with Pool(processes=2) as pool:
    #     results = list(pool.map(f, [1, 2]))
    results = f(2)
    print(results)


if __name__ == "__main__":
    main()
jbachh
@jbachh
P.S. I am on Windows 10 with just a fresh standard Anaconda install; I don't know what could be wrong with my PC.
jbachh
@jbachh
I found the solution: when doing a fresh Anaconda install on Windows, the installer asks about the not-recommended option of adding Anaconda to the PATH. Select it anyway, and the problem is fixed.
2 replies
SleepingPills
@SleepingPills
Does anyone know off the top of their head whether creating a list of tuples inside a nopython jit function is supported?

E.g.

from numba import njit, typed, types

@njit
def test():
    lst = typed.List.empty_list((types.int64, types.int64))
    lst.append((1, 1))
    return lst

Fails

Siu Kwan Lam
@sklam
that should work
SleepingPills
@SleepingPills
Cannot safely cast tuple(int64 x 2) to tuple(class(int64) x 2)
It looks like for some odd reason it constructs a list of tuples of classes, not tuples of ints
Siu Kwan Lam
@sklam
right, the problem is in the spelling of the item type for the list.
you can just let numba infer the types
In [1]: from numba import typed, njit

In [2]: @njit
   ...: def test():
   ...:     lst = typed.List()
   ...:     lst.append((1,1))
   ...:     return lst
   ...:

In [3]: test()
Out[3]: ListType[UniTuple(int64 x 2)]([(1, 1)])
SleepingPills
@SleepingPills
Thanks - that works in the case of a simple jit function, but it won't work for a jitclass.
In a jitclass constructor I have to specify the type.

the problem is in the spelling of the item type for the list.

Sorry what do you mean by this?

Is there a typo?
Running the exact same code outside of a jit function works fine

lst = typed.List.empty_list((types.int64, types.int64))

Works perfectly fine directly in the python interpreter

Siu Kwan Lam
@sklam
the item type needs to be spelled outside of the function
a limitation of jit
In [1]: from numba import typed, njit, types

In [2]: tup_type = types.Tuple.from_types((types.int64, types.int64))

In [3]: tup_type
Out[3]: UniTuple(int64 x 2)

In [4]: @njit
   ...: def test():
   ...:     lst = typed.List.empty_list(tup_type)
   ...:     lst.append((1, 1))
   ...:     return lst
   ...:

In [5]: test()
Out[5]: ListType[UniTuple(int64 x 2)]([(1, 1)])
SleepingPills
@SleepingPills
ahh I see what you mean
Ugh
The error message is super unhelpful :(
Thanks, that clarifies things at least
stuartarchibald
@stuartarchibald
Can you describe what the error message would contain that would have been helpful?
SleepingPills
@SleepingPills
It's a good question, I suppose given I don't know why this limitation arises, it's tricky to say what would be a technically feasible error message. If it's a known limitation that tuples cannot be declared this way inside jit, perhaps something like "List constructors currently do not accept tuple type definitions inside jit functions"?
But generally, I had zero indication that I should be defining the tuple type outside of the function to make it work

I mean, the error message I got was:

Cannot safely cast tuple(int64 x 2) to tuple(class(int64) x 2)

So the way I understood it is that for some reason it's constructing a tuple of classes instead of a tuple of ints
And I scoured the documentation to find if there is some caveat for this but came up empty handed
stuartarchibald
@stuartarchibald
Thanks, will see what we can do. I think this is hookable in type inference.
SleepingPills
@SleepingPills
Thanks, great stuff. In general I feel like the Numba error messages could be greatly improved, but as I don't understand the internals at all, I don't even know where to begin helping.
stuartarchibald
@stuartarchibald
It's a constantly improving thing. It's also unfortunately really difficult, both in trying to accurately work out what the user was attempting and in trying to convey what the problem is through an involved compiler pipeline. There's no language spec or expectation of what's compilable to fall back on either, since the supported region of Python/NumPy is whatever is statically analysable, and that's a challenging thing to describe.