Da LI
@dlee992
Aha, I guess I have to use jitclass or structref to do this kind of thing? Does there exist another path to fulfill this goal?
3 replies
Chegg User
@chegguser7798_gitlab
Can someone please help implement a PriorityQueue in numba with specified element types?
https://stackoverflow.com/questions/74469770/how-to-implement-a-priority-queue-in-numba-where-one-element-of-the-item-is-a-li
4 replies
joshuallee
@joshuallee
I have an njit function that makes use of a collection of 16 namedtuples (read-only) with a parallel loop.
The function is fastest when I pass the namedtuples as separate arguments to the function, but this is not ideal and becomes very messy.
I have tried the following data types to store them to make them easier to use:
  • numba list
  • numba dict
  • tuple
  • structref
  • namedtuple
But all are too slow or don't work (structref is the best but still over twice as slow).
Does anyone have any suggestions as to what I can use here to match the behavior of passing the objects separately?
11 replies
Tesla42
@tesla42:matrix.kraut.space
[m]
hi
I have a function with list indexing. It does not parallelize.
Why?
Tesla42
@tesla42:matrix.kraut.space
[m]
This function will not parallelize
1 reply
Tesla42
@tesla42:matrix.kraut.space
[m]
with parallel=True
but I also tried to preallocate the target list and used a for prange statement
Graham Markall
@gmarkall
You'll need to use NumPy arrays, not lists
Tesla42
@tesla42:matrix.kraut.space
[m]
ok
Chegg Prof
@chegguser3402_gitlab
Is there anything like set() in numba (I tried numba.typed.Set() but it does not seem to be supported)? I need a data structure that allows checking membership in constant time.
I am trying to achieve this functionality with numba:
>>> set_of_tuples = set()
>>> set_of_tuples.add((1, 1))
>>> set_of_tuples.add((1, 0))
>>> (1,1) in set_of_tuples
True
Siu Kwan Lam
@sklam
for 2-tuple of int, the existing set support should be fine
if you are hitting problems, the workaround is probably to use TypedDict by only using the keys and ignore the values. https://numba.readthedocs.io/en/stable/reference/pysupported.html#typed-dict
Joan Saladich
@joan.saladich_gitlab

Hi, I am trying to pass a global numba typed Dict containing strings (keys) and numpy objects (values) into a numba njit function. My goal is to replicate the same behaviour as native Python, yet I can't make it work with the typed Dict.
Can I put a numpy function in a typed Dict (e.g. np.mean)?

Thanks a lot!

19 replies
Albert
@xalbt
Hi, I want to use carray in the numba cuda jit, but it doesn't seem to be available. Is there similar functionality I can exploit so test_cuda in the following snippet compiles? If not, how can I implement carray for numba cuda? I also have the same questions for cffi.from_buffer to get the pointer of a cuda array. Thanks!
import numpy as np
import numba as nb
from numba import cuda, types as nt

A = np.dtype([("x", "f4")], align=True)
A_t = nb.from_dtype(A)

@nb.cfunc(nt.float32(nt.CPointer(A_t)))
def test(o):
    return nb.carray(o, 1)[0].x

@cuda.jit(nt.float32(nt.CPointer(A_t)), device=True)
def test_cuda(o):
    return nb.carray(o, 1)[0].x
3 replies
Tesla42
@tesla42:matrix.kraut.space
[m]
Hi, again
the parallelization attempt didn't work
I converted everything to numpy arrays beforehand
and only use numpy arrays in the function
what is wrong here?
Tesla42
@tesla42:matrix.kraut.space
[m]

```
---------------------------Loop invariant code motion---------------------------
Allocation hoisting:
No allocation hoisting found

Instruction hoisting:
loop #0:
Has the following hoisted:
$expr_out_var.56 = const(float64, 0.0)
Failed to hoist the following:
dependency: $parfor_index_tuple_var.57 = build_tuple(items=[Var($parforindex_50.179, <string>:2), Var($parforindex_51.181, <string>:3)])
loop #1:
Has the following hoisted:
$60load_global.7 = global(getSurfacePnt: CPUDispatcher(<function getSurfacePnt at 0x7f9e4a7094c0>))
$const68.11 = const(int, 0)
$const78.16 = const(int, 1)
Failed to hoist the following:
dependency: $54binary_subscr.5 = getitem(value=cutCedgeIdxList, index=$parfor__index_58.221, fn=<built-in function getitem>)
dependency: p = getitem(value=edge2ptIdxList, index=$54binary_subscr.5, fn=<built-in function getitem>)
dependency: $70binary_subscr.12 = static_getitem(value=p, index=0, index_var=$const68.11, fn=<built-in function getitem>)
dependency: $80binary_subscr.17 = static_getitem(value=p, index=1, index_var=$const78.16, fn=<built-in function getitem>)

dependency: $84call_function.19 = call $push_global_to_block.222(func, $72binary_subscr.13, $82binary_subscr.18, func=$push_global_to_block.222, args=[Var(func, render.py:200), Var($72binary_subscr.13, render.py:220), Var($82binary_subscr.18, render.py:220)], kws=(), vararg=None, varkwarg=None, target=None)
```
Tesla42
@tesla42:matrix.kraut.space
[m]
Is this getitem() thing the problem?
Tesla42
@tesla42:matrix.kraut.space
[m]
🐱
Tesla42
@tesla42:matrix.kraut.space
[m]
ok, now I see, it looks like the data type conversion beforehand takes 20 times longer than the actual function
Tesla42
@tesla42:matrix.kraut.space
[m]
I managed to make it faster now:
Siu Kwan Lam
@sklam
@tesla42:matrix.kraut.space , I highly suggest using https://numba.discourse.group/ for longer questions. Gitter UI makes it hard to follow long discussion.
Shikha-png
@Shikha-png
ValueError: cannot assign slice from input of different sizes.
Does anyone have an idea about this error?
4 replies
Da LI
@dlee992
Hi, guys. I want to ask how numba handles integer overflow. For example, if a and b are int64 and a function returns a + b, how do we ensure that no overflow happens? Or does Numba just ignore it and let it become undefined behavior?
2 replies
Tesla42
@tesla42:matrix.kraut.space
[m]
parallel writes (prange) to dicts with tuple as key crashes it.
How to parallelize the expressions in line 188 and 226?
2 replies
Tesla42
@tesla42:matrix.kraut.space
[m]
The numba optimizer will replace 2**32 by bitshift, right?
Tesla42
@tesla42:matrix.kraut.space
[m]
To replace the dicts, I need some sparse array or sparse matrix.
Is this available?
Da LI
@dlee992
Hi, guys. I raised a discussion last week, https://numba.discourse.group/t/feature-request-about-supporting-arrow-in-numba/1668. Considering the prevalence of the Arrow project, maybe we should carefully consider supporting Arrow data and related computation. Any ideas? I would like to contribute in this area, if we can reach some kind of agreement about which Arrow parts could be supported and which could not.
Tesla42
@tesla42:matrix.kraut.space
[m]
But the index is sparse.
That means I have to implement my own dictionary by hashing the key?
Tesla42
@tesla42:matrix.kraut.space
[m]
And Lists are also much slower than arrays?
luk-f-a
@luk-f-a
please open a thread on discourse, gitter is good for short q&a but does not work very well for long discussions.
ldeluigi
@ldeluigi

Hi, I just found out about numba, and I'm wondering how I should use it if my code depends on a third-party library (which is in pure Python).
More specifically, my code is implemented as a class which contains some methods I'd like to JIT compile, but these methods make calls to some methods of a wrapped class which comes from a library.

Is there a way to make the jit annotation act "recursively" on called methods and/or wrapped classes?

2 replies