stuartarchibald
@stuartarchibald
Perhaps read all of this https://realpython.com/python-gil/ and then consider why releasing the GIL is useful?
Vishesh Mangla
@XtremeGood
ok I will read and respond back
is it that the reference count of a single object can be decreased or increased at a particular instant in time?
stuartarchibald
@stuartarchibald
in part, but that's not really relevant to Numba, keep going with the doc
Vishesh Mangla
@XtremeGood
As you can see, both versions take almost the same amount of time to finish. In the multi-threaded version the GIL prevented the CPU-bound threads from executing in parallel.
this is written there.
stuartarchibald
@stuartarchibald
yes indeed
Vishesh Mangla
@XtremeGood
so doesn't that mean the program can't parallelize if the GIL is not released?
got it, it's an asynchronous thing
stuartarchibald
@stuartarchibald
right (for some definition of parallelize), perhaps take that example and make the threads call a Numba JIT function and then set nogil=True to see the effect
Vishesh Mangla
@XtremeGood
just like JavaScript
ok will surely see that
thanks for making me understand this
so if I set parallel=True the program would parallelize without releasing the GIL, but if I do nogil=True the program would also release the GIL, and there can be an error if an object's reference count is concurrently decreased
am I on the right track?
there's a risk in releasing the GIL, but the time taken would be decreased in the nogil case since checking the lock is not required
stuartarchibald
@stuartarchibald
Numba translates python objects into a special native representation of the object for use in Numba compiled code, so there's no Python objects involved.
Vishesh Mangla
@XtremeGood
then you don't need the GIL anyway if it's not Python code anymore.

so if I set parallel=True the program would parallelize without releasing the GIL, but if I do nogil=True the program would also release the GIL, and there can be an error if an object's reference count is concurrently decreased

is this line at all correct?

stuartarchibald
@stuartarchibald
When parallel=True is set, Numba uses its own threading backend to do work. When nogil=True the GIL is also released across the JIT function call boundary. These are independent. There's no error as Numba doesn't use the objects from CPython.
That link you were reading about how numba works, "unboxing" and "boxing" is the process of translating CPython objects into native objects that Numba uses.
Vishesh Mangla
@XtremeGood
why shouldn't the GIL always be released?
stuartarchibald
@stuartarchibald
because it's costly to release and acquire a mutex when it's not necessarily needed
Vishesh Mangla
@XtremeGood
ok so nogil should only be used when one requires a browser-like effect, like downloading something or reading a text file.
It's like JavaScript splitting its single thread in two: it performs one task while the other one waits in the event loop
stuartarchibald
@stuartarchibald
nogil is useful when you have a e.g. multi-threaded python application and you want to run a load of Numba JIT compiled functions concurrently.
I'd really recommend doing this exercise: https://gitter.im/numba/numba?at=5e7e3da26eb8380abcde009b
Vishesh Mangla
@XtremeGood
ok doing it
Vishesh Mangla
@XtremeGood
import numba

@numba.vectorize("float32(float32, float32)", parallel=True)
def foo(nu, T):
    v = 45/T
    return nu *v

import numpy as np
from time import time

a = np.arange(1, 100000, 1)
t0 = time()
foo(a, 78)
print(time() - t0)

Exception has occurred: KeyError
"<class 'numba.npyufunc.ufuncbuilder.UFuncTargetOptions'> does not support option: 'parallel'"
  File "C:\Users\Dell\Desktop\numba temp\temp1.py", line 4, in <module>
    def foo(nu, T):
parallel=True is not working
stuartarchibald
@stuartarchibald
I think you want target='parallel' in the case of a vectorize function.
Vishesh Mangla
@XtremeGood
but I have a CPU with an i5 processor
stuartarchibald
@stuartarchibald
I don't see how that is related.
Vishesh Mangla
@XtremeGood
I checked on the net and it said one should have 2 processors
2 processors, multiprocessing
I know the difference between multithreading and multiprocessing, hence there is also some confusion there too.

Exception has occurred: TypeError
ufunc 'foo' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
  File "C:\Users\Dell\Desktop\numba temp\temp1.py", line 13, in <module>
    foo(a, 78)
target="parallel"
Vishesh Mangla
@XtremeGood
I tried converting all the numbers to floats, like 78 -> 78.0, but no good
Am I using vectorization the wrong way?
Vishesh Mangla
@XtremeGood
ok it worked
nogil took less time than with the GIL, both when parallel=True was used with njit
thanks for the help
stuartarchibald
@stuartarchibald
great, glad you got it working, and no problem
uchytilc
@uchytilc
@gmarkall Hey Graham, how is the return type determined within DeviceFunctionTemplate's inspect_ptx method?
uchytilc
@uchytilc
Ok so I played around with DeviceFunctionTemplate, and what I am looking for is very straightforward. inspect_llvm, inspect_ptx, and compile all only take in args. If the default return type of None is removed from compile_cuda and replaced with an optional return type argument (with a default of None), and the caching into self._compileinfos is altered to be a concatenation of args and return_type, then everything works fine (maybe I will try to push this at some point).
Amos Bird
@amosbird
Hello, does numba support generating atomic intrinsics?
@stuartarchibald thanks for the suggestions on my naive kmeans implementation (I missed that msg)
Daniel Fort
@naquad
hi. I see that an old version of numba (~0.11) supported interfacing with C libraries. I would like to connect numba 0.48.0 with a C library, but the problem is that the library wants double** pointers to data, which I can't figure out how to provide. ctypes.pointer is not working in nopython mode. Is it possible to create a pointer of a specified type in numba?