brandonwillard
@brandonwillard:matrix.org [m]
you can surround portions of code with those
black can definitely be set up to run on the entire codebase without breaking things
by ignoring certain files and parts of files, of course
so someone would have to do that work
I can give it a try, but from my own projects I know it can be annoying to review huge sets of formatting-based diffs
9 replies
and it's often better to have a trusted org member do that kind of thing
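The markers alluded to above are presumably black's `# fmt: off` / `# fmt: on` comment pair, which tells black to leave a region untouched. A minimal sketch (the protected code is invented for illustration):

```python
# black skips everything between these two markers, so hand-aligned
# layout like this table survives a codebase-wide reformat.

# fmt: off
MATRIX = [
    [1, 0],
    [0, 1],
]
# fmt: on

def identity_trace():
    # Ordinary code outside the markers is formatted as usual.
    return MATRIX[0][0] + MATRIX[1][1]
```

Whole files or directories can additionally be skipped via black's `exclude` / `extend-exclude` settings in `pyproject.toml`.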
brandonwillard
@brandonwillard:matrix.org [m]
there's always some subjectivity to these things, but the regularity is what makes it useful
especially for git diffs and the like
Michael Collison
@testhound

Pinging on two PR requests that are waiting on CI:

numba/numba#6889

numba/numba#6995

Siu Kwan Lam
@sklam
There were a few challenging bugs that accidentally got into mainline, and we spent the entire week fixing them. As a result, we have been slow to respond to issues and PRs. We are exhausted for the week and will catch up next week.
Michael Collison
@testhound
Appreciate the update.
Graham Markall
@gmarkall
Am I correct in thinking that one has to use the low-level extension API for attributes still, i.e. that there's no @overload or @intrinsic for them? For example, cuda.warpsize is presently implemented with the low-level API (typing, lowering, plus a stub) but using @overload or @intrinsic for it forces it to be a function (AFAICT)
3 replies
Graham Markall
@gmarkall
When's the next merge window? (I plan to be around for some quick CUDA conflict resolution during it)
stuartarchibald
@stuartarchibald
Should probably aim for about 3-4 hrs from now. #7003 is the priority and I think it ought to go through by itself to give an idea of how stable mainline is; once the build for that starts, everything else that's ready to merge can go in.
Graham Markall
@gmarkall
ok... will be as available as possible during that time
stuartarchibald
@stuartarchibald
Thanks. I just octomerged https://github.com/numba/numba/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc+label%3A%225+-+Ready+to+merge%22 with #7003, based off current mainline, and it seemed to be ok from a conflict point of view.
Graham Markall
@gmarkall
ah, ok :-)
Graham Markall
@gmarkall
I just realised that the CUDA target libs test (python -c "from numba import cuda; cuda.cudadrv.libs.test()") checks for cublas, cusparse, cufft, and curand. However I'm not sure they're used for anything - are they just a hangover from before pyculib was a separate package? (@sklam?)
2 replies
Michael Collison
@testhound

Ping on the PR; approved but waiting on CI:

numba/numba#6889

1 reply
Michael Collison
@testhound
2 replies
Graham Markall
@gmarkall
Why do we use lists for the members of a StructModel? Is this just for "symmetry" with those classes that declare members using a list comprehension? https://github.com/numba/numba/blob/master/numba/core/datamodel/models.py#L723-L726
6 replies
esc
@esc
meeting notes from LAST week (just catching up now as I was off): https://github.com/numba/numba/wiki/Minutes_2021_05_11
esc
@esc
A periodic reminder, that I maintain some dev-files for Numba here https://github.com/esc/numba-dev-files (in case anyone would like to join in and make developing Numba slightly more convenient)
(doing a full rebuild with make clean && make and installing dependencies with make deps in a fresh environment -- does that not sound enticing to you?)
Graham Markall
@gmarkall
I just noticed some repos have a github-actions bot marking issues as "inactive-30d", "inactive-90d", etc. Would that be good for Numba, so we'd have a handy list of things that might be stale and could potentially be closed? (e.g. if I periodically got an idle moment I might go through it and clear out old questions, etc)
8 replies
Michael Collison
@testhound

Ping for review now that test failures are resolved:

numba/numba#6889

Graham Markall
@gmarkall
Master is currently broken for CUDA as a result of #6948 - this was discovered in the buildfarm run for #6840. The fix is in #7052.
Michael Collison
@testhound
Checking on the status of merging the guvectorize PR: numba/numba#6889 which is in the "BuildFarm Passed" state.
Itamar Turner-Trauring
@itamarst
hello! Is there anything I can do to help with reviewing numba/numba#6928? I have a bunch more improvements I could make to Numba, but I want some validation that I'm not just implementing garbage :grinning:
meeting notes from last night :point_up:
esc
@esc
The new "stale action" to mark stale issues is now in effect, maybe y'all have seen it already.
It's an experiment in automagic tracker cleanup; we are not sure yet if it will be good, so we are giving it a test run now
The PR that merged it was #7040
Itamar Turner-Trauring
@itamarst
if anyone could comment on https://github.com/numba/numba/pull/6928#discussion_r639743934 that'd be helpful, I don't really know how to approach it
23 replies
esc
@esc
2 replies
Itamar Turner-Trauring
@itamarst
is there any way for me as a non-committer to remove the "Waiting for author" label on PRs?
3 replies
Jim Pivarski
@jpivarski

@sklam Following up on my question from the meeting, I haven't been able to convert my generated_jit recursion into overload (you need to have a Python function to overload, and I don't know what to put there). This expresses my intention:

import numba as nb
import numpy as np

@nb.generated_jit(nopython=True)
def add_one(data):
    if data.ndim == 1:
        def impl(data):
            for i in range(len(data)):
                data[i] += 1
        return impl
    else:
        def impl(data):
            for x in data:
                add_one(x)   # the typer should know that x has one less dimension than data because x is an iterator of data
        return impl

array = np.array([1, 2, 3])
add_one(array)
array

array = np.array([[1, 2], [3, 4]])
add_one(array)    # fails with NotImplementedError: Failed in nopython mode pipeline (step: nopython frontend) call to CPUDispatcher(<function add_one at 0x7f2ae1147dc0>): unsupported recursion
array

I can make an issue if this is something that should be made possible. The broader context is that I want to publicize a technique of breaking down Awkward Arrays whose type structure is known in the Numba typing pass, but the code the user writes is more generic (putting type-based choices in the typing pass, hence the use of generated_jit).

Maybe the need for this was weaker when only dealing with NumPy arrays (since they can be flattened with np.ravel), but it becomes more important for Awkward Arrays.

9 replies
esc
@esc
https://github.com/numba/numba/wiki/Minutes_2021_06_08 <-- meeting notes from last night
Graham Markall
@gmarkall
Is _make_subtarget used in the CPU target? I ask because I'm trying to make the CUDA target context a singleton, but it seems this relies on it not being a singleton https://github.com/numba/numba/blob/master/numba/core/compiler.py#L345
1 reply
esc
@esc
@hugohadfield -- I am having some issues with the clifford <-> numba integration testing; I'm trying to work out if it is Numba or Clifford. Would you have a minute to take a look?
cls = <class 'clifford._mvarray.MVArray'>
input_array = [(0.20273^e123) + (0.17273^e124) + (0.17492^e125) + (0.23452^e134) + (0.23707^e135) - (0.00036^e145) + (11.40541^e234)... + (5.23994^e134) + (5.2788^e135) + (0.51093^e145) - (1.54581^e234) - (1.55196^e235) - (0.21276^e245) + (0.10048^e345)]

    def __new__(cls, input_array):
        obj = np.empty(len(input_array), dtype=object)
>       obj[:] = input_array
E       TypeError: __array__() takes 1 positional argument but 2 were given

clifford/_mvarray.py:19: TypeError
9 replies
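That TypeError is the signature NumPy raises when a class defines `__array__(self)` without the optional dtype parameter that NumPy sometimes passes. A minimal illustration with invented class names (not clifford's actual code):

```python
import numpy as np

class Broken:
    # Works only when NumPy calls __array__() with no arguments.
    def __array__(self):
        return np.arange(3)

class Fixed:
    # Accepting the optional dtype avoids
    # "TypeError: __array__() takes 1 positional argument but 2 were given".
    def __array__(self, dtype=None):
        a = np.arange(3)
        return a if dtype is None else a.astype(dtype)

plain = np.asarray(Broken())                    # ok: no dtype passed
casted = np.asarray(Fixed(), dtype=np.float64)  # ok either way
```

Whether NumPy passes the dtype argument depends on the code path and NumPy version, which is why the failure can appear only in some environments.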
@hameerabbasi I am looking at pydata/sparse 0.12.0 and it seems to have some breaking tests that are fixed already on master. Do you happen to have an ETA on when a new version will be tagged?
Guilherme Leobas
@guilhermeleobas
Hey folks, is there a problem with Numba CI today? Some jobs are failing in the before install step
stuartarchibald
@stuartarchibald
Seems like there's something unknown going on, will take a look.
Graham Markall
@gmarkall
What was the thinking around the fastmath flag? Should the fastmath flag for the outermost call be the one that's respected, or the fastmath flag on each individual function?
10 replies
luk-f-a
@luk-f-a
gitter needs a mute thread option
13 replies
esc
@esc
Good morning!