    Henry Schreiner
    @henryiii
    From what I understand here, this will also be picked up from CXX if not given, so it really seems like it should be g++. Never tried passing it explicitly either.
    Henry Schreiner
    @henryiii
    Looks like the problem is a bug in Python: https://bugs.python.org/issue23644 - it seems to be trying to build with stdatomic, which is C++ only.
    Matthew Feickert
    @matthewfeickert

    From the looks of it @henryiii's and @daritter's suggestion of removing the --with-cxx-main flag seems to do the trick. I rebuilt and was able to build through iminuit in the container. :+1: I'll need to do some experimentation with the configure options, but I found the following from the old SVN Python 2.7 trunk very helpful

    --with-cxx-main=<compiler>: If you plan to use C++ extension modules, then -- on some platforms -- you need to compile python's main() function with the C++ compiler. With this option, make will use <compiler> to compile main() and to link the python executable. It is likely that the resulting executable depends on the C++ runtime library of <compiler>. (The default is --without-cxx-main.)

    There are platforms that do not require you to build Python with a C++ compiler in order to use C++ extension modules. E.g., x86 Linux with ELF shared binaries and GCC 3.x, 4.x is such a platform. We recommend that you configure Python --without-cxx-main on those platforms because a mismatch between the C++ compiler version used to build Python and to build a C++ extension module is likely to cause a crash at runtime.

    The Python installation also stores the variable CXX that determines, e.g., the C++ compiler distutils calls by default to build C++ extensions. If you set CXX on the configure command line to any string of non-zero length, then configure won't change CXX. If you do not preset CXX but pass --with-cxx-main=<compiler>, then configure sets CXX=<compiler>. In all other cases, configure looks for a C++ compiler by some common names (c++, g++, gcc, CC, cxx, cc++, cl) and sets CXX to the first compiler it finds. If it does not find any C++ compiler, then it sets CXX="".

    Similarly, if you want to change the command used to link the python executable, then set LINKCC on the configure command line.

    Kinda unfortunate that I can't find that level of detail in modern Python, but maybe I'm not searching hard enough through CPython's GitHub

    Matthew Feickert
    @matthewfeickert

    @henryiii This information on https://bugs.python.org/issue23644 is also very nice. Thanks for taking the time to go find it!

    I don't think that CPython can be built by g++. - STINNER Victor

    This was exactly why I originally had it set to which gcc, but it seems that not explicitly setting compiler flags is the way to go.

    Matthew Feickert
    @matthewfeickert

    @daritter @henryiii Thanks to your help the problem is now resolved: matthewfeickert/Docker-Python3-Ubuntu#3

    @HDembinski Please ignore my ping as the issue no longer exists.

    Henry Schreiner
    @henryiii
    I have just released a series of posts on Azure DevOps, ending with a tutorial on building wheels for a non-trivial binary package (boost-histogram). Start here if you are interested!
    Matthew Feickert
    @matthewfeickert
    These look beautifully written @henryiii!
    Patrick Bos
    @egpbos
    impressive posts @henryiii!
    benkrikler
    @benkrikler
    @henryiii I tried setting up Azure CI with github for a project a few days ago and looked at your work in scikit-hep/particle for some guidance. I couldn't get llvmlite to install for linux and Python2 however, have you faced this issue at all?
    Henry Schreiner
    @henryiii
    I haven’t tried (I guess this is for Numba?), but llvmlite does at least have a manylinux1 Python 2.7 wheel, so I would have expected it to download that and “Just Work”. What problem were you seeing?
    benkrikler
    @benkrikler
    Yes, that's right, it was for Numba. I was installing it from pip, but I didn't try conda, actually; I suppose that would be more reliable. In the default Linux agent it was only finding LLVM 3.8, but llvmlite requires 7.0 or greater.
    Henry Schreiner
    @henryiii
    The easiest way would be to add a container image and use docker - should be a one line addition. I'll check a pip installer later. Python 3 worked, I guess from your comment?
    Contrary to what you might expect, llvmlite does not use any LLVM shared libraries that may be present on the system, or in the conda environment. The parts of LLVM required by llvmlite are statically linked at build time. As a result, installing llvmlite from a binary package does not also require the end user to install LLVM. (For more details on the reasoning behind this, see: Why Static Linking to LLVM?)
    Henry Schreiner
    @henryiii
    That's for pip too. So I don't know why it would try to build, and if it didn't, it should not care what version of LLVM is present.
    benkrikler
    @benkrikler
    I tried using a centos7 image as well but it gave me some other headache, which I've now forgotten. It worked fine for python3, only for python2 was it an issue, which also seemed strange to me.
    Hans Dembinski
    @HDembinski
    @matthewfeickert Sorry for not responding sooner. I am currently on holidays, but it seems that your problem was resolved, thanks everyone who jumped in!
    Tai Sakuma
    @TaiSakuma
    how can you tell which Python versions are supported by each conda-forge package?
    Henry Schreiner
    @henryiii
    Conda forge always supports 2.7, 3.6, and 3.7, unless a package turns one off. You can always check anaconda.org to see what files are listed, like: https://anaconda.org/conda-forge/root/files (Very similar to the way I checked for Python 2.7 wheels on PyPI above)
    Tai Sakuma
    @TaiSakuma
    i see. thank you
    Jonas Eschle
    @jonas-eschle
    You can also use conda search package_name (plus some selection if you like, see the options) to list the package and its builds. There you can see which version is available for which Python version (mostly)
    e.g. to find keras in conda-forge, do conda search keras[channel=conda-forge]
    Tai Sakuma
    @TaiSakuma
    ok. thank you
    Charles Escott
    @EscottC
    Hi
    Jim Pivarski
    @jpivarski
    @EscottC Hi!
    Tai Sakuma
    @TaiSakuma
    Hi
    Tai Sakuma
    @TaiSakuma
    how would you modify __repr__ of a function? would you do this?
    Jonas Eschle
    @jonas-eschle
    This seems the way to go, since repr is implemented as type(func).__repr__(func) with func being your function. Therefore, some kind of wrapping is necessary. Either with a function wrapper as described in your link or, depending on your context (you may even need to change more), by creating a class instead of a function and implementing __call__.
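
    A minimal sketch of why the wrapping is needed (the class and function names here are hypothetical, just for illustration): repr() is looked up on the *type*, not the instance, so assigning __repr__ on a function object has no effect, while a callable class can define its own:

    ```python
    class NiceFunction:
        """Hypothetical wrapper: callable object with a custom repr."""
        def __init__(self, func):
            self.func = func
        def __call__(self, *args, **kwargs):
            return self.func(*args, **kwargs)
        def __repr__(self):
            return f"<nice function {self.func.__name__}>"

    def square(x):
        return x * x

    # Setting __repr__ on the function instance is ignored by repr(),
    # because repr() uses type(square).__repr__ under the hood:
    square.__repr__ = lambda: "ignored"
    assert "ignored" not in repr(square)

    wrapped = NiceFunction(square)
    print(repr(wrapped))  # <nice function square>
    print(wrapped(4))     # 16
    ```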
    Tai Sakuma
    @TaiSakuma
    Thanks. I have been trying. repr() works. But the problem is the decorated function is not picklable.
    Jonas Eschle
    @jonas-eschle

    Did you use functools.wraps? It should be used for any wrapping anyway, like:

    import functools

    def decorator(func):
        @functools.wraps(func)
        def new_func(*args, **kwargs):
            print(f"Wrapped {func}")
            return func(*args, **kwargs)
        return new_func

    The problem arises from a name clash, e.g. your wrapped function has the same name as the unwrapped one. functools.wraps solves that. Explanation e.g. here
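
    A quick sketch (function names hypothetical) of what functools.wraps actually copies over — without it, the wrapper would report its own name and docstring instead of the wrapped function's:

    ```python
    import functools

    def decorator(func):
        @functools.wraps(func)          # copies __name__, __doc__, sets __wrapped__
        def new_func(*args, **kwargs):
            return func(*args, **kwargs)
        return new_func

    @decorator
    def add(a, b):
        "Adds two numbers."
        return a + b

    print(add.__name__)  # add   (without wraps: 'new_func')
    print(add.__doc__)   # Adds two numbers.
    print(add(2, 3))     # 5
    ```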

    Tai Sakuma
    @TaiSakuma
    Yes. That is what I have been trying. I'm not sure where to put wraps.
    The example wraps a function with another function. I need to wrap a function with a class (with __call__()).
    Henry Schreiner
    @henryiii
    This should get you pretty close, but does not copy the signature; you’ll need something more powerful for that, like wrapt or decorator.
    In [11]: def nice_repr(func):
        ...:     class NiceFunction:
        ...:         def __repr__(self):
        ...:             return f"Nice repr function of {func.__name__}"
        ...:         @functools.wraps(func)
        ...:         def __call__(self, *args, **kargs):
        ...:             return func(*args, **kargs)
        ...:     return NiceFunction()
    
    In [17]: @nice_repr
        ...: def f(x: float):
        ...:     'Squares a float'
        ...:     return x**2
    
    In [18]: f
    Out[18]: Nice repr function of f
    
    In [19]: f(2)
    Out[19]: 4
    
    In [20]: f?
    Signature:      f()
    Type:           NiceFunction
    String form:    Nice repr function of f
    Docstring:      <no docstring>
    Call docstring: Squares a float
    Tai Sakuma
    @TaiSakuma
    Thank you.
    I cannot pickle f
    I get AttributeError: Can't pickle local object 'nice_repr.<locals>.NiceFunction'
    This is the code I have now. The pickle at L25 doesn't work.
    Tai Sakuma
    @TaiSakuma
    It appears to be impossible.
    Chris Burr
    @chrisburr
    If it's important enough to justify adding a dependency I think cloudpickle can be used
    Tai Sakuma
    @TaiSakuma
    I have been keeping my code running on both batch systems and multiprocessing. I can use cloudpickle for batch systems. But I don't think that I get to choose a serializer for multiprocessing.
    Henry Schreiner
    @henryiii
    I think you can pull the class outside the decorator. I don’t think there’s any reason it has to be inside. Just add an init where you pass in the function you want to store, and make it a member.
    Nevermind, not quite that easy. Will think about it again on Tuesday.
    Henry Schreiner
    @henryiii
    It’s easy without a decorator, though.
    class NiceFunction:
        def __init__(self, function):
            self.func = function
        def __repr__(self):
            return f"Nice repr function of {self.func.__name__}"
        def __call__(self, *args, **kargs):
            return self.func(*args, **kargs)
    def nice_repr(func):
        return NiceFunction(func)
    def f(x: float):
        'Squares a float'
        return x**2
    ff = nice_repr(f)
    ff(3)
    9
    ff
    Nice repr function of f
    import pickle
    pickle.dumps(ff)
    b'\x80\x03c__main__\nNiceFunction\nq\x00)\x81q\x01}q\x02X\x04\x00\x00\x00funcq\x03c__main__\nf\nq\x04sb.'
    Tai Sakuma
    @TaiSakuma
    Yes. That is true. Because this is possible, I thought it should be possible to do with a decorator, which is just equivalent to doing f = nice_repr(f).
    But that seems to be actually impossible.
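    A small sketch of the constraint being discussed (this example is mine, not from the chat): the default pickle serializes functions and classes *by qualified name*. A module-level wrapper class pickles fine as long as the wrapped function is still reachable under its own name, but the decorator form rebinds that name to the wrapper, so lookup no longer finds the original function:

    ```python
    import pickle

    class NiceFunction:
        # Module-level class, so pickle can locate it by name
        def __init__(self, func):
            self.func = func
        def __repr__(self):
            return f"Nice repr function of {self.func.__name__}"
        def __call__(self, *args, **kwargs):
            return self.func(*args, **kwargs)

    def f(x):
        return x ** 2

    ff = NiceFunction(f)                 # 'f' still names the plain function
    ff2 = pickle.loads(pickle.dumps(ff)) # round-trips fine
    print(ff2(3))                        # 9

    @NiceFunction                        # rebinds 'g' to the wrapper instance...
    def g(x):
        return x + 1

    try:
        pickle.dumps(g)                  # ...so the stored function fails the
    except pickle.PicklingError as err:  # by-name identity check
        print("decorated form fails:", err)
    ```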
    Luke Kreczko
    @kreczko

    OK, it might be simple, but I cannot see it atm.

    I am looking at a dictionary of generators and want to unpack the values. The straightforward way is to unpack the generators first and then match them up against the keys (AFAIK the order is preserved). However, this uses two for-loops and an additional dictionary.

    Is there an easy way to shorten this?

    generators = dict(
        t1=range(0, 20, 2),
        t2=range(10),
        t3=range(0, 100, 10),
    )
    for g in six.moves.zip(*six.itervalues(generators)):
        data = {}
        for name, value in six.moves.zip(generators, g):
            data[name] = value
        print(data)
    # desired output per iteration:
    # {'t1': 0, 't2': 0, 't3': 0}
    # {'t1': 2, 't2': 1, 't3': 10}
    # ...
    Luke Kreczko
    @kreczko
    In the real example the generator is quite I/O heavy so I do not want to have the full range at once
    Luke Kreczko
    @kreczko
    thx @benkrikler - having an intermediate function as a generator (i.e. yield {name: value}) does the job without much additional time
    Henry Schreiner
    @henryiii
    You can make it look a little shorter, but the main way to reduce the output would be to have a generator in the middle:
    def iter_dict(gen):
        for g in six.moves.zip(*six.itervalues(gen)):
            data = {name:value for name, value in six.moves.zip(gen, g)}
            yield data
    
    for item in iter_dict(generators):
        print(item)
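    For what it's worth, on Python 3 alone (no six) the same lazy unpacking can be written with the builtin zip and a dict comprehension; dicts preserve insertion order on 3.7+, so keys and columns stay matched, and zip stops at the shortest generator:

    ```python
    def iter_dict(gen_dict):
        # zip(*values) pulls one item from each generator per step (stays lazy)
        for values in zip(*gen_dict.values()):
            yield dict(zip(gen_dict, values))

    generators = dict(
        t1=range(0, 20, 2),
        t2=range(10),
        t3=range(0, 100, 10),
    )

    results = list(iter_dict(generators))
    print(results[0])  # {'t1': 0, 't2': 0, 't3': 0}
    print(results[1])  # {'t1': 2, 't2': 1, 't3': 10}
    ```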
    Luke Kreczko
    @kreczko
    thx @henryiii !