Discussion of Python in High Energy Physics https://hepsoftwarefoundation.org/activities/pyhep.html
@HDembinski As I've been trying to figure out the issues that pyhf is having with iminuit this weekend, I've run into a problem where installing iminuit in an Ubuntu 18.04 Docker image with Python 3.6.8 installed from source fails. I have a short Gist that describes what's going on, and if you have any thoughts on what might be going wrong that would be great:
https://gist.github.com/matthewfeickert/284cbddc4a60aca2dcda29c354189b35
I swapped CXX_VERSION="$(which gcc)" for CXX_VERSION="$(which g++)" and tried to rebuild the Docker image, but this just results in CPython complaining loudly and then failing during the build. So unless I'm doing something stupid I guess that needs to be gcc.
configure picks CXX itself if it isn't given, so it really seems like it should be g++. Never tried passing it explicitly either.
From the looks of it, @henryiii's and @daritter's suggestion of removing the --with-cxx-main flag seems to do the trick. I rebuilt and was able to build iminuit in the container. :+1: I'll need to do some experimentation with the configure options, but I found the following from the old SVN Python 2.7 trunk very helpful:
--with-cxx-main=<compiler>: If you plan to use C++ extension modules, then -- on some platforms -- you need to compile python's main() function with the C++ compiler. With this option, make will use <compiler> to compile main() and to link the python executable. It is likely that the resulting executable depends on the C++ runtime library of <compiler>. (The default is --without-cxx-main.)
There are platforms that do not require you to build Python with a C++ compiler in order to use C++ extension modules. E.g., x86 Linux with ELF shared binaries and GCC 3.x, 4.x is such a platform. We recommend that you configure Python --without-cxx-main on those platforms because a mismatch between the C++ compiler version used to build Python and to build a C++ extension module is likely to cause a crash at runtime.
The Python installation also stores the variable CXX that determines, e.g., the C++ compiler distutils calls by default to build C++ extensions. If you set CXX on the configure command line to any string of non-zero length, then configure won't change CXX. If you do not preset CXX but pass --with-cxx-main=<compiler>, then configure sets CXX=<compiler>. In all other cases, configure looks for a C++ compiler by some common names (c++, g++, gcc, CC, cxx, cc++, cl) and sets CXX to the first compiler it finds. If it does not find any C++ compiler, then it sets CXX="".
Similarly, if you want to change the command used to link the python executable, then set LINKCC on the configure command line.
Kinda unfortunate that I can't find that level of detail for modern Python, but maybe I'm not searching hard enough through CPython's GitHub.
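If it helps for debugging, one way to see what a given interpreter was actually configured with (just a sketch, not from the thread) is to ask sysconfig; the CC/CXX/MAINCC Makefile variables may not be exposed on every build, in which case get_config_var returns None.

import sysconfig

# What the interpreter in the image was actually built with; these are
# Makefile variables and are only present on builds that expose them.
print(sysconfig.get_config_var("CC"))      # e.g. gcc
print(sysconfig.get_config_var("CXX"))     # e.g. g++, used by distutils for C++ extensions
print(sysconfig.get_config_var("MAINCC"))  # compiler used to compile/link main()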
@henryiii This information on https://bugs.python.org/issue23644 is also very nice. Thanks for taking the time to go find it!
I don't think that CPython can be built by g++. - STINNER Victor
This was exactly why I originally had it set to which gcc, but it seems that not explicitly setting compiler flags is the way to go.
@daritter @henryiii Thanks to your help the problem is now resolved: matthewfeickert/Docker-Python3-Ubuntu#3
@HDembinski Please ignore my ping as the issue no longer exists.
Contrary to what you might expect, llvmlite does not use any LLVM shared libraries that may be present on the system, or in the conda environment. The parts of LLVM required by llvmlite are statically linked at build time. As a result, installing llvmlite from a binary package does not also require the end user to install LLVM. (For more details on the reasoning behind this, see: Why Static Linking to LLVM?)
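A tiny sanity check along those lines (a sketch, assuming llvmlite is installed): because the LLVM pieces are statically linked into the extension, you can query the bundled LLVM version from Python with no system LLVM present.

import llvmlite
import llvmlite.binding as llvm

print(llvmlite.__version__)    # llvmlite package version
print(llvm.llvm_version_info)  # version tuple of the statically linked LLVM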
For keras in conda-forge, do conda search keras[channel=conda-forge]
repr is implemented as type(func).__repr__(func), with func being your function. Therefore, some kind of wrapping is necessary: either with a function wrapper as described in your link or, depending on your context (you may even need to change more), by creating a class instead of a function and implementing __call__.
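A minimal illustration of that point (the function name f here is just an example): attaching a __repr__ to the function object itself is ignored, because repr() always dispatches through the type.

def f(x):
    return x ** 2

f.__repr__ = lambda: "nice repr of f"  # stored on the instance, so repr() ignores it
print(repr(f))                         # <function f at 0x...>
print(repr(f) == type(f).__repr__(f))  # True: repr(f) goes through type(f).__repr__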
Did you use functools.wraps? It should be used anyway for any wrapping, like:
import functools

def decorator(func):
    @functools.wraps(func)  # copy __name__, __doc__, etc. from func
    def new_func(*args, **kwargs):
        print(f"Wrapped {func}")
        return func(*args, **kwargs)
    return new_func
The problem arises from a name clash, e.g. your wrapped function has the same name as the unwrapped one. functools.wraps solves that. Explanation e.g. here (the same applies when wrapping with a class that implements __call__()).
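A short demo of that name clash (the decorator and function names here are just illustrative): without functools.wraps the wrapper's own name leaks through; with it, the original name and docstring are restored.

import functools

def plain_decorator(func):
    def new_func(*args, **kwargs):
        return func(*args, **kwargs)
    return new_func

def wrapped_decorator(func):
    @functools.wraps(func)  # copies __name__, __doc__, __module__, ...
    def new_func(*args, **kwargs):
        return func(*args, **kwargs)
    return new_func

def square(x):
    "Squares x"
    return x ** 2

print(plain_decorator(square).__name__)    # new_func  (original name lost)
print(wrapped_decorator(square).__name__)  # square    (restored by functools.wraps)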
In [11]: def nice_repr(func):
    ...:     class NiceFunction:
    ...:         def __repr__(self):
    ...:             return f"Nice repr function of {func.__name__}"
    ...:         @functools.wraps(func)
    ...:         def __call__(self, *args, **kargs):
    ...:             return func(*args, **kargs)
    ...:     return NiceFunction()
In [17]: @nice_repr
    ...: def f(x: float):
    ...:     'Squares a float'
    ...:     return x**2
In [18]: f
Out[18]: Nice repr function of f
In [19]: f(2)
Out[19]: 4
In [20]: f?
Signature: f()
Type: NiceFunction
String form: Nice repr function of f
Docstring: <no docstring>
Call docstring: Squares a float
However, trying to pickle f (now a NiceFunction instance from the decorator above) fails, because pickle needs the class to be importable by name and NiceFunction is local to nice_repr:
AttributeError: Can't pickle local object 'nice_repr.<locals>.NiceFunction'
Moving the class to module level fixes that:
class NiceFunction:
    def __init__(self, function):
        self.func = function

    def __repr__(self):
        return f"Nice repr function of {self.func.__name__}"

    def __call__(self, *args, **kargs):
        return self.func(*args, **kargs)


def nice_repr(func):
    return NiceFunction(func)


def f(x: float):
    'Squares a float'
    return x**2


ff = nice_repr(f)
f(3)
9
f
<function __main__.f(x: float)>
import pickle
pickle.dumps(ff)
b'\x80\x03c__main__\nNiceFunction\nq\x00)\x81q\x01}q\x02X\x04\x00\x00\x00funcq\x03c__main__\nf\nq\x04sb.'
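For completeness, a quick round trip (reusing ff and pickle from above) shows the unpickled wrapper still calls through to f and keeps the custom repr:

restored = pickle.loads(pickle.dumps(ff))  # rebuild the NiceFunction instance
print(restored(3))  # 9
print(restored)     # Nice repr function of f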