Hi, ASAN reports that the ptr variable allocated at internals.h:504 (https://github.com/pybind/pybind11/blob/6493f496e30c80f004772c906370c8f4db94b6ec/include/pybind11/detail/internals.h#L504) is not deallocated properly when I call finalize_interpreter a few times.
Here is my code: https://bpa.st/SKVQ
Hello, I have a pybind11 wrapper around a C++ code. The pybind11 code directly returns a (raw) pointer, so:
fiber = fibergroup[0]
del fiber
fiber = fibergroup[0]
segfaults, since I have deleted the object. I'm guessing the only solution is to call a copy constructor (to avoid risking destroying my object), or to use a wrapper around the pointer so that only the wrapper, not the pointee, is destroyed when the variable is deleted.
Is that correct, or is there a way to pass a read-only variable that cannot be deleted, without copying?
The answer was to use py::return_value_policy::reference
Sorry!
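For anyone hitting the same thing, a minimal sketch of that fix (Fiber, FiberGroup, and the __getitem__ lambda are hypothetical stand-ins for the types in the question):

```cpp
#include <pybind11/pybind11.h>
#include <vector>

namespace py = pybind11;

// Hypothetical types standing in for the ones in the question.
struct Fiber { int id; };
struct FiberGroup { std::vector<Fiber> fibers; };

PYBIND11_MODULE(example, m) {
    py::class_<Fiber>(m, "Fiber");
    py::class_<FiberGroup>(m, "FiberGroup")
        // reference: Python borrows the pointer; `del` only drops the
        // Python wrapper, while the C++ object stays owned by the group.
        .def("__getitem__",
             [](FiberGroup &g, std::size_t i) -> Fiber & { return g.fibers.at(i); },
             py::return_value_policy::reference);
}
```

Note the borrowed reference dangles if the group is destroyed first; adding py::keep_alive<0, 1>() ties the group's lifetime to the returned object.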
Nicolas Sommer

Hi, is there a way to convert exceptions from C++ to Python without the throw/catch mechanism? For instance, I'd like to add bindings for a C++ function that returns a std::exception so that it returns a RuntimeError in Python.
For instance, with a pybind function defined like this: m.def("get_exception", [](){return std::runtime_error("foo");});
If I try to call this function from Python, I run into the following error:

TypeError: Unable to convert function return value to a Python type! The signature was
    () -> std::exception

Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,
<pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic
conversions are optional and require extra headers to be included
when compiling your pybind11 module.

I don't think there is any header that solves this issue, though.
One workaround is to throw the exception from C++ and catch it in Python, but is it possible to avoid that?
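One possible workaround sketch: build the RuntimeError *object* (without raising it) from the C++ exception's message, rather than relying on a type caster for std::exception:

```cpp
#include <pybind11/pybind11.h>
#include <stdexcept>

namespace py = pybind11;

PYBIND11_MODULE(example, m) {
    // Returns a RuntimeError instance to Python; nothing is thrown
    // on either side, so no throw/catch round-trip is needed.
    m.def("get_exception", []() {
        std::runtime_error err("foo");
        py::object runtime_error =
            py::module_::import("builtins").attr("RuntimeError");
        return runtime_error(err.what());  // construct the exception object
    });
}
```

In Python, `isinstance(example.get_exception(), RuntimeError)` would then hold, and the caller can `raise` it explicitly if desired.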

2 replies
I'm curious to know if it would be possible to embed the interpreter in such a way that the end user could select the Python version used. Consider a situation with multiple venvs based on different Python versions: 3.8, 3.9, 3.10. When running the application, the user can choose the venv. Having played with embedding, it looks like the only way to do this would be to have multiple .so/.dlls, each set up to support a different Python version, and then load the correct one based on the venv chosen. Is that correct?
A pybind11 class cannot be used with Python's copy.copy method; it throws a pickle serialization exception.
Is there any solution?
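One fix is to bind __copy__ and __deepcopy__ explicitly, since copy.copy falls back to the pickle protocol when they are absent; a sketch with a hypothetical copyable type Thing:

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

struct Thing { int value = 0; };  // hypothetical copy-constructible type

PYBIND11_MODULE(example, m) {
    py::class_<Thing>(m, "Thing")
        .def(py::init<>())
        .def_readwrite("value", &Thing::value)
        // copy.copy / copy.deepcopy use these instead of pickle:
        .def("__copy__", [](const Thing &self) { return Thing(self); })
        .def("__deepcopy__",
             [](const Thing &self, py::dict /*memo*/) { return Thing(self); },
             py::arg("memo"));
}
```

With these defined, `copy.copy(t)` and `copy.deepcopy(t)` both invoke the C++ copy constructor instead of raising.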
Artem Erofeev
Hi folks! Anyone has experience marrying pybind11 with PySide2? I can't manage to bind an object with QObject inheritance. Need some help

Hi, I want to use mpdecimal library (C++ version decimal.Decimal) in C++ code, for example:

struct MyClass {
  decimal::Decimal a;
  decimal::Decimal b;

When I export MyClass to a Python module using pybind11, how should I convert a and b? Will pybind11 convert them to the Python decimal.Decimal type?


If I add 2 methods to MyClass and use def_property:

struct MyClass {
  decimal::Decimal a;
  decimal::Decimal b;

  py::object get_a_as_pydecimal() const;
  void set_a_as_pydecimal(py::object obj);

PYBIND11_MODULE(my_module, m) {
  py::class_<MyClass>(m, "MyClass")
  .def_property("a", &MyClass::get_a_as_pydecimal, &MyClass::set_a_as_pydecimal)

I can transfer a via a -> std::string -> Python decimal.Decimal, but I don't want the extra string conversion. Is there a way to convert a C++ decimal::Decimal directly to a Python decimal.Decimal?


Hello, I have a class similar to the following

class Test {

I've written the following binding

    py::class_<Test>(m, "Test")

but this seems to be equivalent to Test test();

What I want instead is to bind the constructor so that it is called like so Test test; (without the parentheses), but I cannot remove the default Test() constructor from the C++ source. Is there a way to do that?

Mikaël Capelle
@NaderAlAwar You need to explain what you actually want to happen. Test test(); declares a function test, this does not declare a variable. You'd have Test test{}; vs. Test test; but these are equivalent since Test has a default constructor.
@Holt59 My bad, I wasn't clear enough. I want to call the test constructor from Python, but it seemed to me that calling the constructor that I bound is equivalent to calling Test test();, when I want to call Test test{}; or Test test;. Does that make sense?
Mikaël Capelle
@NaderAlAwar If it was equivalent to Test test();, you'd have a compile time error because Test test(); does not construct anything. Your code actually calls the default constructor of Test, which is called with both Test test{}; and Test test; (and not with Test test();).
Tim Nonet
Can anyone assist me or point me towards some documentation on why, when I run on GitHub Actions, my pytest suite raises pybind11::error_already_set: SystemError:? I am unable to reproduce it locally.
I am looking for a way to write a type caster for a boost::container::vector. Although an example for a boost::optional is given in the documentation, I couldn't get the former to work. Any hints?
As I can't afford a copy of the vector, I also specified PYBIND11_MAKE_OPAQUE(boost::container::vector<SomeStruct>)
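Given the PYBIND11_MAKE_OPAQUE above, one option is py::bind_vector, which exposes the container as a Python class without copying; I haven't verified this compiles against boost::container::vector specifically, but bind_vector only relies on a std::vector-like interface, so a sketch:

```cpp
#include <pybind11/pybind11.h>
#include <pybind11/stl_bind.h>
#include <boost/container/vector.hpp>

namespace py = pybind11;

struct SomeStruct { int x; };

// Keep the container opaque so it is never copied element-by-element.
PYBIND11_MAKE_OPAQUE(boost::container::vector<SomeStruct>);

PYBIND11_MODULE(example, m) {
    py::class_<SomeStruct>(m, "SomeStruct")
        .def(py::init<>())
        .def_readwrite("x", &SomeStruct::x);
    // Exposes __len__, __getitem__, append, etc. over the C++ storage.
    py::bind_vector<boost::container::vector<SomeStruct>>(m, "SomeStructVector");
}
```

A custom type_caster (copying) would instead derive from py::detail::list_caster, but that conflicts with the no-copy requirement here.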
brice rebsamen

I have a pybind11 binding around a function that returns a std::array<const char*, 2>. pybind11 seems to think that it's a List[str[2]], because the associated mypy test complains that

__init__.pyi:29: error: "str" expects no type arguments, but 1 given  [type-arg]
    def extended_supported_modes() -> typing.List[str[2]]:

is this a bug? Is there something I can do to work around this issue?

16 replies
brice rebsamen

my next issue is more complicated. I have a C++ class that represents a generic time series, and some concrete instantiations for a Pose type to represent a trajectory. I'm having issues with mypy and the bindings, on the __iter__ method.

TimeSeries is templated, and under the hood it's a deque<T>.
I also have generic code to get some bindings around it, a bit like this:

template<typename TimeSeriesT, typename ElementT>
py::class_<TimeSeriesT> define_time_series(const std::string& python_class_name, py::module& m) {
  py::class_<TimeSeriesT> time_series(m, python_class_name.c_str());

  time_series.def("__iter__",
         [](const TimeSeriesT& traj) {
             return py::make_iterator(traj.begin(), traj.end());
         },
         py::keep_alive<0, 1>());

  return time_series;
}
Then I create a binding for a concrete type Trajectory = TimeSeries<Pose>, a bit like this:

PYBIND11_MODULE(trajectory_py, m) {
  auto pose_py_module = m.import("vehicle.localization.types.pose_py");
  m.add_object("Pose", pose_py_module.attr("Pose"));
  auto trajectory =
    define_time_series<TimeSeries<Pose>, Pose>("Trajectory", m);
}

It works, but mypy complains with:

error: Missing type
parameters for generic type "Iterator"  [type-arg]
        def __iter__(self) -> typing.Iterator: ...

the type here should be typing.Iterator[Pose]. So is this a bug with pybind11/stubgen? Or is it something in my code, and if so, what should I do instead?

3 replies
Taiyu Guo
Is it possible to build a manylinux wheel? It seems like the .whl generated is specific to the Python version for which I ran python setup.py bdist_wheel (which calls cmake etc.)
2 replies
https://stackoverflow.com/questions/58169758/building-lib-with-pybind11-linking-other-shared-lib makes mention of RPATH, but I have no experience with that. Could someone shed some light on this?
Henry Schreiner
My recent slides from PyCon might be helpful: https://www.slideshare.net/HenrySchreiner/pycon2022-building-python-extensions (talk recording will take a few weeks to publish, I expect)
Jonathan Shimwell
Hi pybind11 experts, I'm just working through adapting the nice minimal pybind11 example that you have on GitHub. The only variation from the example that I am keen to make for my own project is the inclusion of an hpp file from a conda package. I normally install this package using mamba install -c conda-forge dagmc, however this doesn't get the hpp file into the Python example project that I've adapted. Is there a way to include an hpp file not using a local path, but pointing to an hpp file that belongs to a conda package? The specific line that I want to add to the example is here on a git repo
Arash Badie-Modiri

Hi, I was trying to figure out how to do custom metaclasses without breaking everything. The end goal is custom representation for my binding classes, so something like this, but on the C++ side:

class Meta(type):
  def __new__(mcs, name, bases, class_dict):
    class_ = super().__new__(mcs, name, bases, class_dict)
    class_.pretty_name = "Hello I'm class " + name
    return class_
  def __repr__(self):
    return "<"+self.pretty_name+">"

class SomeClass(metaclass=Meta):
  pass

print(SomeClass)  # => "<Hello I'm class SomeClass>"

I saw a couple of issues about it (pybind/pybind11#2696 and pybind/pybind11#2977) but apparently without any resolution. My problems so far:

0- Is there a better way to achieve the intended goal?
1- The custom metaclass should (presumably) inherit from the pybind11 default metaclass (py::detail::get_internals().default_metaclass?)
2- The actual classes should indicate the custom metaclass as their... well, metaclass. I guess something like py::class_<SomeClass>(.., .., py::metaclass(other_class_handle))?

Any suggestions on how to bind functions accepting std::unique_ptr as parameters? And why is this so hard that it isn't already implemented?
I wonder how folks are getting away with it, because many libs use std::unique_ptr semantics.
2 replies
Is there any unsafe type_caster which does the job? I'm OK with giving up ownership even if the object is referenced elsewhere.
Why doesn't pybind11 implement a default type_caster like so: when the reference count is 1, give up ownership safely, or return nullptr otherwise?
Dustin Spicuzza
@deadlocklogic check out the smart_holder branch, it has support for disowning unique_ptr among other things
@virtuald Thanks. I've been analyzing this branch and it seems the only solution so far. But it introduces more syntactic changes but it is ok.
Do I need to refactor anything other than what is mentioned in the readme?
Dustin Spicuzza
the readme should be sufficient. we've been using it since January, no complaints
@virtuald Just a question: is it safe to define all my types with py::classh (even custom ref-counted ones, or are those an exception)?
Dustin Spicuzza
I dunno. I think it's supposed to work basically the same as the normal class, but with the extra functionality... so probably works? FWIW, I just use PYBIND11_USE_SMART_HOLDER_AS_DEFAULT and py::class_
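For reference, a sketch of the smart_holder usage discussed above, based on my reading of the branch README (Widget and consume are hypothetical names; I haven't compiled this against the branch):

```cpp
// Assumes the pybind11 smart_holder branch is installed.
#include <pybind11/smart_holder.h>
#include <memory>

namespace py = pybind11;

struct Widget { int v = 0; };

// Required in "conservative" mode so casters pick up the smart holder.
PYBIND11_SMART_HOLDER_TYPE_CASTERS(Widget)

PYBIND11_MODULE(example, m) {
    // classh is class_ with smart_holder as the holder type.
    py::classh<Widget>(m, "Widget")
        .def(py::init<>());
    // The smart holder can disown, so unique_ptr parameters become bindable;
    // the Python object is invalidated after ownership is transferred.
    m.def("consume", [](std::unique_ptr<Widget> w) { return w->v; });
}
```

With PYBIND11_USE_SMART_HOLDER_AS_DEFAULT defined, plain py::class_ behaves this way and the macro line is unnecessary.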
Matthijs van der Burgh

I have a problem with pybind11 doing some weird stuff to the return value of my hash function. I am using pybind11 2.9.1; I have tested with 2.9.2 too, which didn't solve the issue.

Seed: '8386164648621503828' //  Hash after x
Seed: '17979886851897066944' // Hash after adding y
Seed: '14435809606359582927' // Hash after adding z
Vector: '14435809606359582927' // Final value in cpp
Vector(ssize_t): '-4010934467349968689' // static_cast in cpp
Vector(unsigned long): '14435809606359582927' // static_cast in cpp
Vector(long): '-4010934467349968689' // static_cast in cpp
Vector(u_int64_t): '14435809606359582927' // static_cast in cpp
Vector(int64_t): '-4010934467349968689' // static_cast in cpp
Vector(u_int32_t): '1346111695' // static_cast in cpp
Vector(int32_t): '1346111695' // static_cast in cpp
Out[5]: 600751551077419221  // Output in python

My wrapper function is just copied from an example in the repository:


CPP hash function

template <class T>
inline void hash_combine(std::size_t& seed, const T& v) {
  std::hash<T> hasher;
  seed ^= hasher(v) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
  std::cout << "Seed: '" << seed << "'" << std::endl;
}

template<> struct std::hash<KDL::Vector> {
    std::size_t operator()(KDL::Vector const& v) const noexcept {
        size_t seed = 0;
        KDL::hash_combine(seed, v.x());
        KDL::hash_combine(seed, v.y());
        KDL::hash_combine(seed, v.z());
        std::cout << "Vector: '" << seed << "'" << std::endl;
        std::cout << "Vector(ssize_t): '" << static_cast<ssize_t>(seed) << "'" << std::endl;
        std::cout << "Vector(unsigned long): '" << static_cast<unsigned long>(seed) << "'" << std::endl;
        std::cout << "Vector(long): '" << static_cast<long>(seed) << "'" << std::endl;
        std::cout << "Vector(u_int64_t): '" << static_cast<u_int64_t>(seed) << "'" << std::endl;
        std::cout << "Vector(int64_t): '" << static_cast<int64_t>(seed) << "'" << std::endl;
        std::cout << "Vector(u_int32_t): '" << static_cast<u_int32_t>(seed) << "'" << std::endl;
        std::cout << "Vector(int32_t): '" << static_cast<int32_t>(seed) << "'" << std::endl;
        return seed;
    }
};
9 replies
What about global variables? Is there any way to register them on a module? In my case they are variables of custom types (not fundamental/primitive ones). I've barely found anything on this topic other than creating a class and attaching them to it, which adds another level of indirection, even though pure Python has no problem with module-level variables.
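Module-level attributes can hold instances of custom types directly; a sketch with a hypothetical Config type:

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

struct Config { int verbosity = 1; };  // hypothetical custom type

PYBIND11_MODULE(example, m) {
    py::class_<Config>(m, "Config")
        .def(py::init<>())
        .def_readwrite("verbosity", &Config::verbosity);

    // Module-level "variable": any castable value can be attached as an
    // attribute. The type must be registered before casting an instance.
    m.attr("default_config") = py::cast(Config{});
}
```

One caveat: assigning example.default_config = ... from Python rebinds the module attribute to a new object; it does not mutate the original C++ instance.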
3 replies
Tom de Geus
I use cls.def_property("foo", &S::foo, &S::set_foo); with const xt::pyarray<double>& foo() const { return m_foo; };, a reference to a 'NumPy' array held as a class member. The issue is: as long as I assign myvar.foo = np.array(...), set_foo is called. However, when I subindex, myvar.foo[0, 1] = 10, the underlying memory of m_foo is correctly changed, but set_foo is not called, and the const-ness of the reference to m_foo is violated. This causes problems because my API is such that some operations are performed if m_foo is changed. From C++ this guarantee is enforced, but somehow in the Python API it is lost.
4 replies
Sylwester Arabas

Hello, we are using pybind11 to interface ... a Fortran code from Matlab :) (the project is here: https://github.com/open-atmos/PyPartMC). Basic things like instantiation of an object (i.e., a C++ class wrapping the Fortran code) that work when called from Python stop working when called from the Matlab Python interface. More precisely, with this snippet:

ppmc = py.importlib.import_module('PyPartMC');
ver = char(py.getattr(ppmc, "__version__"))
system(['ldd ' char(py.getattr(ppmc, "__file__"))]); 
GasState = ppmc.GasState;
gas_state = GasState();

we get:

ver =


      linux-vdso.so.1 (0x00007fffe3fe0000)
      libgfortran.so.5 => /usr/local/MATLAB/R2022a/sys/os/glnxa64/libgfortran.so.5 (0x00007f184f42d000)
      libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f184f2ce000)
      libmvec.so.1 => /lib/x86_64-linux-gnu/libmvec.so.1 (0x00007f184f2a2000)
      libgcc_s.so.1 => /usr/local/MATLAB/R2022a/sys/os/glnxa64/libgcc_s.so.1 (0x00007f184f288000)
      libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f184f094000)
      libstdc++.so.6 => /usr/local/MATLAB/R2022a/sys/os/glnxa64/libstdc++.so.6 (0x00007f184eebe000)
      libquadmath.so.0 => /usr/local/MATLAB/R2022a/sys/os/glnxa64/libquadmath.so.0 (0x00007f184ee75000)
      /lib64/ld-linux-x86-64.so.2 (0x00007f184fbc8000)

  Cannot find class 'py.pybind11_builtins.pybind11_type'.

So, importing works OK, reading __version__ also, but instantiation of an object fails. I'm guessing that the "Cannot find class 'py.pybind11_builtins.pybind11_type'" error (which is not thrown when executing the same code outside of Matlab) might be related to the MATLAB-shipped shared libraries (as shown above in the ldd output), but does anyone here have a hint how to solve it? Thanks

4 replies
Sergei Izmailov
FYI: mypy integrates stub-generation tests for pybind11:
I have a simple C++ class Size, with width and height fields. On the Python side, I would like to unpack the w and h, like "w, h = Size(1, 2)". Is there a simple way to accomplish this in the bindings? I believe I need to support an iterator, but is there a simple way to implement an iterator that returns those two fields?
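One way to avoid writing a custom iterator type entirely: tuple unpacking only needs an iterable, so __iter__ can return the iterator of a temporary Python tuple. A sketch (assuming public width/height fields as described):

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

struct Size { int width, height; };

PYBIND11_MODULE(example, m) {
    py::class_<Size>(m, "Size")
        .def(py::init<int, int>())
        // Build a 2-tuple and hand back its iterator; the iterator keeps
        // the tuple alive, so no lifetime bookkeeping is needed.
        .def("__iter__", [](const Size &s) {
            py::tuple t = py::make_tuple(s.width, s.height);
            return t.attr("__iter__")();
        });
}
```

After this, `w, h = example.Size(1, 2)` unpacks as desired. The values are copies, which is fine for two ints.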
1 reply
@Holt59: Thanks! I'll try that. I actually tried something a bit similar, but with a local std::vector. That, of course, gets freed at the end of the function, leading to garbage values.
1 reply
Roman Steinberg

Hello, everyone! I'm trying to call a Python routine from C++. If I call it once, everything is OK (afaik). But I need to do it in a loop. The simple example is here. This code was copied and shortened, so I hope it is able to run (sorry if not), but anyway it should give the idea. On the second loop iteration it fails with

terminate called after throwing an instance of 'pybind11::error_already_set'
  what():  SystemError: ../Objects/structseq.c:398: bad argument to internal function

when the py::module_ is created. I decided to make a static variable

static py::module_* api_module = nullptr;

and it helped. But then, on the second loop iteration, it fails again with

terminate called after throwing an instance of 'pybind11::error_already_set'
  what():  TypeError: 'NoneType' object is not callable

when the routine is called from C++ (py::tuple result = api_module.attr("refine_frame")(py_frame);). I think I don't understand something about how to manage repeated Python calls of the same function with pybind11. Can anyone help?
Also, I do no memory management for the arguments provided to and the results obtained from Python. Should I do something about that?
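The symptoms above are consistent with creating and tearing down interpreter state inside the loop; a sketch of the usual structure, with the interpreter initialized once for the whole program (the module name "api_module" and function "refine_frame" are taken from the question; the actual module is hypothetical here):

```cpp
#include <pybind11/embed.h>

namespace py = pybind11;

int main() {
    // Initialize the interpreter exactly once; repeated
    // initialize/finalize cycles are fragile with extension modules.
    py::scoped_interpreter guard{};

    // Import once, outside the loop, and keep the handle alive.
    py::module_ api_module = py::module_::import("api_module");
    py::object refine_frame = api_module.attr("refine_frame");

    for (int i = 0; i < 10; ++i) {
        py::object result = refine_frame(i);
        // py::object does the refcounting: arguments and results are
        // released automatically when the handles go out of scope.
    }
    return 0;
}  // guard finalizes the interpreter here
```

This also answers the memory-management question: as long as values stay in py::object/py::tuple handles, no manual Py_INCREF/Py_DECREF is needed.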

6 replies

Hello! I'm trying to understand and fix a problem I'm facing with some bindings. I'm new to Pybind and to C++ smart pointers, so please be kind! I'm trying to learn :(

I have a binding for a C++ function which returns a smart pointer (shared_ptr) to an Eigen matrix (Eigen::Matrix4d).

If I define the binding like this:

.def("feed_monocular_frame", &system::feed_monocular_frame, py::arg("img"), py::arg("timestamp"), py::arg("mask") = cv::Mat{})

Compilation fails with the following error: static assertion failed: Holder classes are only supported for custom types

Based on what I was able to find here, this happens because "pybind11 does not know what to do with it, because Eigen matrices get converted into np.ndarray and there's no way to hide the std::shared_ptr refcounting in there" (quoting the user YannickJadoul).

So I tried wrapping the binding inside a lambda function:

.def("feed_monocular_frame", [](stella_vslam::system &self, const cv::Mat &img, const double timestamp, const cv::Mat& mask = cv::Mat{}) { auto matrix = *self.feed_monocular_frame(img, timestamp, mask); return matrix;}, py::arg("img"), py::arg("timestamp"), py::arg("mask") = cv::Mat{});

Doing it this way, everything compiles fine. But when I call the method from a Python script, a segmentation fault occurs every time.

I think I'm missing some extra step or doing something wrong. How could I define the binding while being able to return the matrix to Python? Do I need to do a conversion inside the lambda function? Any help is appreciated!
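One thing worth checking: the crash pattern is consistent with dereferencing an empty shared_ptr (e.g. when tracking fails and the function returns null). A guarded version of the lambda, as a sketch (stella_vslam::system and feed_monocular_frame are the names from the question):

```cpp
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>

namespace py = pybind11;

// Inside the py::class_<stella_vslam::system> chain:
.def("feed_monocular_frame",
     [](stella_vslam::system &self, const cv::Mat &img, double timestamp,
        const cv::Mat &mask) -> py::object {
         std::shared_ptr<Eigen::Matrix4d> pose =
             self.feed_monocular_frame(img, timestamp, mask);
         if (!pose) {
             return py::none();   // don't dereference a null result
         }
         return py::cast(*pose);  // copy the matrix out as a NumPy array
     },
     py::arg("img"), py::arg("timestamp"), py::arg("mask") = cv::Mat{})
```

Returning by value (the copy) is the right call here, since the shared_ptr's refcount cannot be carried across into the ndarray; the guard just makes the null case explicit instead of a segfault.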

1 reply
Did you try running nm or readelf on their ELF files?