Andrew Corrigan
@andrewcorrigan
where would I get the actual number of exponents from? would the polynomial containing the monomial give me the right size, as was the case with packed_monomial?
Francesco Biscani
@bluescarni
yes exactly. you can see an example of this in the function that computes the degree of a d_packed_monomial:
https://github.com/bluescarni/obake/blob/master/include/obake/polynomials/d_packed_monomial.hpp#L902
Andrew Corrigan
@andrewcorrigan
got it, thanks!
Francesco Biscani
@bluescarni
no problem! when you have time, I'd be interested to hear how obake is working for you wrt piranha, both in terms of API and in terms of performance.
Andrew Corrigan
@andrewcorrigan
so far with packed_monomial, without overflow, I saw an immediate 2x speed-up
Francesco Biscani
@bluescarni
that's great to hear
let me know if you find weak spots wrt piranha. obake was built to be faster but there could always be instances where it ends up being slower for some reason.
are you normally using polynomials over the rationals?
Andrew Corrigan
@andrewcorrigan
In terms of API, overall it's great and it's wonderful to have such a high-quality, open-source library available. You've helped me with my main struggles switching over. A few minor differences I noticed:
1. in Piranha, I could divide polynomials as long as the denominator was actually just a rational number. In Obake, it seems to require an explicit conversion to a rational number to compile. Maybe that's better than a run-time error, but it was a difference that I noticed.
2. In Piranha, I could pass a string into the constructor, like Polynomial{"x"}, Polynomial{"y"}. In Obake, I think that compiles, but it didn't work. Instead, I had to change those to make_polynomials<Polynomial>("x", "y"); I like the make_polynomials function, but would prefer the string constructor to fail to compile rather than work in an unexpected way.
3. I like that in Obake the range-based for-loop to iterate over the monomials and coefficients of a polynomial returns monomial as a monomial type, whereas in Piranha the monomial was still a polynomial type (I thought that was confusing).
Once I get d_packed_monomial working, I can measure performance for less trivial cases
yes, always over rationals
Francesco Biscani
@bluescarni
Regarding 2, the fact that the string constructor is still there but does not operate as before is an unfortunate consequence of the fact that you can construct rationals from a string. In piranha the poly ctor from string had a special meaning, while in obake the poly ctors other than copy/move simply forward the construction to the coefficient type.
Regarding the use of rationals as polynomial coefficients, obake does not yet implement an optimisation that piranha had: piranha used to convert all rational coefficients in a poly multiplication to a common denominator, thus transforming a rational poly multiplication into an integer poly multiplication. This essentially means avoiding a lot of GCD computations which are not needed. obake will also have this optimisation, but it does not have it at the moment.
Andrew Corrigan
@andrewcorrigan
wow, so it can get even faster?
Francesco Biscani
@bluescarni
it should, but only if the polys are long enough, I believe. In my tests I always used rather large polynomials, and there the difference was very visible.
it may be that for shorter polynomials the difference is not that great, but it's something that needs to be studied/tuned.
Andrew Corrigan
@andrewcorrigan
large is important for me too
Francesco Biscani
@bluescarni
are your rational coefficients large in terms of bit width?
Andrew Corrigan
@andrewcorrigan
I don't know to be honest
how does that relate to digits?
sorry, I'm a bit ignorant of this stuff
Francesco Biscani
@bluescarni
no problem: just take the number of base-10 digits and multiply by ~3.3 (that is, log(10)/log(2))
Andrew Corrigan
@andrewcorrigan
is that like if I could store the numerator/denominator using 32-bit / 64-bit / 128-bit integers?
Francesco Biscani
@bluescarni
yes
Andrew Corrigan
@andrewcorrigan
etc.
ok
Francesco Biscani
@bluescarni
I just wanted to get a feeling of whether your numerators/denominators are "small" (i.e., < 2**64) or not
Andrew Corrigan
@andrewcorrigan
I'm checking
yeah, they're small
well, at least at first; we perform many operations, so I imagine they grow quite a bit
Francesco Biscani
@bluescarni
ok, I see. I am pretty sure you'll see a speedup after I implement that optimisation, even though it might not be easy to say by how much exactly.
Andrew Corrigan
@andrewcorrigan
honestly, Piranha was already blazing fast (compared to SymPy), but I just implemented d_packed_monomial, it's working great, and for a much larger problem it's close to 2x faster
Francesco Biscani
@bluescarni
great!
Andrew Corrigan
@andrewcorrigan
Here's an example of some of the errors that -Werror triggers when the TBB headers are included:
```
[ 69%] Building CXX object /obake/CMakeFiles/obake.dir/src/series.cpp.o
In file included from /obake/src/series.cpp:14:
In file included from /obake/include/obake/series.hpp:41:
In file included from /third_party/tbb/include/tbb/blocked_range.h:20:
/third_party/tbb/include/tbb/tbb_stddef.h:350:16: error: use of old-style cast [-Werror,-Wold-style-cast]
return 0==((uintptr_t)pointer & (alignment-1));
^          ~~~~~~~
/third_party/tbb/include/tbb/tbb_stddef.h:476:33: error: use of old-style cast [-Werror,-Wold-style-cast]
static const size_t value = (size_t)((sizeof(size_t)==sizeof(u)) ? u : ull);
^       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /obake/src/series.cpp:14:
In file included from /obake/include/obake/series.hpp:42:
In file included from /third_party/tbb/include/tbb/parallel_for.h:21:
In file included from /third_party/tbb/include/tbb/tbb_machine.h:235:
In file included from /third_party/tbb/include/tbb/machine/linux_intel64.h:24:
/third_party/tbb/include/tbb/machine/gcc_ia32_common.h:29:12: error: implicit conversion changes signedness: 'uintptr_t' (aka 'unsigned long') to 'intptr_t' (aka 'long') [-Werror,-Wsign-conversion]
return j;
~~~~~~ ^
In file included from /obake/src/series.cpp:14:
In file included from /obake/include/obake/series.hpp:42:
In file included from /third_party/tbb/include/tbb/parallel_for.h:21:
In file included from /third_party/tbb/include/tbb/tbb_machine.h:235:
/third_party/tbb/include/tbb/machine/linux_intel64.h:70:1: error: use of old-style cast [-Werror,-Wold-style-cast]
__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t,"")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:44:49: note: expanded from macro '__TBB_MACHINE_DEFINE_ATOMICS'
: "=a"(result), "=m"(*(volatile T*)ptr)                    \
^            ~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:70:1: error: use of old-style cast [-Werror,-Wold-style-cast]
__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t,"")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:45:62: note: expanded from macro '__TBB_MACHINE_DEFINE_ATOMICS'
: "q"(value), "0"(comparand), "m"(*(volatile T*)ptr)       \
^            ~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:70:1: error: use of old-style cast [-Werror,-Wold-style-cast]
__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t,"")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:54:48: note: expanded from macro '__TBB_MACHINE_DEFINE_ATOMICS'
: "=r"(result),"=m"(*(volatile T*)ptr)                     \
^            ~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:70:1: error: use of old-style cast [-Werror,-Wold-style-cast]
__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t,"")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:55:47: note: expanded from macro '__TBB_MACHINE_DEFINE_ATOMICS'
^            ~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:70:1: error: use of old-style cast [-Werror,-Wold-style-cast]
__TBB_MACHINE_DEFINE_ATOMICS(1,int8_t,"")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/third_party/tbb/include/tbb/machine/linux_intel64.h:64:48: note: expanded from macro '__TBB_MACHINE_DEFINE_ATOMICS'
: "=r"(result),"=m"(*(volatile T*)ptr)                     \
^            ~~~
/third_party/tb
```
This is using Clang 10 on Linux. Something similar happened with Xcode on OS X 10.14.
Francesco Biscani
@bluescarni
could you run make VERBOSE=1 and check if the TBB headers are included via -I or -isystem?
and what version of TBB is this?
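For reference, the distinction matters because most compilers suppress warnings originating in headers pulled in via -isystem, so -Werror never fires on them. In CMake that corresponds to marking the include directories as SYSTEM; a sketch (hypothetical target and variable names):

```cmake
# SYSTEM makes CMake emit -isystem instead of -I for these headers,
# so -Werror does not trip on warnings inside the TBB sources.
target_include_directories(obake SYSTEM PRIVATE "${TBB_INCLUDE_DIR}")
```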
Andrew Corrigan
@andrewcorrigan
The TBB version is the CMake-ified version from https://github.com/wjakob/tbb
Francesco Biscani
@bluescarni

It's not a problem in principle to selectively deactivate those warnings for the TBB headers, but I am a bit baffled that I cannot reproduce it, and that similar warnings are not being produced by, e.g., Boost or the other libraries obake depends on. It makes me think there's something specific about the way TBB is being included in your setup.

It could also be that TBB is being configured/built in a different way and on my system the offending code is bracketed in the dead branch of some #ifdef.

Andrew Corrigan
@andrewcorrigan
it's included via -I btw
Obake is great, thanks again. I'm currently trying to compile it in VS2019 (/std:c++17 /Zc:__cplusplus) and am encountering some errors. The documentation says this compiler should be supported, but I wasn't sure if it is in practice. Would you be interested in me reporting errors with reproducers to the issue tracker?
Francesco Biscani
@bluescarni

What version of VS2019 is this? There are a couple of CI builds using MSVC 2019 that work.

But in my experience MSVC tends to introduce regressions relatively frequently, so perhaps something changed in the meantime
Andrew Corrigan
@andrewcorrigan
16.7.2
cmake reports the compiler version as: "MSVC 19.27.29110.0".
hopefully one of those versions is meaningful
Francesco Biscani
@bluescarni
ok, in the continuous integration pipeline we test some 19.26.xxx versions, so it's possible that you are experiencing a compiler regression
Andrew Corrigan
@andrewcorrigan
I see, that's unfortunate
I understand MSVC breaks a lot (I had to implement a bunch of workarounds in my own code to get things working). Is there a way for CI to test newer versions as well, to at least know where things stand?