These are chat archives for symengine/symengine

11th Jun 2015
Ondřej Čertík
@certik
Jun 11 2015 04:20
@shivamvats so there is EX domain (for expression domain)
Here is how to use it:
In [1]: from sympy.polys.ring_series import *

In [2]: R, x = ring("x", EX)

In [3]: p = x**4*(sin(y)+1) +2*x**3 + 3*x + 4

In [4]: rs_pow(p, 2, x, 9)
Out[4]: EX(sin(y)**2 + 2*sin(y) + 1)*x**8 + EX(4*sin(y) + 4)*x**7 + EX(4)*x**6 + EX(6*sin(y) + 6)*x**5 + EX(8*sin(y) + 20)*x**4 + EX(16)*x**3 + EX(9)*x**2 + EX(24)*x + EX(16)

In the expression domain, you get a failure like this one:

In [7]: rs_cos(rs_cos(x, x, 4), x, 4)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-7-9b319d204133> in <module>()
----> 1 rs_cos(rs_cos(x, x, 4), x, 4)

/home/ondrej/repos/sympy/sympy/polys/ring_series.pyc in rs_cos(p, iv, prec)
    788         zm = ring.zero_monom
    789         c = S(p[zm])
--> 790         if not c.is_real:
    791             raise NotImplementedError
    792         p1 = p - c

AttributeError: 'Expression' object has no attribute 'is_real'

I think the reason is that the coefficients are stored as some kind of Expression instance.

Ondřej Čertík
@certik
Jun 11 2015 04:30
@shivamvats shouldn't [5] below raise an exception that the answer cannot be stored in a ZZ polynomial? I think that would be the most robust solution.
In [1]: from sympy.polys.ring_series import *

In [2]: R, x = ring("x", QQ)

In [3]: rs_cos(x, x, 5)
Out[3]: 1/24*x**4 - 1/2*x**2 + 1

In [4]: R, x = ring("x", ZZ)

In [5]: rs_cos(x, x, 5)
Out[5]: 1
I should have posted this into the sympy channel, my apologies.
Shivam Vats
@shivamvats
Jun 11 2015 04:33
@certik Great! Thanks a lot!
I didn't know about EX
Ondřej Čertík
@certik
Jun 11 2015 04:54
I found out about it by reading the comments in PR 690.
Sumith Kulal
@Sumith1896
Jun 11 2015 11:32
Has anybody compiled Piranha in Release mode? For me the build breaks.
It is fine in Debug mode, though.
Sumith Kulal
@Sumith1896
Jun 11 2015 12:27
I got it to build

I have the following queries

sumith@sumith-Lenovo-Z50-70:~/github/piranha/tests$ ./fateman1_perf 
Running 1 test case...
 0.705133s wall, 2.470000s user + 0.000000s system = 2.470000s CPU (350.3%)

What time should be taken from here for comparison?

I am aware that fateman1 is not the test to compare expand2b with; I just wanted to know how to time Piranha benchmarks.
Shivam Vats
@shivamvats
Jun 11 2015 13:08
@certik I think you mean PR 609. I did see SR or the symbolic ring being used, but I thought that was something we had to implement.
Ondřej Čertík
@certik
Jun 11 2015 15:00
@isuruf to simplify c2py, just lookup the SymEngine type in a dictionary. Perhaps we can write a Cython function that takes the C type and converts to Python type for each type. Then c2py just looks it up in dictionary based on the SymEngine type, and calls the function. That will be clean, fast and extensible.
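
A hypothetical C++ sketch of that dispatch idea (not the actual symengine C API or the real Cython c2py — the type ids and converter bodies here are stand-ins): build the type-id-to-converter table once, then conversion is a single hash lookup plus a call.

```cpp
// Hypothetical sketch of table-based dispatch (NOT symengine's real API):
// map each type id to a conversion function, registered up front.
#include <functional>
#include <string>
#include <unordered_map>

enum TypeID { SYMBOL, INTEGER };  // stand-ins for symengine type ids

using Converter = std::function<std::string(int)>;

static const std::unordered_map<int, Converter> converters = {
    {SYMBOL,  [](int) { return std::string("PySymbol"); }},
    {INTEGER, [](int) { return std::string("PyInteger"); }},
};

// c2py analogue: no if/elif chain, just a table lookup and a call.
std::string c2py(int type_id, int obj) {
    return converters.at(type_id)(obj);
}
```

Adding a new SymEngine type then only requires registering one more entry, which is what makes this extensible compared to a hard-coded if/elif chain.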
@shivamvats I think Mario used SR, but Mateusz mentions that he already implemented EX for the same thing.
Isuru Fernando
@isuruf
Jun 11 2015 15:03
I'll do that. I tried the same when we set up the type_id system, but it was slower than the if/elif chain. I'll try again.
Ondřej Čertík
@certik
Jun 11 2015 15:03

@Sumith1896 yes, that's correct. The only caveat is that it runs in parallel, so you want to run this on 1 core only, as follows:

certik@redhawk:~/repos/piranha/tests(development)$ ./fateman1_perf 
Running 1 test case...
 0.345855s wall, 2.520000s user + 0.210000s system = 2.730000s CPU (789.3%)

*** No errors detected
Freeing MPFR caches.
Setting shutdown flag.
certik@redhawk:~/repos/piranha/tests(development)$ ./fateman1_perf 4
Running 1 test case...
 0.580437s wall, 1.810000s user + 0.030000s system = 1.840000s CPU (317.0%)

*** No errors detected
Freeing MPFR caches.
Setting shutdown flag.
certik@redhawk:~/repos/piranha/tests(development)$ ./fateman1_perf 1
Running 1 test case...
 1.383419s wall, 1.350000s user + 0.030000s system = 1.380000s CPU (99.8%)

*** No errors detected
Freeing MPFR caches.
Setting shutdown flag.

Use the version with 1.

And then just use the wall time. Run it a couple of times and use the lowest time.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:04
Okay
Ondřej Čertík
@certik
Jun 11 2015 15:04
@isuruf I doubt that's slower, but you need to make sure there is little Python overhead in the Cython code. Post a PR, I can have a look.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:05
I have implemented expand2c, a very bare version though
Ondřej Čertík
@certik
Jun 11 2015 15:05
What did you use to test the speed of it?
@Sumith1896 excellent. Where is the code?
How fast does it run, compared to Piranha?
Sumith Kulal
@Sumith1896
Jun 11 2015 15:06
I didn't compare with Piranha due to the lack of a similar benchmark there.
Ondřej Čertík
@certik
Jun 11 2015 15:06
The benchmark should be fateman1_perf, shouldn't it?
Just modify the file fateman1.hpp, you can make it benchmark whatever you need.
(i.e. make sure the power is the same, as well as the symbols)
Sumith Kulal
@Sumith1896
Jun 11 2015 15:07
 f * (f+1)
 where f = (1+x+y+z+t)**20
This is fateman1_perf
I will modify
but there are 3 fateman1 benchmarks there
maybe with different cases
Will fateman2 do?
Ondřej Čertík
@certik
Jun 11 2015 15:08
Use the perf version. They all benchmark the same thing, just use different algorithms, perf is the one with packed exponents.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:09
Okay

One last thing

    BOOST_CHECK_EQUAL((fateman1<integer,kronecker_monomial<>>().size()),135751u);

Do you have any idea of the number here?

Ondřej Čertík
@certik
Jun 11 2015 15:09
Just look into fateman2.hpp to see what fateman2 is doing
Yes, the number says how many terms there are. So if you modify the benchmark, the test will fail, but you can ignore it, the timing should not be affected.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:10
Cool
Ondřej Čertík
@certik
Jun 11 2015 15:10
Or you can put there the number of terms from our test.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:10
As of now, we are 20 ms (on average) faster than expand2b
Ondřej Čertík
@certik
Jun 11 2015 15:11
Can you post the timings on your machine for expand2b and expand2c?
So that I can get an idea.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:11
Yes in a moment
Ondřej Čertík
@certik
Jun 11 2015 15:11
Post also expand2 itself.
Each machine has different speed. Make sure you compile symengine in Release mode without the ASSERT checking.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:15
While I recompile and repost the timings, you can have a look at the timings so far here
Ondřej Čertík
@certik
Jun 11 2015 15:21
On my machine, expand2 is roughly 819ms and expand2b is 120ms.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:24
@certik Have a look now
Ondřej Čertík
@certik
Jun 11 2015 15:25
It's interesting that on your machine, expand2 is a bit slower than on my machine, but expand2b is faster.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:27
I haven't used Piranha int yet, though
That will speed it up a bit more, will get back with the results tomorrow
Ondřej Čertík
@certik
Jun 11 2015 15:27
There is also a nice speedup using WITH_TCMALLOC=yes, that @isuruf implemented.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:28
I'll put results with tcmalloc on and off
but won't that speedup show up in all three benchmarks?
Ondřej Čertík
@certik
Jun 11 2015 15:29
It should be, yes.
Probably mainly in expand2, since it does lots of dynamic allocations.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:30
Yes
Ondřej Čertík
@certik
Jun 11 2015 15:32

On my machine, Piranha, with this patch:

--- a/tests/fateman1.hpp
+++ b/tests/fateman1.hpp
@@ -32,10 +32,10 @@ template <typename Cf,typename Key>
 inline polynomial<Cf,Key> fateman1(unsigned long long factor = 1u)
 {
        typedef polynomial<Cf,Key> p_type;
-       p_type x("x"), y("y"), z("z"), t("t");
-       auto f = x + y + z + t + 1;
+       p_type x("x"), y("y"), z("z"), w("w");
+       auto f = x + y + z + w;
        auto tmp(f);
-       for (auto i = 1; i < 20; ++i) {
+       for (auto i = 1; i < 15; ++i) {
                f *= tmp;
        }
        if (factor > 1u) {
@@ -43,7 +43,7 @@ inline polynomial<Cf,Key> fateman1(unsigned long long factor = 1u)
        }
        {
        boost::timer::auto_cpu_timer t;
-       return f * (f + 1);
+       return f * (f + w);
        }
 }

Takes only 0.013658s

So that's 13.6ms
Sumith Kulal
@Sumith1896
Jun 11 2015 15:33
That's very good speed
Can that be achieved by using Piranha int?
Ondřej Čertík
@certik
Jun 11 2015 15:34
Actually, I ran in parallel by mistake. On 1 core it is 17.5ms
Isuru Fernando
@isuruf
Jun 11 2015 15:34
@certik, what do you mean by current solution in #466?
Ondřej Čertík
@certik
Jun 11 2015 15:34
On your machine, we are now at 70ms.
So we are 4x slower. Yes, piranha::integer should help a lot.
Can you post your code?
I need to look how you do the packing.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:35
Still might be hard to achieve, we will have to think of further optimization.
I'll give a report with piranha::integer, we'll think then.
The code is up in the PR sympy/symengine#470
Ondřej Čertík
@certik
Jun 11 2015 15:37
Thanks. We can also benchmark Piranha with mpz_class.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:38
Shouldn't take long, you have done most of the work
let me get to it
Ondřej Čertík
@certik
Jun 11 2015 15:38

The change is roughly like this:

--- a/tests/fateman1_perf.cpp
+++ b/tests/fateman1_perf.cpp
@@ -30,6 +30,8 @@
 #include "../src/mp_integer.hpp"
 #include "../src/settings.hpp"

+#include <gmpxx.h>
+
 using namespace piranha;

 // Fateman's polynomial multiplication test number 1. Calculate:
@@ -42,5 +44,5 @@ BOOST_AUTO_TEST_CASE(fateman1_test)
        if (boost::unit_test::framework::master_test_suite().argc > 1) {
                settings::set_n_threads(boost::lexical_cast<unsigned>(boost::unit_test::framework::master_test_suite().argv[1u]));
        }
-       BOOST_CHECK_EQUAL((fateman1<integer,kronecker_monomial<>>().size()),135751u);
+       BOOST_CHECK_EQUAL((fateman1<mpz_class,kronecker_monomial<>>().size()),135751u);
 }

It currently fails to compile, but I think I got it working in the past.

@Sumith1896 I looked at #470, it looks good.
The packing function is not being called in the benchmark itself, so we can worry about optimizing it later.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:42
I'll benchmark both as you said
Ondřej Čertík
@certik
Jun 11 2015 15:46

We are essentially benchmarking the following code:

void poly_mul2(const umap_ull_mpz &A, const umap_ull_mpz &B, umap_ull_mpz &C)
{
    for (auto &a: A) {
        for (auto &b: B) {
            C[a.first + b.first] += a.second*b.second;
        }
    }
}

So anything else is not important. In this code, we use the following data structure for the hash table:

typedef std::unordered_map<unsigned long long, mpz_class> umap_ull_mpz;

So the a.first + b.first is just machine integers addition, and += a.second*b.second is using mpz_class. I think there is further optimization for the +=a*b operation in Piranha, if I remember correctly. I'll try to get Piranha working with just the mpz_class, then it would be a fair comparison. Then any difference must be caused by the faster hashtable in Piranha.
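
A self-contained variant of that kernel, with mpz_class swapped for plain long long so the sketch compiles without GMP; the hash-map structure and the packed-key addition are the same as above.

```cpp
// poly_mul2 with machine-integer coefficients (long long in place of
// mpz_class) so the sketch needs no GMP; the structure is unchanged.
#include <unordered_map>

typedef std::unordered_map<unsigned long long, long long> umap_ull_ll;

void poly_mul2_ll(const umap_ull_ll &A, const umap_ull_ll &B, umap_ull_ll &C)
{
    for (auto &a : A) {
        for (auto &b : B) {
            // packed-exponent addition on the key,
            // multiply-accumulate on the coefficient
            C[a.first + b.first] += a.second * b.second;
        }
    }
}
```

For example, multiplying (1 + x) by (1 + x) with keys as packed exponents gives the coefficients 1, 2, 1.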

Sumith Kulal
@Sumith1896
Jun 11 2015 15:47
Cool
You got SymEngine working with Piranha ints before, right?
Ondřej Čertík
@certik
Jun 11 2015 15:47
I didn't.
I created a simple integer class and got it working with Piranha though.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:48
Okay, I will try to
Ondřej Čertík
@certik
Jun 11 2015 15:48
I think piranha::integer should be more or less a drop in replacement for mpz_class.
Sumith Kulal
@Sumith1896
Jun 11 2015 15:55
I see this is how you assign: P[exp] = piranha::integer{coef.get_str()};
Is that the only integer constructor?
Ondřej Čertík
@certik
Jun 11 2015 15:56
Yeah, I didn't figure out how to do it otherwise
bluescarni/piranha#9
Sumith Kulal
@Sumith1896
Jun 11 2015 15:58
Okay, @bluescarni drops in, he'll update us hopefully
Till then I'll get to work with that itself
I see you dropped a message. Thanks
Ondřej Čertík
@certik
Jun 11 2015 15:59
No problem.
Sumith Kulal
@Sumith1896
Jun 11 2015 16:00
That's all I have now, I'll keep you updated
Ondřej Čertík
@certik
Jun 11 2015 16:00
Great job @Sumith1896, thanks for all the work. Very good progress.
Does expand2c get the correct number of terms?
Sumith Kulal
@Sumith1896
Jun 11 2015 16:01
Yes
That's the only test I have to check the correctness
Ondřej Čertík
@certik
Jun 11 2015 16:01
Cool.
The code that we benchmark is now super simple, so it shouldn't be hard to nail the rest of the performance. Once we get there, then we'll think how to polish it up.
Sumith Kulal
@Sumith1896
Jun 11 2015 16:04
Yes, I have couple of things in mind
But not at this stage, too early now
Ondřej Čertík
@certik
Jun 11 2015 16:05
So this line is a bit magic: C[a.first + b.first] += a.second*b.second, because if a.first + b.first is not in the hashtable yet, it first initializes the mpz number there to 0, and then adds to it.
Is that your understanding as well?
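
A minimal standalone check of that behavior (not symengine code): std::unordered_map's operator[] value-initializes a missing mapped value, so an arithmetic coefficient starts at 0 before += runs.

```cpp
// operator[] on std::unordered_map value-initializes a missing entry
// (to 0 for arithmetic types) before the += is applied.
#include <unordered_map>

long long accumulate_demo()
{
    std::unordered_map<unsigned long long, long long> C;
    C[5] += 7;  // key 5 absent: entry created as 0, then 7 added
    C[5] += 3;  // key present: plain in-place addition
    return C[5];
}
```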
Sumith Kulal
@Sumith1896
Jun 11 2015 16:07
For optimization?
Improvement here would be great.
But while packing, we needn't allocate an equal number of bits to each exponent.
I read about this part in that paper.
It will increase coverage.
Ondřej Čertík
@certik
Jun 11 2015 16:08
I think the packing is fine, what I am questioning is that it might be doing quite some work with mpz
Sumith Kulal
@Sumith1896
Jun 11 2015 16:09
Ohh wait
Ondřej Čertík
@certik
Jun 11 2015 16:09
I.e. the first time it does something like X = 0 + a.second*b.second; the second time it does X += a.second*b.second
Francesco Biscani
@bluescarni
Jun 11 2015 16:10
@certik @Sumith1896 I haven't done anything on the ctor from mpz, but it's not difficult. Just a matter of time, I hope I can have something ready by tomorrow or by the weekend max
Ondřej Čertík
@certik
Jun 11 2015 16:10
@bluescarni thanks!
Sumith Kulal
@Sumith1896
Jun 11 2015 16:10
@certik I get you
Francesco Biscani
@bluescarni
Jun 11 2015 16:10
np! Is a constructor from mpz_t good? I'd rather not touch the C++ bindings of GMP, as I am not using them anywhere at the moment
and I'd rather not include them just for this
Sumith Kulal
@Sumith1896
Jun 11 2015 16:11
@bluescarni Thanks, that'll do
@certik views?
Ondřej Čertík
@certik
Jun 11 2015 16:12
@bluescarni yes, that's perfectly fine, as we can easily get mpz_t from mpz_class.
Ondřej Čertík
@certik
Jun 11 2015 16:18
@bluescarni I see you posted a blog post about the overloading: http://bluescarni.github.io/overloading-overloaded.html
Francesco Biscani
@bluescarni
Jun 11 2015 16:18
ah yeah, meant to mention that sometime
don't know if the technique would be applicable for symengine, but it works well for piranha
Ondřej Čertík
@certik
Jun 11 2015 16:22
I like this a lot. I just don't like how long it takes to compile any file that includes Piranha headers.
Francesco Biscani
@bluescarni
Jun 11 2015 16:22
ah yeah, that's a sore point...
clang helps, but it's still inconvenient
but don't take too seriously the time it takes to compile piranha's tests
Ondřej Čertík
@certik
Jun 11 2015 16:23
Why not?
Francesco Biscani
@bluescarni
Jun 11 2015 16:23
it typically tests lots of different combinations of template arguments
let me get an example
the tests for small_vector use 7 different types for the value type
and 4 different sizes for the static storage
Ondřej Čertík
@certik
Jun 11 2015 16:25
@Sumith1896 read the section "A step further: exploiting the default implementation" in @bluescarni's blog. We should be using mpz_addmul() in the above benchmark.
Francesco Biscani
@bluescarni
Jun 11 2015 16:25
this basically means that the compiler has to compile 4*7=28 times the code you actually see
it is as if you compiled many different versions of std::vector<T,Alloc>
with different combinations of T and Alloc
Ondřej Čertík
@certik
Jun 11 2015 16:26
I see. Why would you do that?
Francesco Biscani
@bluescarni
Jun 11 2015 16:27
just to get a lot of coverage with different types
it's quite hard to come up with good testing for generic classes
so I am just carpet bombing :)
Ondřej Čertík
@certik
Jun 11 2015 16:27
I see. How about the fateman1_perf?
Because it is slow to compile as well.
Btw, the multiply_accumulate from your blog is useful for SymEngine as well.
Francesco Biscani
@bluescarni
Jun 11 2015 16:29
yes, but it's not as slow... in the perf benchmarks there's less code to compile but the optimisation level is higher, so it still takes time
Ondřej Čertík
@certik
Jun 11 2015 16:29
Btw, why don't you use -march=native in the gcc flags?
I didn't see it among the options in make VERBOSE=1, only -O3, which I thought is not enough.
Francesco Biscani
@bluescarni
Jun 11 2015 16:30
without knowing too much about symengine internals, I would envision that you can just compile all piranha functionality in a handful or even a single .cpp, so hopefully it should not impact too much compilation times
I am just using the default cmake variables for the Release profile
not sure native is supported on all arches, I think it was not on PPC when I tried some years ago
so I just stick with the vanilla options
Ondřej Čertík
@certik
Jun 11 2015 16:31
if I want to add it, where do I add it in your cmake system?
Let me benchmark it.
Francesco Biscani
@bluescarni
Jun 11 2015 16:32
there are a couple of ways
1) you set the CMAKE_CXX_FLAGS
2) or you can edit the build system and add PIRANHA_CHECK_ENABLE_CXX_FLAG(-march=native) in this file
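
Option 1 sketched concretely (the source path is hypothetical; adjust to your checkout):

```shell
# Pass the flag through CMAKE_CXX_FLAGS at configure time.
cmake -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_CXX_FLAGS="-march=native" \
      ~/repos/piranha
```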
Ondřej Čertík
@certik
Jun 11 2015 16:33
I think I just add it to the line CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG in CMakeCache.txt
Francesco Biscani
@bluescarni
Jun 11 2015 16:33
that will test that the flag is supported and enable it if it is
that should work as well
in practice whenever I need to edit flags for particular builds I just use ccmake and add the flag from the GUI
Ondřej Čertík
@certik
Jun 11 2015 16:34
I see. I use -O3 -march=native -ffast-math -funroll-loops for my production runs.
For gfortran, there is now -Ofast, which enables couple more things.
Francesco Biscani
@bluescarni
Jun 11 2015 16:35
right I have experimented in the past with those, but did not notice much of a measurable difference for the type of workloads in poly multiplication
they work great for other things though
Ondřej Čertík
@certik
Jun 11 2015 16:35
There is tons of difference on my machine.
Francesco Biscani
@bluescarni
Jun 11 2015 16:36
really? in the fateman benchmarks?
Ondřej Čertík
@certik
Jun 11 2015 16:36
Before, the ./fateman1_perf 1 took 1.39s, now it takes 1.235s
I used gcc 4.9.2
Francesco Biscani
@bluescarni
Jun 11 2015 16:36
ohh ok that's pretty interesting
Ondřej Čertík
@certik
Jun 11 2015 16:37
The -ffast-math is only for floating point, so I think that doesn't apply here (it makes a huge speedup though), the -funroll-loops usually doesn't speed it up much, but the -march=native is a big one, as it generates avx instructions if you have them.
Otherwise it doesn't.
Francesco Biscani
@bluescarni
Jun 11 2015 16:38
right it makes sense
the integer class uses the int128 type internally, which might benefit from some AVX
Ondřej Čertík
@certik
Jun 11 2015 16:39
Exactly.
I mean it's 11%, that's a lot. We care about every 1% in these benchmarks.
Francesco Biscani
@bluescarni
Jun 11 2015 16:39
yeah... now I am curious to see how it goes in the sparse ones
Ondřej Čertík
@certik
Jun 11 2015 16:41
Btw, can I use your expression patch and create a PR for symengine, under our license? I'll make a commit with your name in it.
Francesco Biscani
@bluescarni
Jun 11 2015 16:41
sure np, go ahead
it was just a quick hack, I guess it needs some work before being acceptable
Ondřej Čertík
@certik
Jun 11 2015 16:42
Regarding the symbolic ring, SymPy has ZZ (ints), QQ (rationals) and then EX (any symbolic expression) which can all be used as polynomial coefficients.
So for stuff like sin(cos(x)), the EX is needed, but for just sin(x), the QQ is needed and for exp(x), the ZZ is enough.
Francesco Biscani
@bluescarni
Jun 11 2015 16:43
right makes sense
Ondřej Čertík
@certik
Jun 11 2015 16:43
I just don't know any other way than to have all three and just switch between them, if we want good performance.
Probably allowing the user to decide on this.
Francesco Biscani
@bluescarni
Jun 11 2015 16:44
yes I agree.. for the switching it might make sense to put them in some kind of logical hierarchy? then the algorithm might "ask" the ZZ if it has the necessary features and move progressively up in the hierarchy until it finds a type which works for the task at hand?
Ondřej Čertík
@certik
Jun 11 2015 16:45
Exactly. Something like the exponent packing (Kronecker) --- you just ask it, and if it can't hold the exponents, you use std::vector<int> or something.
I think the overhead of this would be small, and the benefits big.
and the user can always say --- use QQ or EX from the beginning, then we can skip the feature checking.
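
A hedged sketch of the Kronecker-style check, assuming a hypothetical layout of 16 bits per exponent for 4 variables (not Piranha's actual encoding): the packer reports failure so the caller can fall back to an unpacked std::vector representation.

```cpp
// Hypothetical exponent packing: 4 exponents, 16 bits each, packed into
// one 64-bit key. Returns nullopt when an exponent doesn't fit, so the
// caller can fall back to an unpacked representation.
#include <array>
#include <cstdint>
#include <optional>

std::optional<std::uint64_t> pack(const std::array<std::uint32_t, 4> &e)
{
    std::uint64_t key = 0;
    for (auto x : e) {
        if (x >= (1u << 16)) return std::nullopt;  // doesn't fit: fall back
        key = (key << 16) | x;
    }
    return key;
}
```

Packed keys make monomial multiplication a single integer addition, which is exactly what the C[a.first + b.first] line in the benchmark relies on.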
Francesco Biscani
@bluescarni
Jun 11 2015 16:46
yeah probably making it manual is a better idea
just fail hard, loudly and early if you can't do the computation
Ondřej Čertík
@certik
Jun 11 2015 16:47
Also, if you already start with ZZ, and then later realize you need QQ, the conversion would be expensive (or maybe not), so the user should decide what he wants I think.
So as you said, I think rather than some automatic switching, it should just loudly fail.
And the user can either explicitly convert from ZZ to QQ, or start with QQ, depending on the problem.
So I am thinking that you start with the EX expression, like
e = (a+b+c)^20
then you convert to a polynomial, and you tell it what ring you want.
and it will just fail if it can't do it.
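
A tiny sketch of the fail-loudly conversion (the QQ struct and the to_ZZ helper are hypothetical, not symengine API): refusing to silently truncate is the check whose absence produced the rs_cos ZZ result of 1 earlier in the log.

```cpp
// Hypothetical fail-loudly coefficient conversion: throw rather than
// silently truncate when a rational is not representable in ZZ.
#include <stdexcept>

struct QQ { long num, den; };  // stand-in rational coefficient

long to_ZZ(const QQ &q)
{
    if (q.num % q.den != 0)
        throw std::runtime_error("coefficient not in ZZ; use QQ");
    return q.num / q.den;
}
```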
Francesco Biscani
@bluescarni
Jun 11 2015 16:50
yep
Ondřej Čertík
@certik
Jun 11 2015 16:50
and one (explicit) option could be "automatic", in which case it will use the fastest ring that holds it, in exchange for some overhead to determine it.
Francesco Biscani
@bluescarni
Jun 11 2015 16:50
I am doing something similar in the integration routines in Piranha
I think integration is a good example because it's not as automatic as differentiation
Ondřej Čertík
@certik
Jun 11 2015 16:51
What do you do for integration?
Francesco Biscani
@bluescarni
Jun 11 2015 16:51
so it depends a lot on the types of the expressions you are trying to integrate
it depends on the series types.. for polynomial it's rather easy
except that it won't work when you integrate x^-1
Ondřej Čertík
@certik
Jun 11 2015 16:52
Right.
Francesco Biscani
@bluescarni
Jun 11 2015 16:52
for poisson series, you have objects of the type:
cf \cdot \cos\left(x + y + x\right)
where cf is some arbitrary type
so you have different strategies depending on the type of cf
for instance
if cf is a polynomial
then you can often integrate by parts
Ondřej Čertík
@certik
Jun 11 2015 16:53
(you can edit what you post if you make a typo)
Francesco Biscani
@bluescarni
Jun 11 2015 16:53
as long as the degree of the integration variable is non-negative
ah cheers
but if cf is something else, it will perform the integration only if its derivative is null
Ondřej Čertík
@certik
Jun 11 2015 16:54
Right.
So how do you decide what data structure / type the answer is?
Francesco Biscani
@bluescarni
Jun 11 2015 16:55
I have some ugly metaprogramming (hidden from the user though) that takes all decisions at compile time
let me see
Ondřej Čertík
@certik
Jun 11 2015 16:56
So you decide at compile time?
Ondřej Čertík
@certik
Jun 11 2015 16:56
But it depends on the derivative being equal to 0, which can only be checked at runtime, can't it?
Francesco Biscani
@bluescarni
Jun 11 2015 16:56
yes
yes but at compile time I can know if the coefficient type is a polynomial or not
so I go with integration by parts
Ondřej Čertík
@certik
Jun 11 2015 16:57
I see.
Francesco Biscani
@bluescarni
Jun 11 2015 16:57
otherwise it's a hail mary and hope the derivative is null
Ondřej Čertík
@certik
Jun 11 2015 16:57
And if it is not?
Francesco Biscani
@bluescarni
Jun 11 2015 16:57
it raises a runtime error
Ondřej Čertík
@certik
Jun 11 2015 16:58
I see.
Francesco Biscani
@bluescarni
Jun 11 2015 16:58
of course you could have other strategies for other coefficient types
it's highly dependent on the expression for integration
need to go for a while, will be back later
Ondřej Čertík
@certik
Jun 11 2015 17:00
Thanks for all the info.
Francesco Biscani
@bluescarni
Jun 11 2015 17:00
np, it's interesting to talk about this
Sumith Kulal
@Sumith1896
Jun 11 2015 17:00
I'll give the blog post a read