These are chat archives for symengine/symengine

23rd Apr 2016
Shivam Vats
@shivamvats
Apr 23 2016 04:01
@irislq Could you tell me which algorithm you want to know about?
I think it would be nice to first post the benchmarks of expansion using the new class on a few expressions.
Iris Lui
@irislq
Apr 23 2016 04:03
@shivamvats I did, right here. I made it just recently: https://github.com/symengine/symengine/wiki/Series-expansion-benchmarks
Shivam Vats
@shivamvats
Apr 23 2016 04:04
Cool!
From what I can see, most of the optimizations for the elementary functions are already implemented in series.h.
Iris Lui
@irislq
Apr 23 2016 04:06
Where did you find out how to do this implementation? Was it from when we last talked, when you referred to this? https://github.com/sympy/sympy/blob/master/sympy/polys/ring_series.py#L1910
Shivam Vats
@shivamvats
Apr 23 2016 04:07
Yes, there are many ways you can optimize for complicated expressions (involving fractional exponents or if terms cancel).
For the expression that you have tried, I don't think I have any special optimization in SymPy.
Iris Lui
@irislq
Apr 23 2016 04:13
But what about what was done with minimizing precision?
Shivam Vats
@shivamvats
Apr 23 2016 04:22
The trick I told you about (L1910) helps if one of the series being multiplied has a high minimum power (say x**10 + x**13). In those cases you can reduce the precision of the other series. For example, (x**10 + x**13) * sin(x). If I ask for a precision of, say, 20 in the overall series, I need to expand fewer terms for sin.
This message was deleted
I get an order 20 final series with an order 9 sin series.
Iris Lui
@irislq
Apr 23 2016 04:24
Oops, did I delete that last message?
Shivam Vats
@shivamvats
Apr 23 2016 04:24
Nope, I did.
In [25]: rs_sin(x,x,20)*(x**10+x**13)
Out[25]: -1/121645100408832000*x**32 + 1/355687428096000*x**30 - 1/121645100408832000*x**29 - 1/1307674368000*x**28 + 1/355687428096000*x**27 + 1/6227020800*x**26 - 1/1307674368000*x**25 - 1/39916800*x**24 + 1/6227020800*x**23 + 1/362880*x**22 - 1/39916800*x**21 - 1/5040*x**20 + 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11

In [26]: rs_trunc(_,x,20)
Out[26]: 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11

In [32]: rs_sin(x,x, 10)*(x**10+x**13)
Out[32]: 1/362880*x**22 - 1/5040*x**20 + 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11

In [33]: rs_trunc(_,x,20)
Out[33]: 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11
So, I need an order 10 sin series.
rs_trunc truncates the series.
What I mean is this trick won't help in your benchmark example.
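To make the trick concrete outside of the IPython session above, here is a minimal self-contained sketch using plain coefficient lists (index = power of x). All function names here are illustrative, not the sympy ring_series or SymEngine API.

```python
# Sketch of the precision-reduction trick: when one factor has a high
# minimum power m, the other series only needs to be expanded to prec - m.
from fractions import Fraction
from math import factorial

def sin_series(prec):
    """Coefficients of sin(x), truncated below x**prec."""
    coeffs = [Fraction(0)] * prec
    k = 0
    while 2 * k + 1 < prec:
        coeffs[2 * k + 1] = Fraction((-1) ** k, factorial(2 * k + 1))
        k += 1
    return coeffs

def mul_trunc(a, b, prec):
    """Multiply two coefficient lists, dropping terms of degree >= prec."""
    out = [Fraction(0)] * prec
    for i, ca in enumerate(a):
        if ca == 0:
            continue
        for j, cb in enumerate(b):
            if i + j >= prec:
                break
            out[i + j] += ca * cb
    return out

def min_power(p):
    """Lowest power with a nonzero coefficient."""
    return next(i for i, c in enumerate(p) if c != 0)

prec = 20
# x**10 + x**13 as a coefficient list
p = [Fraction(0)] * prec
p[10] = p[13] = Fraction(1)

# Naive: expand sin to the full precision 20, multiply, truncate.
naive = mul_trunc(sin_series(prec), p, prec)

# Trick: p's lowest power is 10, so sin only needs prec - 10 = 10 terms.
reduced = mul_trunc(sin_series(prec - min_power(p)), p, prec)

assert naive == reduced  # same truncated product, fewer sin terms expanded
```

Both products agree term by term with the Out[26]/Out[33] results above (for example, the x**11, -1/6 x**13, and 1/362880 x**19 terms).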
Iris Lui
@irislq
Apr 23 2016 04:27
Alright, I see
Shivam Vats
@shivamvats
Apr 23 2016 04:28
Btw, is Flint 10x faster than Piranha?
Iris Lui
@irislq
Apr 23 2016 04:28
Yes
Shivam Vats
@shivamvats
Apr 23 2016 04:28
Wow! Nice.
What we can do is try to match the performance of Piranha.
Iris Lui
@irislq
Apr 23 2016 04:29
Sounds like a plan, we'll take a look at that
Shivam Vats
@shivamvats
Apr 23 2016 04:30
@Sumith1896 Do you know about the optimizations in Piranha's series expansion?
@irislq Could you also compare the performance of our and Piranha's polynomial operations?
Iris Lui
@irislq
Apr 23 2016 04:33
Sure
Iris Lui
@irislq
Apr 23 2016 05:02
What I did was create both Piranha's polynomial (piranha::polynomial<piranha::rational, piranha::monomial<short>> p("x")) and UnivariatePolynomial (SymEngine::UnivariatePolynomial::create(x, {0, 1}))
And calculated the time it took for both to do this:
for (int i = 0; i < 50000; i++)
    auto ex = mul(p, p);
Piranha: 177ms
UnivariatePolynomial: 44ms
Shivam Vats
@shivamvats
Apr 23 2016 06:34
What I suggest is creating a large polynomial and then checking the timings for multiplication and addition. Such a small polynomial might have a fixed per-call overhead, which could give a misleading picture.
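The fixed-overhead point can be illustrated with a toy benchmark that times the same operation on a tiny and a large polynomial, so per-call overhead and asymptotic cost can be told apart. Plain Python coefficient lists stand in for the Piranha/SymEngine classes here; all names are illustrative, not the actual APIs.

```python
import timeit

def poly_mul(a, b):
    """Schoolbook multiplication of dense coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

small = [0, 1]               # the polynomial x, as in the chat's benchmark
large = list(range(1, 201))  # a dense degree-199 polynomial

t_small = timeit.timeit(lambda: poly_mul(small, small), number=50000)
t_large = timeit.timeit(lambda: poly_mul(large, large), number=100)

print(f"small: {t_small:.3f}s for 50000 calls "
      f"({t_small / 50000 * 1e6:.2f} us/call)")
print(f"large: {t_large:.3f}s for 100 calls "
      f"({t_large / 100 * 1e3:.2f} ms/call)")
```

On the tiny input, almost all of the per-call time is function-call and allocation overhead; on the large input, the O(n^2) coefficient arithmetic dominates, which is the regime a library comparison should measure.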
Sumith Kulal
@Sumith1896
Apr 23 2016 06:36
@shivamvats I'm not aware of this