@irislq Could you tell me which algorithm you want to know about?

I think it would be nice to first post the benchmarks of expansion using the new class on a few expressions.

@shivamvats I did, right here; I made it just recently as well: https://github.com/symengine/symengine/wiki/Series-expansion-benchmarks

Cool!

From what I can see, most of the optimizations for the elementary functions are already implemented in `series.h`.
Like, where did you find out how to do this implementation? It was from when we last talked, when you referred to this: https://github.com/sympy/sympy/blob/master/sympy/polys/ring_series.py#L1910

Yes, there are many ways you can optimize for complicated expressions (involving fractional exponents or if terms cancel).

For the expression that you have tried, I don't think I have any special optimization in SymPy.

But what about what was done with minimizing precision?

The trick I told you about (L1910) helps if one of the series being multiplied has a high minimum power (say x**10 + x**13). In those cases you can reduce the precision of the other series. For example, (x**10 + x**13) * sin(x): if I ask for a precision of, say, 20 in the overall series, I need to expand fewer terms of sin.

I get an order 20 final series with an order 9 sin series.

Oops, did I delete that last message?

Nope, I did.

```
In [25]: rs_sin(x,x,20)*(x**10+x**13)
Out[25]: -1/121645100408832000*x**32 + 1/355687428096000*x**30 - 1/121645100408832000*x**29 - 1/1307674368000*x**28 + 1/355687428096000*x**27 + 1/6227020800*x**26 - 1/1307674368000*x**25 - 1/39916800*x**24 + 1/6227020800*x**23 + 1/362880*x**22 - 1/39916800*x**21 - 1/5040*x**20 + 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11
In [26]: rs_trunc(_,x,20)
Out[26]: 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11
In [32]: rs_sin(x,x, 10)*(x**10+x**13)
Out[32]: 1/362880*x**22 - 1/5040*x**20 + 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11
In [33]: rs_trunc(_,x,20)
Out[33]: 1/362880*x**19 + 1/120*x**18 - 1/5040*x**17 - 1/6*x**16 + 1/120*x**15 + x**14 - 1/6*x**13 + x**11
```

So, I need an order 10 sin series.

`rs_trunc` truncates the series.

What I mean is this trick won't help in your benchmark example.
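The precision-reduction trick can be illustrated without ring_series at all. Below is a minimal pure-Python sketch (my own toy code, not the SymPy or SymEngine implementation): series are dicts mapping exponent to coefficient, and because the other factor has minimum power 10, a sin series of precision 10 produces the same order-20 truncated product as one of precision 20.

```python
from fractions import Fraction
from math import factorial

def mul_trunc(a, b, prec):
    """Multiply two series (dicts exponent -> coeff), dropping terms of order >= prec."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            e = ea + eb
            if e < prec:
                out[e] = out.get(e, 0) + ca * cb
    return {e: c for e, c in out.items() if c != 0}

def sin_series(prec):
    """Taylor series of sin(x) truncated at order prec."""
    return {k: Fraction((-1) ** (k // 2), factorial(k))
            for k in range(1, prec, 2)}

p = {10: 1, 13: 1}   # x**10 + x**13, minimum power 10
prec = 20

full = mul_trunc(sin_series(prec), p, prec)        # sin to precision 20
short = mul_trunc(sin_series(prec - 10), p, prec)  # precision 10 is enough
print(full == short)  # -> True
```

The `full == short` check mirrors the two `rs_trunc(_, x, 20)` outputs in the session above agreeing term by term.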

Alright, I see

Btw, is Flint 10x faster than Piranha?

Yes

Wow! Nice.

What we can do is try to match the performance of Piranha.

Sounds like a plan, we'll take a look at that

@Sumith1896 Do you know about the optimizations in Piranha's series expansion?

@irislq Could you also compare the performance of our and Piranha's polynomial operations?

Sure

What I did was create both Piranha's polynomial (`piranha::polynomial<piranha::rational, piranha::monomial<short>> p("x")`) and a UnivariatePolynomial (`SymEngine::UnivariatePolynomial::create(x, {0, 1})`)

And calculated the time it took for both to do this:

```
for (int i = 0; i < 50000; i++) {
    auto ex = mul(p, p);
}
```

Piranha: 177ms

UnivariatePolynomial: 44ms

What I suggest is to create a large polynomial and then check the timings for multiplication and addition. Such a small polynomial might have a fixed overhead, which may give a misleading picture.
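The shape of the suggested benchmark could look like the sketch below. This is plain Python with a toy dense multiplication standing in for the Piranha/SymEngine calls (the helper names are made up), just to show the point: timing a tiny polynomial many times mostly measures per-call overhead, while a large polynomial exercises the multiplication algorithm itself.

```python
import time

def poly_mul(a, b):
    """Dense coefficient-list multiplication: index i holds the x**i coefficient."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def bench(degree, reps):
    """Time `reps` squarings of a polynomial of the given degree."""
    p = list(range(1, degree + 2))  # degree + 1 nonzero coefficients
    start = time.perf_counter()
    for _ in range(reps):
        poly_mul(p, p)
    return time.perf_counter() - start

# Tiny polynomial, many reps vs. large polynomial, few reps.
small = bench(1, 50000)
large = bench(200, 50)
print(f"degree 1 x 50000: {small:.3f}s, degree 200 x 50: {large:.3f}s")
```

For the real comparison one would swap `poly_mul` for `piranha::polynomial` and `SymEngine::UnivariatePolynomial` multiplication and use `std::chrono` instead, keeping the same small-vs-large structure.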

@shivamvats I'm not aware of this