Just as a benchmark, will multiplication of two huge univariate polynomials suffice?

Or are there any other solid benchmarks I could use?

@certik Could we meet sometime to discuss next tasks in hand?

Sure

Is it fine now?

@Sumith1896 there are plenty of benchmarks for univariate polynomials. However, I would wait until multivariate polynomials are implemented and benchmark those first

yes, now is fine.

Yes, that's why I didn't implement any

Also the current implementation might not give the expected speed.

we are using

`std::map`

that's why.
Next I'll have to implement

`Polynomial`

right?
For the multivariate one, you can reuse what's already in SymEngine

Right.

The one in

`rings`

?

right

that one is pretty fast

further improvements to it are to use the Kronecker substitution and a faster integer implementation

I'll look into that.

as well as faster hashmap.

but even if you take it as it is, it will be decent.

@shivamvats and I will work on the hashmap this weekend.

Cool.

Could you direct me to SymPy's sparse implementation?

it's in sympy/polys

SymPy has both, right?

The benchmark for multivariate in symengine is in benchmarks/expand2b.cpp

I was probably looking at the dense functions in SymPy; I'll have to look more to find the sparse ones

I'll have a look at expand2b.cpp

@shivamvats is working on the sparse one in sympy

Cool

Before you write lots of code, why don't we try to speed it up first?

(the expand2b benchmark)

One easy thing to do is to use the integer class from piranha

and just see how the performance improves

So I should modify

`rings`

?
Yes, just temporarily, to see if the speed improves

Essentially, we should first figure out exactly what needs to be done to attain optimal speed

Okay, and what about the hashmap in Piranha?

and only then wrap it up in Polynomial.

The hashmap from Piranha would also provide a speedup, but I expect it might be a bit tricky to reuse

I know the integer from Piranha is just a simple class that you use instead of mpz_class

Okay, and we'll also have our own hashmap implementation to take care of in the future

Yes, in the future.

See here, I've done most of what we just talked about:

Why don't you send a separate PR, just to include Piranha in cmake (code is in my branch)

That can be merged. Then one can easily use Piranha in symengine in other code to test it out.

Okay, I'll do that

I'll look into your work too

The other improvement is the kronecker substitution (packing of the exponents)

And use it in the hashtable, right?

right.

But I have a doubt here

When we use Kronecker in multiplication, both polynomials need to be known in order to determine the point at which they have to be evaluated.

How do we go about selecting a point given a single polynomial?

There are two Kronecker substitutions, I think. One is to select a point and then only multiply large numbers

that can be used especially for univariate ones.
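A minimal sketch of that first variant, for intuition: evaluate both univariate polynomials at x = 2^b and do a single integer multiplication, then read the product's coefficients back out of the b-bit groups. `uint64_t` stands in for a big-integer type like `mpz_class` here (the function names are illustrative, not SymEngine's API), so this only works for tiny inputs:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Evaluate p(2^b), where p is given by its coefficient vector
// (p[i] is the coefficient of x^i). Assumes each coefficient
// fits in b bits.
uint64_t eval_at_power_of_two(const std::vector<uint64_t> &p, unsigned b) {
    uint64_t result = 0;
    for (std::size_t i = 0; i < p.size(); ++i)
        result |= p[i] << (b * i);
    return result;
}

// Multiply p and q via one integer multiplication, then unpack.
// b must be large enough that no coefficient of the *product*
// exceeds 2^b - 1: that is exactly the point-selection question.
std::vector<uint64_t> kronecker_mul(const std::vector<uint64_t> &p,
                                    const std::vector<uint64_t> &q,
                                    unsigned b) {
    uint64_t prod = eval_at_power_of_two(p, b) * eval_at_power_of_two(q, b);
    std::vector<uint64_t> r(p.size() + q.size() - 1);
    const uint64_t mask = (uint64_t(1) << b) - 1;
    for (auto &c : r) {
        c = prod & mask;  // each b-bit group is one coefficient
        prod >>= b;
    }
    return r;
}
```

For example, (1 + 2x)(3 + x) with b = 8 packs to 513 * 259 = 132867, which unpacks to the coefficients {3, 7, 2}, i.e. 3 + 7x + 2x^2.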

But the one I want to use here is the packing of exponents trick.

You pack the exponents of a given term into a machine integer

Oh, exponent packing. I think I have some references in my proposal; I'll look into that

see section 5.1 in Monagan, M., & Pearce, R. (2014). The design of Maple’s sum-of-products and POLY data structures for representing mathematical objects. ACM Communications in Computer Algebra, 48(4), 166–186.

Thanks, I'll have a look

I sent the article to your email

Thanks

I'll use

`mpz_class`

for storing that packed result for now
we can switch to faster options later

Interesting idea, but I don't know if mpz_class would be faster than a vector of machine integers (that's what we use now)

This is to prevent the overflow

say it doesn't fit

then we have to switch to a tuple, right?

Right

so the polynomial must support both.

I would pack directly to machine long long int.

(use the 64 bit integer)

Okay

This packing code would be very useful, and it is well isolated from the rest.

So it should take in a monomial with coefficient one and return a long long int

Anything else I should consider now?

So the plan would be:

1) send a PR for Piranha in cmake

2) code for packing exponents into machine int64 (send a PR and we'll discuss details on it as we go)

3) try to use Piranha's integer to see how it performs -- we'll write our own class Integer, in which we switch between int and mpz_class automatically, and we should use it everywhere in SymEngine instead of mpz_class.

4) I would tackle faster hash table later, but if you are interested, we can do it as well.

We'll compare against Piranha, and if we get similar timings, then we can start putting it into the Polynomial class.
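A rough sketch of the switching Integer class mentioned in point 3): keep values in a machine `int64_t` while they fit and promote on overflow. The real class would fall back to `mpz_class`; here `__int128` stands in for the big type (and `__builtin_add_overflow` is a GCC/Clang builtin) just to keep the sketch self-contained. This is an illustration of the idea, not SymEngine code:

```cpp
#include <cassert>
#include <cstdint>

// Toy Integer: int64_t fast path, promotes to a "big" type on
// overflow. mpz_class would replace __int128 in real code.
class Integer {
    bool small_ = true;
    int64_t s_ = 0;
    __int128 b_ = 0;  // used only after promotion

public:
    Integer(int64_t v) : s_(v) {}

    bool is_small() const { return small_; }

    __int128 value() const { return small_ ? __int128(s_) : b_; }

    Integer &operator+=(const Integer &o) {
        if (small_ && o.small_) {
            int64_t r;
            // Overflow-checked add on the fast path.
            if (!__builtin_add_overflow(s_, o.s_, &r)) {
                s_ = r;
                return *this;
            }
        }
        // Slow path: promote to the big representation.
        b_ = value() + o.value();
        small_ = false;
        return *this;
    }
};
```

The point of such a class is that most polynomial coefficients in practice are small, so the common case never touches the big-integer allocator.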

Piranha int will be used as coefficients of the

`Polynomial`

, right? I.e., the values of the hashmap
As to 2), yes, it should take a vector of ints (the monomial exponents) and return a single machine integer. Also provide a function that checks whether they fit. What happens is that we check that they fit before doing the multiplication; during multiplication, no checking is done (for speed). Essentially, one checks while converting the polynomial from the std::vector<int> representation to the single-int representation. Then one has two polynomials in the machine-int representation, and one checks before multiplying whether they fit.
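The conversion just described could be sketched like this, assuming a fixed number of bits per exponent (the constant and function names are illustrative, not SymEngine's actual API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative: 16 bits per exponent allows 4 variables in 64 bits.
constexpr unsigned BITS_PER_EXPONENT = 16;

// Checked once, at conversion time; the multiplication hot loop
// does no checking.
bool fits(const std::vector<int> &exps) {
    if (exps.size() * BITS_PER_EXPONENT > 64)
        return false;
    for (int e : exps)
        if (e < 0 || e >= (1 << BITS_PER_EXPONENT))
            return false;
    return true;
}

// Pack a monomial's exponent vector into one machine integer.
uint64_t pack(const std::vector<int> &exps) {
    uint64_t key = 0;
    for (std::size_t i = 0; i < exps.size(); ++i)
        key |= uint64_t(exps[i]) << (BITS_PER_EXPONENT * i);
    return key;
}

// Convert back to the std::vector<int> representation.
std::vector<int> unpack(uint64_t key, std::size_t nvars) {
    std::vector<int> exps(nvars);
    const uint64_t mask = (uint64_t(1) << BITS_PER_EXPONENT) - 1;
    for (std::size_t i = 0; i < nvars; ++i)
        exps[i] = int((key >> (BITS_PER_EXPONENT * i)) & mask);
    return exps;
}
```

The payoff is that multiplying two monomials becomes a single integer addition of their packed keys, e.g. `pack({1, 2}) + pack({3, 4}) == pack({4, 6})`, provided the fit check has guaranteed no bit group can overflow into its neighbor.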

And the key will be the int64 we are talking about

Yes, that's correct.

We should support polynomials with std::vector<int> keys, as they are now, and then also with packed exponents and be able to convert between them.

So that the user can choose which ones to use.

Yes

Sounds good to me

I think my upcoming tasks are clear now

I'll send in PRs for the above tasks, we'll discuss there

Once I am done with them, we can have our next meeting

Thanks @certik

Excellent.

There will be iterations in the PRs

Cool, Thanks

Thanks for your work. This will be very very useful.

@certik @thilinarmtb For some functions, like atanh and atan, I've added more than one algorithm to compute the series (formula-based or trick-based). I tried to benchmark them, but have not yet been able to make a proper comparison (some work better in certain cases, and so on).

This paper says Newton's method works better in certain cases. I think it is better to keep a single algo for now. We can add other algos once we figure out the conditions in which they are more efficient. What do you suggest?

I have made the changes you wanted in sympy/sympy#9262

I have also separated the formula-based algo from the code of

`rs_atan`

and `rs_atanh`

. Now, we can test them separately.
Btw, can we use Newton's method in Sympy?