These are chat archives for symengine/symengine

1st
Jan 2016
Isuru Fernando
@isuruf
Jan 01 2016 03:36
@rwst, can you check the following diff and get the timings for series_expansion_sincos with and without the diff?
diff --git a/symengine/expansion.cpp b/symengine/expansion.cpp
index 82d6fa8..ea631b3 100644
--- a/symengine/expansion.cpp
+++ b/symengine/expansion.cpp
@@ -591,8 +591,18 @@ umap_short_basic series(const RCP<const Basic> &ex, const RCP<const Symbol> &var
     if (is_a<Symbol>(*ex))
         return {{0, ex}};

-    pp_t::set_auto_truncate_degree(prec);
-    return pp2vec(_series(ex, var, prec), prec);
+    //pp_t::set_auto_truncate_degree(prec);
+    //return pp2vec(_series(ex, var, prec), prec);
+    umap_short_basic map;
+    RCP<const Basic> d = ex;
+    RCP<const Integer> fac = one;
+    map[0] = d->subs({{var, zero}});
+    for (unsigned i = 1; i < prec; i++) {
+        d = d->diff(var);
+        fac = fac->mulint(*integer(i));
+        map[i] = div(d->subs({{var, zero}}), fac);
+    }
+    return map;
 #else
I'm getting 11 ms vs 778 ms for with and without diff
Ralf Stephan
@rwst
Jan 01 2016 06:48
@isuruf That would be a surprise. First, however, I note that the expansion test file is not run with make test, which it was before. Maybe a side effect of the file removal problems earlier?
I get identical times for the benchmark. That's strange as well.
Ralf Stephan
@rwst
Jan 01 2016 07:04
In both cases 740-745 ms.
Isuru Fernando
@isuruf
Jan 01 2016 07:05
In both cases? Then, there must be something I'm doing wrong.
Ralf Stephan
@rwst
Jan 01 2016 07:05
How can it not run the test with make test in the master branch? Puzzle.
Isuru Fernando
@isuruf
Jan 01 2016 07:06
make test doesn't work, right?
I mean with cmake
Ralf Stephan
@rwst
Jan 01 2016 07:06
Maybe something wrong here too; what's the probability of getting the same number with different algorithms?
Isuru Fernando
@isuruf
Jan 01 2016 07:06
Btw, check out the following,
def series(fun, x, p):
    t = fun.subs({x:0})
    d = fun
    fac = 1
    for i in range(1, p):
        fac *= i
        d = d.diff(x)
        t += x**i/fac*d.subs({x:0})
    return t
x = var("x")
%time e=series(sin(x)*cos(x), x, 1000)
%time e=(sin(x)*cos(x)).series(x, 1000)
With sage-7.0.beta1, I get
CPU times: user 104 ms, sys: 0 ns, total: 104 ms
Wall time: 101 ms
CPU times: user 9.16 s, sys: 32 ms, total: 9.19 s
Wall time: 9.19 s
@rwst, differentiating and getting the coefficients is O(n*m), where m is the time to do a differentiation and subs, whereas with the power series method it is O(n*n)
Time to do a diff is surprisingly lower
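[Editor's note: the O(n*n) cost of the power series method comes from the truncated Cauchy product. A minimal plain-Python sketch (helper names are hypothetical, not SymEngine code) multiplies the coefficient lists of sin and cos; the result matches the known series of sin(x)*cos(x) = sin(2x)/2.]

```python
from fractions import Fraction
from math import factorial

def sin_coeffs(n):
    # Taylor coefficients of sin(x): x - x^3/3! + x^5/5! - ...
    return [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 1 else Fraction(0)
            for k in range(n)]

def cos_coeffs(n):
    # Taylor coefficients of cos(x): 1 - x^2/2! + x^4/4! - ...
    return [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 0 else Fraction(0)
            for k in range(n)]

def mul_series(a, b, n):
    # Truncated Cauchy product: n*(n+1)/2 coefficient multiplications, i.e. O(n^2)
    c = [Fraction(0)] * n
    for i in range(n):
        for j in range(n - i):
            c[i + j] += a[i] * b[j]
    return c

n = 10
prod = mul_series(sin_coeffs(n), cos_coeffs(n), n)
# sin(x)*cos(x) = sin(2x)/2, whose x^k coefficient for odd k is (-1)^((k-1)/2) * 2^(k-1)/k!
expected = [Fraction((-1) ** (k // 2) * 2 ** (k - 1), factorial(k)) if k % 2 == 1 else Fraction(0)
            for k in range(n)]
print(prod == expected)  # True
```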
Ralf Stephan
@rwst
Jan 01 2016 07:13
make test works but the expansion test is not run. It is built however.
Isuru Fernando
@isuruf
Jan 01 2016 07:14

Maybe something wrong here too, what's the probability for the same number with different algorithms?

Now I get what you are saying. Yes, it's very strange for two algorithms to give the same number

Maybe because series_expansion_sincos is not a test, but a benchmark?
Ralf Stephan
@rwst
Jan 01 2016 07:22
No, I meant the test under test/basic. It has nothing to do with the algorithm/benchmark issue. Happens also with a fresh clone so must be a bug. Also, is it intended that in a new clone cmake does not look for piranha? Sorry to open several questions at once.
Ralf Stephan
@rwst
Jan 01 2016 07:31
@isuruf Ok I confirm your numbers: 12ms vs 745ms --- it was user error
That's probably most visible with multiplications.
There are two add_test with the same name test_series
Ralf Stephan
@rwst
Jan 01 2016 07:32
Argh.
Isuru Fernando
@isuruf
Jan 01 2016 07:32
piranha is optional, so a new clone doesn't look for it. You have to do cmake -DWITH_PIRANHA=yes
Ralf Stephan
@rwst
Jan 01 2016 07:33
OK. So what do we do. Check for muls and do the diff if so, else polynomials?
Isuru Fernando
@isuruf
Jan 01 2016 07:34
Have two algorithms, taylor_series and power_series and from python you can specify the algorithm?
Sage has .series and .taylor
But .taylor seems to be slower as there needs to be conversion to maxima
Ralf Stephan
@rwst
Jan 01 2016 07:35
People still complain that series is slow. Also the plan in Sage is to unify both.
Isuru Fernando
@isuruf
Jan 01 2016 07:39
Ah, then I guess we should check for mul and do the diff
Ralf Stephan
@rwst
Jan 01 2016 07:40
Are there expressions containing a mul where using diff will be slower? I think the speedup is so big that it will almost always be faster.
And it matters more with long series where the speedup is even greater.
This can be easily implemented in Pynac too. With attribution to you, of course.
Isuru Fernando
@isuruf
Jan 01 2016 07:43
I'll check several expressions and see whether there are expressions like that. Also for diff, we have to check whether the result is a taylor series or not
Maybe a check whether the expression at x=0 is a pole before switching to diff
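[Editor's note: the pole check suggested here can be sketched numerically in plain Python (the helper name is hypothetical, not SymEngine's API): evaluate the function at x = 0 and treat a division error or a non-finite result as a pole, which would rule out the diff-based Taylor algorithm.]

```python
import math

def has_pole_at_zero(f):
    # Hypothetical helper: probe f at x = 0 and flag division errors
    # or non-finite values as a pole at the expansion point.
    try:
        v = f(0.0)
    except ZeroDivisionError:
        return True
    return math.isinf(v) or math.isnan(v)

print(has_pole_at_zero(lambda x: math.sin(x) / (1 + x)))  # False
print(has_pole_at_zero(lambda x: 1 / x))                  # True
```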
Isuru Fernando
@isuruf
Jan 01 2016 07:52
It's good to have it in Pynac too
Isuru Fernando
@isuruf
Jan 01 2016 10:05
sin(x)/(1+x) is slow
Differentiating the expression leads to bigger and bigger expressions (in number of terms), and that is slower than polynomials
Ralf Stephan
@rwst
Jan 01 2016 10:11
Multiplication of sin with (1+x)^-1 should not be slower than sin*cos
So diff only if there is no division.
Isuru Fernando
@isuruf
Jan 01 2016 10:48

It's difficult to figure out what should happen for, e.g., sin(x)*log(1+x).
The time taken to differentiate depends on how many terms there are in the output.
For example, compare

In [29]:  (sin(x)*cos(x)).diff(x, 30)
Out[29]:  -1073741824*cos(x)*sin(x)

and

In [32]: (sin(x)*log(1+x)).diff(x, 30)
Out[32]:  -log(x + 1)*sin(x) + 30*cos(x)/(x + 1) - 435*sin(x)/(x + 1)^2 - 8120*cos(x)/(x + 1)^3 + 164430*sin(x)/(x + 1)^4 + 3420144*cos(x)/(x + 1)^5 - 71253000*sin(x)/(x + 1)^6 - 1465776000*cos(x)/(x + 1)^7 + 29498742000*sin(x)/(x + 1)^8 + 576864288000*cos(x)/(x + 1)^9 - 10902735043200*sin(x)/(x + 1)^10 - 198231546240000*cos(x)/(x + 1)^11 + 3452532763680000*sin(x)/(x + 1)^12 + 57365159765760000*cos(x)/(x + 1)^13 - 905550022016640000*sin(x)/(x + 1)^14 - 13522880328781824000*cos(x)/(x + 1)^15 + 190165504623494400000*sin(x)/(x + 1)^16 + 2505710178568396800000*cos(x)/(x + 1)^17 - 30764552747978649600000*sin(x)/(x + 1)^18 - 349744389134915174400000*cos(x)/(x + 1)^19 + 3654828866459863572480000*sin(x)/(x + 1)^20 + 34807893966284414976000000*cos(x)/(x + 1)^21 - 299031452710352474112000000*sin(x)/(x + 1)^22 - 2288240681609653714944000000*cos(x)/(x + 1)^23 + 15350281239131427004416000000*sin(x)/(x + 1)^24 + 88417619937397019545436160000*cos(x)/(x + 1)^25 - 425084711237485670891520000000*sin(x)/(x + 1)^26 - 1637363332174018880471040000000*cos(x)/(x + 1)^27 + 4736658210931983189934080000000*sin(x)/(x + 1)^28 + 9146650338351415815045120000000*cos(x)/(x + 1)^29 - 8841761993739701954543616000000*sin(x)/(x + 1)^30
Ralf Stephan
@rwst
Jan 01 2016 13:30
So we're chasing ghosts because we fell for (sin*cos)''''=4*sin*cos?
I mean (sin*cos)''''=16*sin*cos
Sumith Kulal
@Sumith1896
Jan 01 2016 14:42
In EvalDouble, what do these lines mean
    void bvisit(const NumberWrapper &x) {
        apply(*(x.eval(53)));
    }
Ralf Stephan
@rwst
Jan 01 2016 15:02
Call apply on the evaluation with 53-bit precision?
Sumith Kulal
@Sumith1896
Jan 01 2016 15:03
53 is?
By default, I'm assuming
Ralf Stephan
@rwst
Jan 01 2016 15:04
The same as in Sage, I forget why.
Sumith Kulal
@Sumith1896
Jan 01 2016 15:05
NP :) I was wondering what the magic number was.
Sumith Kulal
@Sumith1896
Jan 01 2016 15:07
Significant precision, cool
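[Editor's note: the magic number 53 is the significand (mantissa) width of an IEEE-754 double, which Python exposes directly; a quick check:]

```python
import sys

# An IEEE-754 binary64 ("double") carries 53 bits of significand precision,
# which is why EvalDouble evaluates the NumberWrapper at 53 bits.
print(sys.float_info.mant_dig)  # 53
```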
Sumith Kulal
@Sumith1896
Jan 01 2016 17:32
Once DiffVisitor is implemented, all calls become diff(mybasic, symbol), right?
Is there any way to get mybasic->diff(symbol) working?
Or will we have such things only in the Python wrappers?
Isuru Fernando
@isuruf
Jan 01 2016 17:33
Why not have a Basic::diff method ?
Sumith Kulal
@Sumith1896
Jan 01 2016 17:35
Oh
Like mybasic->__str__() does the printing
Isuru Fernando
@isuruf
Jan 01 2016 17:35
yes
Sumith Kulal
@Sumith1896
Jan 01 2016 17:35
Cool
Srajan Garg
@srajangarg
Jan 01 2016 17:36
Then what's the point of the visitor?
we're defining Basic::myfunc() in basic.h, but its actual definition is given through the visitor?
Why not just put the definition in basic.cpp?
Isn't the point of the Visitor that we don't have to add arbitrarily many functions to a class?
Srajan Garg
@srajangarg
Jan 01 2016 17:41
so there's just one function to interface it with the actual class, but we can 'apply' many functions without changing the class again and again
Isuru Fernando
@isuruf
Jan 01 2016 17:42
No, the use of it is to avoid adding arbitrarily many functions to classes. There's a distinction between a class and classes. With a visitor you don't have to add the function to many classes
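[Editor's note: a minimal sketch of the visitor pattern being discussed, in Python with hypothetical class names (not SymEngine's actual node hierarchy): a new operation over all node types lives in one visitor class instead of being added to every node class.]

```python
class Symbol:
    def __init__(self, name): self.name = name
    def accept(self, visitor): return visitor.visit_symbol(self)

class Add:
    def __init__(self, left, right): self.left, self.right = left, right
    def accept(self, visitor): return visitor.visit_add(self)

class Mul:
    def __init__(self, left, right): self.left, self.right = left, right
    def accept(self, visitor): return visitor.visit_mul(self)

class CountSymbolsVisitor:
    # A new operation across all node types, added without touching the node classes.
    def visit_symbol(self, node): return 1
    def visit_add(self, node): return node.left.accept(self) + node.right.accept(self)
    def visit_mul(self, node): return node.left.accept(self) + node.right.accept(self)

x, y = Symbol("x"), Symbol("y")
expr = Add(Mul(x, y), x)  # represents x*y + x
print(expr.accept(CountSymbolsVisitor()))  # 3
```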
Srajan Garg
@srajangarg
Jan 01 2016 17:43
Oh wait
Yes
Right thank you
So, basically when we have a function to be made for many different classes (all different), we use a visitor?
Isuru Fernando
@isuruf
Jan 01 2016 17:44
yes
Srajan Garg
@srajangarg
Jan 01 2016 17:52
should I make a new file for the NumDenomVisitor?
Isuru Fernando
@isuruf
Jan 01 2016 17:53
Yeah