These are chat archives for symengine/symengine

23rd Jun 2015
Ondřej Čertík
@certik
Jun 23 2015 04:07 UTC
yes
Sumith Kulal
@Sumith1896
Jun 23 2015 14:56 UTC
@bluescarni Any help on my previous query, I'm still stuck
Francesco Biscani
@bluescarni
Jun 23 2015 17:39 UTC
@Sumith1896 I am not sure I understand the question completely: we do have a temp my_pair instance into which we write the exponent during each term multiplication. Or am I misremembering?
Sumith Kulal
@Sumith1896
Jun 23 2015 17:40 UTC
Yes we have
Say the temp my_pair is (3, 6)
And there is (3, 2) in the set
final result should be (3, 8) in the set, right?
Francesco Biscani
@bluescarni
Jun 23 2015 17:42 UTC
yep
then you should compute the destination bucket of my_pair
Sumith Kulal
@Sumith1896
Jun 23 2015 17:42 UTC
Given (3, 6) how do I use _bucket because that element does not exist
Francesco Biscani
@bluescarni
Jun 23 2015 17:43 UTC
let me check a second
are you talking about the first or the second optimisation we discussed?
Sumith Kulal
@Sumith1896
Jun 23 2015 17:44 UTC
First
Francesco Biscani
@bluescarni
Jun 23 2015 17:44 UTC
ok.. so the use of low-level methods
Sumith Kulal
@Sumith1896
Jun 23 2015 17:44 UTC
Yes
Francesco Biscani
@bluescarni
Jun 23 2015 17:45 UTC
so the general procedure would be like this:
Sumith Kulal
@Sumith1896
Jun 23 2015 17:46 UTC
I have a basic procedure implemented
but I am really confused
Francesco Biscani
@bluescarni
Jun 23 2015 17:46 UTC
  • first locate the destination bucket of my_pair with the _bucket() method
  • then call _find() passing the my_pair instance and the output from _bucket()
  • if the result of _find() is end(), then use _unique_insert()
  • otherwise do the multiply accumulate
Sumith Kulal
@Sumith1896
Jun 23 2015 17:48 UTC
Agreed to all the points.
Trouble with the first point: we need to do it through temp.first, but the argument type of _bucket is my_pair, not my_pair.first
Francesco Biscani
@bluescarni
Jun 23 2015 17:48 UTC
what is confusing you?
can you point to the current code, if it is on github?
probably I am misunderstanding something
right... the _bucket() method takes as input a pair
Sumith Kulal
@Sumith1896
Jun 23 2015 17:50 UTC
So if I pass (3, 6), the bucket value of (3, 2) won't be returned.
It will be end()
Yes
Francesco Biscani
@bluescarni
Jun 23 2015 17:50 UTC
it should not return end()
if it does it means there is some inconsistency in the handling of equality or hashing
because hash and equality should be defined only in terms of .first
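A minimal sketch of what such functors could look like; my_pair, my_hash and hash_eq are the names that appear in the assertion output further down, while the member types here are only assumptions:

    #include <cstddef>
    #include <functional>

    struct my_pair {
        long long first;   // packed exponents
        long long second;  // coefficient (piranha::integer in the real code)
    };

    // Hash only the exponent part, so that (3, 2) and (3, 6) hash identically.
    struct my_hash {
        std::size_t operator()(const my_pair &p) const
        {
            return std::hash<long long>()(p.first);
        }
    };

    // Equality also ignores the coefficient.
    struct hash_eq {
        bool operator()(const my_pair &a, const my_pair &b) const
        {
            return a.first == b.first;
        }
    };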
Sumith Kulal
@Sumith1896
Jun 23 2015 17:51 UTC
So is it valid if I pass (3, 6)? Will I get the existing (3, 2) and hence serve our purpose?
Francesco Biscani
@bluescarni
Jun 23 2015 17:51 UTC
the hash of (3,2) should be the same as (3,6)
if you look up (3,6) and (3,2) is in the set, you should receive an iterator to it
Sumith Kulal
@Sumith1896
Jun 23 2015 17:52 UTC
Hence, the method comes down to
    for (auto &a: A) {
        for (auto &b: B) {
            temp.first = a.first + b.first;
            temp.second = a.second * b.second;
            size_t bucket = C._bucket(temp);
            auto it = C._find(temp, bucket);
            if (it == C.end()) {
                C._unique_insert(temp, bucket);
            } else {
                piranha::math::multiply_accumulate(it->second,a.second,b.second);
            }
        }
    }
Nothing is verified here; does it look right at first glance?
The way I pass arguments, etc.?
Francesco Biscani
@bluescarni
Jun 23 2015 17:53 UTC
should be ok, apart from the fact that you can avoid one operation
temp.second = a.second * b.second; this you should do inside the if (it == C.end()) {
Sumith Kulal
@Sumith1896
Jun 23 2015 17:54 UTC
Go on
Francesco Biscani
@bluescarni
Jun 23 2015 17:54 UTC
because otherwise in the else you are doing another multiplication
there's another thing too
Sumith Kulal
@Sumith1896
Jun 23 2015 17:54 UTC
Yes, it should work, shouldn't it? I don't know why I moved it above in the first place
Go on
Francesco Biscani
@bluescarni
Jun 23 2015 17:55 UTC
the low-level methods do not update the size of the table or resize it, you have to take care of it manually
let me check a second
Sumith Kulal
@Sumith1896
Jun 23 2015 17:55 UTC
One second
If I have a rehash done before
That takes care of it, doesn't it?
Francesco Biscani
@bluescarni
Jun 23 2015 17:58 UTC
it should yes
but just generally speaking
it should be something like this:
there's two things here
first we check if the max load factor of the table would be exceeded after the insertion
if that's the case, we rehash to the next available size and recompute the bucket
secondly, at the end of the insertion we update the size (that is, the number of elements stored in the table)
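The snippet shared at this point is not preserved in the archive; a sketch of the insertion step described above, using C for the destination hash set and temp for the temporary my_pair as in the code pasted further down (details are assumptions), might look like:

    size_t bucket = C._bucket(temp);
    auto it = C._find(temp, bucket);
    if (it == C.end()) {
        // Would this insertion push the load factor over the limit?
        if ((double(C.size()) + 1) / C.bucket_count() > C.max_load_factor()) {
            C._increase_size();        // rehash to the next available size
            bucket = C._bucket(temp);  // bucket count changed, recompute
        }
        temp.second = a.second * b.second;
        C._unique_insert(temp, bucket);
        // The low-level insert does not bump the element count, do it manually.
        C._update_size(C.size() + 1u);
    } else {
        piranha::math::multiply_accumulate(it->second, a.second, b.second);
    }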
Sumith Kulal
@Sumith1896
Jun 23 2015 18:01 UTC
What does max_load_factor give?
Francesco Biscani
@bluescarni
Jun 23 2015 18:01 UTC
with the _update_size() method
max_load_factor() is hard-coded to one currently
it means that if there are more elements stored than buckets, then we need to make the table bigger
Sumith Kulal
@Sumith1896
Jun 23 2015 18:02 UTC
So removing the rehash to 10,000 at the top is a good move?
Taking the resizing into consideration?
Francesco Biscani
@bluescarni
Jun 23 2015 18:03 UTC
it is good because it removes complexity from the main insertion routine
you can forget about checking for the load factor
and avoid a branch
but you need to be able to guess in advance, which might not be so easy
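A rough illustration of that pre-sizing idea, with A, B and C as in the loops pasted below; the estimate here is only a placeholder guess, not something Piranha actually computes:

    // Pre-size C once, before the loops, so the per-insertion load-factor
    // branch can be dropped entirely; estimated_terms would have to come
    // from some heuristic guess about the size of the result.
    const std::size_t estimated_terms = 8192; // placeholder guess
    C.rehash(estimated_terms);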
Sumith Kulal
@Sumith1896
Jun 23 2015 18:04 UTC
Before I update to this, there is an error in evaluate_sparsity happening
0,11488
1,3605
2,1206
3,85
24ms
number of terms: 6272
expand2d: /usr/local/include/piranha/hash_set.hpp:784: piranha::hash_set<T, Hash, Pred>::~hash_set() [with T = SymEngine::my_pair; Hash = SymEngine::my_hash; Pred = SymEngine::hash_eq]: Assertion `sanity_check()' failed.

Abort caught. Printing stacktrace:

Traceback (most recent call last):

Done.
Aborted (core dumped)
Francesco Biscani
@bluescarni
Jun 23 2015 18:05 UTC
does this happen after you have started using the low-level methods?
Sumith Kulal
@Sumith1896
Jun 23 2015 18:05 UTC
Yes
It was fine before
Francesco Biscani
@bluescarni
Jun 23 2015 18:05 UTC
right, it is probably because the number of elements is not updated
in debug mode, each time a hash table is destructed it will self-check that everything is consistent
so the hash table iterates through the buckets and counts 6272 elements
Sumith Kulal
@Sumith1896
Jun 23 2015 18:06 UTC
Yes, it is not in evaluate_sparsity, my bad, it is in the destructor
Francesco Biscani
@bluescarni
Jun 23 2015 18:06 UTC
but the internal count of elements is probably zero because _update_size() was never called
so in general all methods in piranha which start with an underscore _ are considered low-level and potentially dangerous
and they are intended to be used only in low-level code when you really need to squeeze out as much as possible
so things like this can happen
Sumith Kulal
@Sumith1896
Jun 23 2015 18:09 UTC
Okay, this can be corrected by the piece you mentioned above
In that latest gist of yours, is it on purpose that a second variable named bucket is declared?
Also, you cannot use it outside; it goes out of scope
Francesco Biscani
@bluescarni
Jun 23 2015 18:10 UTC
sorry that should read: bucket = C._bucket(temp);
it is the same variable defined above
Sumith Kulal
@Sumith1896
Jun 23 2015 18:10 UTC
Cool
Francesco Biscani
@bluescarni
Jun 23 2015 18:10 UTC
we need to recompute the bucket as now the bucket count has changed
Sumith Kulal
@Sumith1896
Jun 23 2015 18:11 UTC
On average, a little bit of speedup
I now hit 22ms
And a considerable number of times (though not always)
So this is the first optimization
Francesco Biscani
@bluescarni
Jun 23 2015 18:12 UTC
ok... there's another quick thing you can try
this is easier
just occurred to me
Sumith Kulal
@Sumith1896
Jun 23 2015 18:13 UTC
@bluescarni I think I should not be rehashing inside the poly_mul3 routine
That might be choking the speed
Francesco Biscani
@bluescarni
Jun 23 2015 18:14 UTC
you mean the rehash(10000)?
Sumith Kulal
@Sumith1896
Jun 23 2015 18:15 UTC
Yes
Took it out; now I consistently get 22ms
Not as good as I thought
Go on, you were telling something
Francesco Biscani
@bluescarni
Jun 23 2015 18:16 UTC
Try something like this:
so we pre-compute the begin and end iterators for B on top
there's a certain cost for getting to the begin iterator
Sumith Kulal
@Sumith1896
Jun 23 2015 18:17 UTC
ohh so you are using your iterators
Francesco Biscani
@bluescarni
Jun 23 2015 18:17 UTC
and you are paying that cost each time you go into the inner loop
the code you have with the auto also uses the iterators
but re-computes them each time
ah shit
you need to change the name of the it iterator
Sumith Kulal
@Sumith1896
Jun 23 2015 18:19 UTC
Yeah, I'll take care of those :smile:
Francesco Biscani
@bluescarni
Jun 23 2015 18:19 UTC
cheers
Sumith Kulal
@Sumith1896
Jun 23 2015 18:21 UTC
81ms, what's happening?
Francesco Biscani
@bluescarni
Jun 23 2015 18:21 UTC
that's weird, something is wrong
can you paste the code?
Sumith Kulal
@Sumith1896
Jun 23 2015 18:22 UTC
    const hash_set::iterator begin = B.begin(), end = B.end();
    for (auto &a: A) {
        for (hash_set::iterator itB = begin; itB != end; ++itB) {
            temp.first = a.first + itB->first;
            size_t bucket = C._bucket(temp);
            auto it = C._find(temp, bucket);
            if (it == C.end()) {
                // Check if the load factor of C is too large.
                if ((double(C.size()) + 1) / C.bucket_count() > C.max_load_factor()) {
                    // Increase the size of the table.
                    C._increase_size();
                    // Recompute the bucket.
                    bucket = C._bucket(temp);
                }
                temp.second = a.second * itB->second;
                C._unique_insert(temp, bucket);
                C._update_size(C.size() + 1u);
            } else {
                piranha::math::multiply_accumulate(it->second,a.second,it->second);
            }
        }
    }
(how do you prepare gists quickly?)
Francesco Biscani
@bluescarni
Jun 23 2015 18:23 UTC
Usually I just copy-paste directly into the website?
problem might be here: piranha::math::multiply_accumulate(it->second,a.second,it->second);
this should be piranha::math::multiply_accumulate(it->second,a.second,itB->second);
Sumith Kulal
@Sumith1896
Jun 23 2015 18:24 UTC
Aah, seems like I didn't take good care
Francesco Biscani
@bluescarni
Jun 23 2015 18:25 UTC
how do you paste the code here with the nice coloured syntax? That I don't know how to do :)
Sumith Kulal
@Sumith1896
Jun 23 2015 18:25 UTC
```
Type code here
Francesco Biscani
@bluescarni
Jun 23 2015 18:25 UTC
ahh cool cheers
Sumith Kulal
@Sumith1896
Jun 23 2015 18:26 UTC
```
between 3 back ticks above and below
Francesco Biscani
@bluescarni
Jun 23 2015 18:26 UTC
cool!
Sumith Kulal
@Sumith1896
Jun 23 2015 18:26 UTC
26 ms, no improvement
Francesco Biscani
@bluescarni
Jun 23 2015 18:27 UTC
mhm ok
seems like we might have to look at the other optimisation
Sumith Kulal
@Sumith1896
Jun 23 2015 18:27 UTC
Do you think we should go for the second optimization rather than taking care of this?
Yes exactly what I wanted to say :smile:
Francesco Biscani
@bluescarni
Jun 23 2015 18:28 UTC
I think you can revert to auto at the moment yes
I would suggest editing a new file for the new iteration, so you don't lose what you have so far
Sumith Kulal
@Sumith1896
Jun 23 2015 18:28 UTC
Yup
Francesco Biscani
@bluescarni
Jun 23 2015 18:29 UTC
let me check something
I can actually try to disable the optimisation in piranha if it is not too difficult
and see the performance impact
just to get an idea if we are on the right track
Sumith Kulal
@Sumith1896
Jun 23 2015 18:30 UTC
Cool, we were trying stuff like that on Piranha before
but never managed to successfully.
Francesco Biscani
@bluescarni
Jun 23 2015 18:42 UTC
mhm I cannot really see much difference in this benchmark with or without the second optimisation
I need to think a bit more about what is going on
Sumith Kulal
@Sumith1896
Jun 23 2015 18:47 UTC
Is there anything else we are missing?
Or that second optimization, the last one we have?
Francesco Biscani
@bluescarni
Jun 23 2015 18:50 UTC
I just tried disabling the second optimisation in Piranha, but there was no change in the benchmark time
I am trying to think if there is something else missing
Sumith Kulal
@Sumith1896
Jun 23 2015 18:50 UTC
That sounded like a good optimization to me, though I didn't get much of the internals
We are very close, just there
Francesco Biscani
@bluescarni
Jun 23 2015 18:51 UTC
it is, but a lot of stuff in Piranha is oriented towards very large polynomials
with 6000 terms a lot of the optimisation won't have any benefit
as everything fits in the cache memory
tricky part is understanding what affects this use case
Sumith Kulal
@Sumith1896
Jun 23 2015 18:56 UTC
Can we try out the second optimization here and see how it goes?
Francesco Biscani
@bluescarni
Jun 23 2015 19:03 UTC
I am afraid it will not change much, I would prefer to understand what is going on first
you have the code committed somewhere? I could take a look
Sumith Kulal
@Sumith1896
Jun 23 2015 19:07 UTC
Give me one minute
Ondřej Čertík
@certik
Jun 23 2015 19:32 UTC
Thanks @bluescarni and @Sumith1896 for looking into this. Can it be something with the term? Piranha is using its own data structure, so maybe we can try using it. Besides hashing, exponent packing (are we using exactly the same ones as Piranha?), hash table (we now use Piranha's) and integer (we now use piranha::integer), what else can possibly affect the performance of the mul_poly function?
The 2x speedup is too big to leave on the table. We need to figure out what is causing it.
Sumith Kulal
@Sumith1896
Jun 23 2015 19:39 UTC
Yes, my_pair was made up in no time.
Sumith Kulal
@Sumith1896
Jun 23 2015 19:48 UTC
Could others also run the benchmark?
Just to make sure I am not making some small error there.
Sumith Kulal
@Sumith1896
Jun 23 2015 19:54 UTC
@bluescarni I get 19ms when I change the initial rehash from 10,000 to 100,000
Minimum (not always, but better than previous)
Francesco Biscani
@bluescarni
Jun 23 2015 19:59 UTC
@sumith: what is the timing for piranha on your machine? 13 ms as on the report?
Sumith Kulal
@Sumith1896
Jun 23 2015 19:59 UTC
Yes
Francesco Biscani
@bluescarni
Jun 23 2015 20:00 UTC
mhm... rehashing to 100000 is too much, lots of wasted space
Sumith Kulal
@Sumith1896
Jun 23 2015 20:00 UTC
previously I was using my own but now I use the one implemented in master
Francesco Biscani
@bluescarni
Jun 23 2015 20:00 UTC
are you running the benchmarks in high priority and with the system at rest?
@certik my understanding is that Sumith's code is pretty close to what is in Piranha, I cannot understand where the discrepancy is coming from
we even have the exact same bucket layout
Sumith Kulal
@Sumith1896
Jun 23 2015 20:05 UTC
Piranha averaged 13.3 ms at complete rest with sudo nice -n -19
The new benchmark you implemented
expand2d in same conditions 21-22ms
and if I use rehash to 100,000, 18-19ms
Sumith Kulal
@Sumith1896
Jun 23 2015 20:14 UTC
@bluescarni Do you think my_pair is fine?
Francesco Biscani
@bluescarni
Jun 23 2015 20:14 UTC
I think so, the term class in Piranha is really thin.. it's a my_pair with a couple of methods on top
Sumith Kulal
@Sumith1896
Jun 23 2015 20:15 UTC
Hmm, we need to find other ways then
Francesco Biscani
@bluescarni
Jun 23 2015 20:19 UTC
just a shot in the dark, but what happens if you replace temp.second = a.second * b.second; with:
temp.second = a.second;
temp.second *= b.second;
btw, you are on a 64 bit machine right?
Sumith Kulal
@Sumith1896
Jun 23 2015 20:21 UTC
Yes
No change in timings
Why'd you expect it to change?
Francesco Biscani
@bluescarni
Jun 23 2015 20:23 UTC
I was thinking maybe the binary multiplication is slower than assignment + in-place multiplication
it shouldn't in this case, but it's just how that piece of code is written in Piranha
so I thought to give it a go :)
Sumith Kulal
@Sumith1896
Jun 23 2015 20:23 UTC
Ohh Cool
I changed unsigned long long to long long
Because Piranha interfaces with signed
Didn't affect any speed though
Francesco Biscani
@bluescarni
Jun 23 2015 20:25 UTC
I must say it is really mysterious... you have anything funky in your compile flags?
Sumith Kulal
@Sumith1896
Jun 23 2015 20:28 UTC
I use cmake -DWITH_PIRANHA=yes -DWITH_MPFR=yes .
Francesco Biscani
@bluescarni
Jun 23 2015 20:28 UTC
in release mode right?
Sumith Kulal
@Sumith1896
Jun 23 2015 20:28 UTC
SymEngine by default is Release
Any extra care for Piranha?
Francesco Biscani
@bluescarni
Jun 23 2015 20:29 UTC
ok.. but then I am a bit surprised that you hit that assertion?
that is supposed to fire only in debug mode
can you post what comes out when you do make VERBOSE=1
while compiling I mean
Sumith Kulal
@Sumith1896
Jun 23 2015 20:31 UTC
Give me a minute
Okay, these come from SymEngine's CMake
C++ compiler: /usr/bin/c++
Build type: Release
C++ compiler flags: -std=c++0x -Wall -Wextra -fPIC -O3 -march=native -ffast-math -funroll-loops -Wno-unused-parameter
Francesco Biscani
@bluescarni
Jun 23 2015 20:33 UTC
ok.. but can you check the actual build flags used when doing make VERBOSE=1?
Sumith Kulal
@Sumith1896
Jun 23 2015 20:34 UTC
Yes the build is in progress, I'll make a gist of the whole build when it completes
[ 27%] Building CXX object src/CMakeFiles/symengine.dir/pow.cpp.o
cd /home/sumith/github/csympy/src && /usr/bin/c++    -std=c++0x -Wall -Wextra -fPIC -O3 -march=native -ffast-math -funroll-loops -Wno-unused-parameter -I/home/sumith/github/csympy/src -I/home/sumith/github/csympy/src/teuchos -I/usr/local/include    -o CMakeFiles/symengine.dir/pow.cpp.o -c /home/sumith/github/csympy/src/pow.cpp
Example^
Francesco Biscani
@bluescarni
Jun 23 2015 20:35 UTC
right... that might be it then
try adding -DNDEBUG
to the build flags
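That is, the compiler flag line shown earlier would simply get -DNDEBUG appended, for example (exactly where SymEngine's CMake should pick this up is left open here):

    -std=c++0x -Wall -Wextra -fPIC -O3 -march=native -ffast-math -funroll-loops -Wno-unused-parameter -DNDEBUG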
Sumith Kulal
@Sumith1896
Jun 23 2015 20:47 UTC
This is the happiest I have been in days!
Good catch @bluescarni
We are averaging 14-15ms
Francesco Biscani
@bluescarni
Jun 23 2015 20:48 UTC
good! it sounded fishy
good job!
Sumith Kulal
@Sumith1896
Jun 23 2015 20:49 UTC
All credits to you :smile:
Francesco Biscani
@bluescarni
Jun 23 2015 20:49 UTC
nah I merely shot some suggestions :p
but that is something you might want to fix in the symengine build process eventually
Sumith Kulal
@Sumith1896
Jun 23 2015 20:50 UTC
Go on
Francesco Biscani
@bluescarni
Jun 23 2015 20:50 UTC
The NDEBUG macro is actually part of the standard:
if it is defined, the asserts will be skipped
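A tiny illustration of that standard behaviour, independent of SymEngine or Piranha:

    // Compile with -DNDEBUG and the assert below compiles away to nothing;
    // without it, the program aborts at the failed assertion.
    #include <cassert>

    int main()
    {
        assert(2 + 2 == 5);
        return 0;
    }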
Sumith Kulal
@Sumith1896
Jun 23 2015 20:51 UTC
I saw a couple of unused variable errors pop up in the test files
Should have guessed that
I don't think SymEngine has assert statements in the source
Francesco Biscani
@bluescarni
Jun 23 2015 20:52 UTC
right... the macro itself is supposed to turn assertions on/off, but in general people use it (or its absence) to signal a release vs. debug build
the library on which symengine depends might though
Sumith Kulal
@Sumith1896
Jun 23 2015 20:52 UTC
Yes
Francesco Biscani
@bluescarni
Jun 23 2015 20:52 UTC
as well as the C++ standard library (not sure about that though)
so you might gain some extra performance in general by defining NDEBUG
Sumith Kulal
@Sumith1896
Jun 23 2015 20:53 UTC
So in Release mode we just have to switch it on, right?
Francesco Biscani
@bluescarni
Jun 23 2015 20:53 UTC
that's what I usually do, but probably you should ask Ondrej before doing it as it's a rather pervasive thing to do
/usr/bin/c++ -std=c++11 -fdiagnostics-color=auto -ftemplate-depth=1024 -fvisibility-inlines-hidden -fvisibility=hidden -pthread -O3 -DNDEBUG
those are the flags active when I compile Piranha in release mode
Sumith Kulal
@Sumith1896
Jun 23 2015 20:55 UTC
Thanks for help in benchmarking.
Do you think we need to implement second optimization?
Francesco Biscani
@bluescarni
Jun 23 2015 20:57 UTC
we can discuss it, but personally I would tend not to optimise until there are measurements that show a difference between piranha and symengine
it becomes hard to understand what influences performance and what doesn't if you don't have experimental evidence
Sumith Kulal
@Sumith1896
Jun 23 2015 20:58 UTC
At times Piranha hits 14ms too on my machine
We are just 1ms off on average
Francesco Biscani
@bluescarni
Jun 23 2015 20:58 UTC
I don't think it's significant, it's probably within the error margin
if you want you could try with a larger polynomial
instead of (x + y + z + w)**15 try to raise to 20 or something like this
Sumith Kulal
@Sumith1896
Jun 23 2015 21:00 UTC
Also there are a couple of other things we need to take care of
The size of the hash set, etc.
Right now it is too premature a stage.
Francesco Biscani
@bluescarni
Jun 23 2015 21:01 UTC
yes.. it's a good prototype, and also still quite simple to understand
Sumith Kulal
@Sumith1896
Jun 23 2015 21:02 UTC
If @certik is satisfied with the speeds, I think we can lay out the design. He has basically wanted to reach Piranha at the lower level for a long time
One more thing @bluescarni
Can we handle negative exponents?
It accepts signed ints, that's why I ask, just confirming
Francesco Biscani
@bluescarni
Jun 23 2015 21:06 UTC
yes
the packing is built to deal with signed integers
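Not Piranha's actual encode/decode interface, just a hand-rolled toy illustrating the idea of packing a few small signed exponents into one machine integer (all limits and types below are made up for the example):

    #include <array>
    #include <cassert>
    #include <cstdint>

    // Toy Kronecker-style packing: 4 exponents, each in [-128, 127], stored as
    // signed base-256 digits of one 64-bit integer. Adding two packed codes adds
    // the exponents component-wise, as long as no component leaves its range.
    using packed_t = std::int64_t;

    packed_t encode(const std::array<int, 4> &expos)
    {
        packed_t code = 0;
        for (int e : expos) {
            assert(e >= -128 && e <= 127); // per-exponent limit of this toy scheme
            code = code * 256 + e;
        }
        return code;
    }

    std::array<int, 4> decode(packed_t code)
    {
        std::array<int, 4> expos{};
        for (int i = 3; i >= 0; --i) {
            packed_t r = code % 256;      // remainder carries the sign of code
            if (r > 127)  r -= 256;
            if (r < -128) r += 256;
            expos[i] = int(r);
            code = (code - r) / 256;
        }
        return expos;
    }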
Sumith Kulal
@Sumith1896
Jun 23 2015 21:06 UTC
Cool, how do you handle rational exponents?
Francesco Biscani
@bluescarni
Jun 23 2015 21:06 UTC
you can try encoding and decoding a vector of negative integers
rational exponents are just stored in a vector-like class, similar to std::vector
I haven't found a way of packing them, and I don't know if it would make sense
you have a lot of overhead with rational exponents because you have to normalise them, keep them in canonical form, etc.
handling the exponents becomes the biggest bottleneck
Sumith Kulal
@Sumith1896
Jun 23 2015 21:09 UTC
So your module also has a tuple representation of polynomials?
Also, when we were packing in powers of two, multiplication could lead to overflow, so we had planned on switching between the tuple and packed representations of the polynomial
But encode doesn't work in powers of two; we need to think about that too
Francesco Biscani
@bluescarni
Jun 23 2015 21:10 UTC
there is a class in piranha called simply monomial which allows generic representation of exponents
encode and decode will tell you when there is an overflow (an exception will be raised)
you can have monomial<rational>, monomial<integer>, etc.
Sumith Kulal
@Sumith1896
Jun 23 2015 21:11 UTC
Cool
Francesco Biscani
@bluescarni
Jun 23 2015 21:12 UTC
of course you can do also monomial<int> if you want
but that will be slower than using the codification
Sumith Kulal
@Sumith1896
Jun 23 2015 21:12 UTC
Two questions:
  • Does monomial<symbol> exist?
  • The overflow might happen in the process of multiplication and not encoding; is the exception still raised?
Sumith Kulal
@Sumith1896
Jun 23 2015 21:28 UTC
I'll go off for a while now @bluescarni
Thanks for all the help today
Would not have been possible without you.
Francesco Biscani
@bluescarni
Jun 23 2015 21:51 UTC
no problem!
Francesco Biscani
@bluescarni
Jun 23 2015 22:00 UTC
for your questions: monomial<symbol> does not exist in Piranha currently (a symbol in Piranha is just a label, a string representing a symbolic variable)
when you multiply, you first need to check manually for overflow because, once you have encoded, the encoded value is just an integer. It cannot know about overflowing while adding two integral values that, in reality, represent a sequence of exponents
you will have to check for overflow before a poly multiplication, but it's a quick check to be done
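As an illustration of what such a pre-multiplication check could look like on top of the toy packing sketched earlier (again, not Piranha's actual code; the limits match the toy scheme, not Piranha's):

    #include <algorithm>
    #include <array>
    #include <vector>

    // Component-wise exponent bounds of A and B: if the sums of the extremes stay
    // within the per-exponent limits, no product term can overflow a packed digit.
    // Bounds start at zero, which only makes the check more conservative.
    bool mul_would_overflow(const std::vector<std::array<int, 4>> &A,
                            const std::vector<std::array<int, 4>> &B)
    {
        std::array<int, 4> maxA{}, minA{}, maxB{}, minB{};
        for (const auto &e : A)
            for (int i = 0; i < 4; ++i) {
                maxA[i] = std::max(maxA[i], e[i]);
                minA[i] = std::min(minA[i], e[i]);
            }
        for (const auto &e : B)
            for (int i = 0; i < 4; ++i) {
                maxB[i] = std::max(maxB[i], e[i]);
                minB[i] = std::min(minB[i], e[i]);
            }
        for (int i = 0; i < 4; ++i)
            if (maxA[i] + maxB[i] > 127 || minA[i] + minB[i] < -128)
                return true; // a product term could exceed the packing limits
        return false;
    }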