jonysy
@jonysy
@botev shouldn’t 2 be the expected output? https://github.com/jonysy/polynomial/blob/master/examples/demo.rs#L62
Alexander Botev
@botev
hmm, now that I've looked at the demo, I think the Expected values in that block are wrong, almost all of them
let me correct them, I think I used different values of a and b when I calculated the expected values by hand
but yes, you are correct, that should be a 2
next should be 3, not 1
etc..
I think the program calculates them correctly but the string in the Expected is wrong
so it seems the three wrong values are: 78 should be 168, 0 should be 2, and 1 should be 3
Alexander Botev
@botev
thanks for spotting the mistake
jonysy
@jonysy
No problem
jonysy
@jonysy
@botev Ok, so.. Gir would replace Collenchyma-nn, correct?
Alexander Botev
@botev
yes, I guess to some extent that is the correct place, I think
in the Leaf stack
jonysy
@jonysy
It all makes sense now :smile:
jonysy
@jonysy

@botev I started my own “gir” project here.

I tried to figure out the overall structure/design of the main Gir project, but gave up due to the lack of comments/tests in the project..

Alexander Botev
@botev
btw did you manage to compare the runtimes with arrayfire against collenchyma?
jonysy
@jonysy
Not yet, but I'm sure my Collenchyma fork will outperform ArrayFire. There are a lot of ways to optimize the shared tensor.
Alexander Botev
@botev
give it a try, arrayfire has significant optimizations of its own
jonysy
@jonysy
In general, are symbolic NNs faster than non-symbolic NNs? If so, why does Leaf outperform TensorFlow?
Alexander Botev
@botev
it used to outperform it back in the day, as TensorFlow was not really symbolic then
jonysy
@jonysy
Would creating a graph-like container for Leaf make it symbolic?
Alexander Botev
@botev
potentially, but that depends on what the graph does - its main benefit is being able to find and optimize intermediate computations
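As a toy illustration of what "find and optimize intermediate computations" can mean (invented types, nothing like gir's actual internals), memoized evaluation is one way a repeated subexpression gets computed only once:

```rust
// Hedged sketch: a shared subexpression (x + y) appears twice in the
// graph, but the memo table makes it get evaluated only once.
// All names here are illustrative, not gir's real machinery.

use std::collections::HashMap;

#[derive(Hash, PartialEq, Eq, Clone, Debug)]
enum E {
    Var(&'static str),
    Add(Box<E>, Box<E>),
    Mul(Box<E>, Box<E>),
}

fn eval(
    e: &E,
    env: &HashMap<&'static str, i64>,
    memo: &mut HashMap<E, i64>,
    evals: &mut usize,
) -> i64 {
    if let Some(&v) = memo.get(e) {
        return v; // reuse the intermediate result, no recomputation
    }
    *evals += 1;
    let v = match e {
        E::Var(n) => env[n],
        E::Add(a, b) => eval(a, env, memo, evals) + eval(b, env, memo, evals),
        E::Mul(a, b) => eval(a, env, memo, evals) * eval(b, env, memo, evals),
    };
    memo.insert(e.clone(), v);
    v
}

fn main() {
    // (x + y) appears twice; memoization evaluates it only once.
    let shared = E::Add(Box::new(E::Var("x")), Box::new(E::Var("y")));
    let expr = E::Mul(Box::new(shared.clone()), Box::new(shared));
    let env: HashMap<_, _> = [("x", 2), ("y", 3)].into_iter().collect();
    let mut memo = HashMap::new();
    let mut evals = 0;
    assert_eq!(eval(&expr, &env, &mut memo, &mut evals), 25); // (2+3)*(2+3)
    assert_eq!(evals, 4); // Mul, Add, x, y - the second Add is a memo hit
    println!("ok");
}
```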
jonysy
@jonysy

> .. find and optimize intermediate computations

Which is exactly what GIR does. Point taken.

I really want to take Leaf’s philosophy, so to speak, and merge it with a symbolic approach...

Alexander Botev
@botev
mmm you might want to look at pytorch then
I think it is more like what you describe
jonysy
@jonysy
I was actually looking at nngraph (a Torch container)
Given your definition, that doesn't really make it symbolic either...?
"non-sequential" doesn't necessarily mean "symbolic", right?
Alexander Botev
@botev
nope
symbolic means that you have like a compilation phase
where you change the graph
and when you construct it, it does not actually do any computation
but rather when you run it
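A minimal Rust sketch of that distinction, with made-up types (not gir's actual API): constructing the graph performs no arithmetic, and the work happens only when `run` is called.

```rust
// Hedged sketch of a "symbolic" expression: building the graph is just
// allocating nodes; evaluation is deferred to `run`.

#[derive(Debug)]
enum Expr {
    Const(f64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn run(e: &Expr, env: &std::collections::HashMap<&str, f64>) -> f64 {
    match e {
        Expr::Const(c) => *c,
        Expr::Var(name) => env[name],
        Expr::Add(a, b) => run(a, env) + run(b, env),
        Expr::Mul(a, b) => run(a, env) * run(b, env),
    }
}

fn main() {
    // Construction phase: builds x * 3 + 1, computes nothing yet.
    let graph = Expr::Add(
        Box::new(Expr::Mul(Box::new(Expr::Var("x")), Box::new(Expr::Const(3.0)))),
        Box::new(Expr::Const(1.0)),
    );
    // Run phase: the actual computation happens here.
    let mut env = std::collections::HashMap::new();
    env.insert("x", 2.0);
    println!("{}", run(&graph, &env)); // 2 * 3 + 1 = 7
}
```

The gap between the two phases is where a symbolic framework gets to rewrite the graph before any numbers flow through it.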
Roman Pearah
@neverfox
does gir have any automatic optimizations at this stage?
like if it gets x * 1 will it just drop the multiplication?
or is it presumed that optimizations are the responsibility of something downstream?
Alexander Botev
@botev
so at this stage no
in general there should be 5 layers, as in LLVM:
1. Interface - since it's written in Rust that does not exist in Rust itself, but you can export it to Python, etc., where it will have an API
2. IR - this is what gir_core currently is
3. Backend-agnostic optimization on the IR
4. Backend-specific optimization - this will be the downstream backend's job
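As a hedged illustration of layer 3, here is a toy backend-agnostic pass (invented names, not gir's real IR) that drops the `x * 1` pattern @neverfox asked about, along with `x + 0`:

```rust
// Hedged sketch of an algebraic simplification pass over a toy IR:
// rewrite x * 1 -> x and x + 0 -> x, recursing bottom-up.

#[derive(Debug, Clone, PartialEq)]
enum Node {
    Var(&'static str),
    Const(f64),
    Add(Box<Node>, Box<Node>),
    Mul(Box<Node>, Box<Node>),
}

fn simplify(n: Node) -> Node {
    match n {
        Node::Mul(a, b) => {
            let (a, b) = (simplify(*a), simplify(*b));
            match (a, b) {
                // multiplicative identity on either side
                (x, Node::Const(c)) | (Node::Const(c), x) if c == 1.0 => x,
                (a, b) => Node::Mul(Box::new(a), Box::new(b)),
            }
        }
        Node::Add(a, b) => {
            let (a, b) = (simplify(*a), simplify(*b));
            match (a, b) {
                // additive identity on either side
                (x, Node::Const(c)) | (Node::Const(c), x) if c == 0.0 => x,
                (a, b) => Node::Add(Box::new(a), Box::new(b)),
            }
        }
        other => other,
    }
}

fn main() {
    let expr = Node::Mul(Box::new(Node::Var("x")), Box::new(Node::Const(1.0)));
    assert_eq!(simplify(expr), Node::Var("x")); // x * 1 -> x
    println!("ok");
}
```

A real pass framework would run rewrites like this to a fixed point over the whole graph, but the shape is the same.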
Roman Pearah
@neverfox
I didn't think it did
what would you say the relative impact is on performance of graph optimization vs just having the ability to calculate on a fast backend?
right, which is the MXNet model
Alexander Botev
@botev
so if the backend is very fast you can potentially get away without too much impact
however memory optimization is not possible on the fly
since you don't know whether you are going to use something in the future, e.g. for gradients
when the graph is completed you can look back and say - ah, this is no longer needed after step X, so I can recycle the memory
this is why, for instance, Theano and TensorFlow have almost 50% memory usage compared to pytorch
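A rough sketch of that look-back step, with invented names: once the whole graph is known, compute the last step at which each tensor is read, after which its buffer can be recycled.

```rust
// Hedged sketch of last-use analysis on a finished graph: for each
// tensor, find the final step that reads it; after that step its
// memory can be reused. Step numbers and tensor names are made up.

use std::collections::HashMap;

/// ops[i] lists the tensor names read by step i of the (completed) graph.
fn last_use(ops: &[Vec<&'static str>]) -> HashMap<&'static str, usize> {
    let mut last = HashMap::new();
    for (step, reads) in ops.iter().enumerate() {
        for &t in reads {
            last.insert(t, step); // later reads overwrite earlier ones
        }
    }
    last
}

fn main() {
    // A toy schedule: step 0 reads a and b, steps 1 and 2 both read h.
    let ops = vec![vec!["a", "b"], vec!["h"], vec!["h"]];
    let last = last_use(&ops);
    // "a" is dead after step 0, so its buffer can be recycled from step 1 on;
    // "h" must survive until step 2.
    assert_eq!(last["a"], 0);
    assert_eq!(last["h"], 2);
    println!("ok");
}
```

An eager framework can't do this, because at any point it has no idea whether a tensor will be read again later - which is the point being made about on-the-fly memory optimization.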
Roman Pearah
@neverfox
gotcha
Alexander Botev
@botev
MXNet has even less, as they have even more aggressive memory optimization