    jonysy
    @jonysy
    I’m going to hard-fork Gir. Can you add a few benchmarks?
    Do you think it’ll be faster than what Leaf is using (most likely dual numbers)?
    Alexander Botev
    @botev
    so if it uses dual numbers for forward diff, gir_af will definitely be faster for larger matrices
    in general, yes, I should add some benchmarks
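    Since dual numbers for forward diff came up, here is a minimal sketch of how dual-number forward-mode AD works; this is illustrative only, not Leaf's or Gir's actual implementation, and all names are invented:

    ```rust
    // A dual number carries a value and its derivative; overloading the
    // arithmetic operators propagates derivatives automatically.
    #[derive(Clone, Copy, Debug)]
    struct Dual {
        val: f64, // function value
        der: f64, // derivative value
    }

    impl Dual {
        fn variable(x: f64) -> Self {
            // the variable we differentiate with respect to has derivative 1
            Dual { val: x, der: 1.0 }
        }
    }

    impl std::ops::Add for Dual {
        type Output = Dual;
        fn add(self, rhs: Dual) -> Dual {
            // sum rule: (u + v)' = u' + v'
            Dual { val: self.val + rhs.val, der: self.der + rhs.der }
        }
    }

    impl std::ops::Mul for Dual {
        type Output = Dual;
        fn mul(self, rhs: Dual) -> Dual {
            // product rule: (uv)' = u'v + uv'
            Dual { val: self.val * rhs.val, der: self.der * rhs.val + self.val * rhs.der }
        }
    }

    fn main() {
        // f(x) = x*x + x, so f'(x) = 2x + 1; at x = 3: f = 12, f' = 7
        let x = Dual::variable(3.0);
        let f = x * x + x;
        println!("{} {}", f.val, f.der); // 12 7
    }
    ```

    One forward pass computes the value and one directional derivative together, which is why it gets expensive for gradients over many parameters.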
    Alexander Botev
    @botev
    if you want I can make the example with a deeper net
    and you can impl the same in your Leaf fork
    see how they compare
    jonysy
    @jonysy

    if you want I can make the example with a deeper net

    Yes, please!

    I won’t be busy (for most of) tomorrow. I plan on taking a deep dive into Leaf and Gir.
    Alexander Botev
    @botev
    Ok I've pushed a 6-layer net
    jonysy
    @jonysy
    K
    Alexander Botev
    @botev
    for me it seems that the OpenCL arrayfire backend is faster than the CPU one
    also you probably want to run it with --release
    anyway good night!
    jonysy
    @jonysy
    Night!
    jonysy
    @jonysy
    @botev shouldn’t 2 be the expected output? https://github.com/jonysy/polynomial/blob/master/examples/demo.rs#L62
    Alexander Botev
    @botev
    hmm, now that I've looked at the demo, I think the Expected values in that block are wrong, almost all of them
    let me correct them; I think I used different values of a and b when I calculated the expected values by hand
    but yes, you are correct, that should be a 2
    next should be 3, not 1
    etc.
    I think the program calculates them correctly but the string in the Expected is wrong
    so it seems the three wrong values are: 78 should be 168, 0 should be 2, and 1 should be 3
    Alexander Botev
    @botev
    thanks for spotting the mistake
    jonysy
    @jonysy
    No problem
    jonysy
    @jonysy
    @botev Ok, so.. Gir would replace Collenchyma-nn, correct?
    Alexander Botev
    @botev
    yes, I guess to some extent that is the correct place
    in the Leaf stack
    jonysy
    @jonysy
    It all makes sense now :smile:
    jonysy
    @jonysy

    @botev I started my own “gir” project here.

    I tried to figure out the overall structure/design of the main Gir project, but gave up due to the lack of comments/tests in the project..

    Alexander Botev
    @botev
    yeah, I know; I've been quite busy and have to add comments
    btw did you manage to compare the runtimes with arrayfire against collenchyma?
    jonysy
    @jonysy
    Not yet, but I'm sure my Collenchyma fork will out-perform Af. There are a lot of ways to optimize the shared tensor.
    Alexander Botev
    @botev
    give it a try; arrayfire has significant optimizations of its own
    jonysy
    @jonysy
    In general, are symbolic NNs faster than non-symbolic NNs? If so, why does Leaf outperform Tensorflow?
    Alexander Botev
    @botev
    it used to outperform it back in the day, as TensorFlow was not really symbolic then
    jonysy
    @jonysy
    Would creating a graph-like container for Leaf make it symbolic?
    Alexander Botev
    @botev
    potentially, but that depends on what the graph does; its main benefit is being able to find and optimize intermediate computations
    jonysy
    @jonysy

    .. find and optimize intermediate computations

    Which is exactly what GIR does. Point taken.

    I really want to take Leaf’s philosophy, so to speak, and merge it with a symbolic approach...
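    That "find and optimize intermediate computations" idea can be sketched roughly as memoizing structurally identical subexpressions, so a shared intermediate like (x + y) is evaluated once; this is a toy illustration with invented names, not GIR's actual API:

    ```rust
    use std::collections::HashMap;

    // A tiny expression graph; deriving Eq + Hash lets structurally
    // identical subexpressions share one cache entry.
    #[derive(Clone, Debug, PartialEq, Eq, Hash)]
    enum Expr {
        Var(&'static str),
        Add(Box<Expr>, Box<Expr>),
        Mul(Box<Expr>, Box<Expr>),
    }

    // Evaluate with memoization keyed on the expression itself: if (x + y)
    // appears several times in the graph, it is only computed the first time.
    fn eval(e: &Expr, env: &HashMap<&str, f64>, cache: &mut HashMap<Expr, f64>) -> f64 {
        if let Some(&v) = cache.get(e) {
            return v; // reuse the shared intermediate result
        }
        let v = match e {
            Expr::Var(n) => env[n],
            Expr::Add(a, b) => eval(a, env, cache) + eval(b, env, cache),
            Expr::Mul(a, b) => eval(a, env, cache) * eval(b, env, cache),
        };
        cache.insert(e.clone(), v);
        v
    }

    fn main() {
        // s = x + y appears twice in (x + y) * (x + y); with the cache it is
        // computed once instead of twice.
        let s = Expr::Add(Box::new(Expr::Var("x")), Box::new(Expr::Var("y")));
        let expr = Expr::Mul(Box::new(s.clone()), Box::new(s));
        let env: HashMap<&str, f64> = [("x", 2.0), ("y", 3.0)].into();
        let mut cache = HashMap::new();
        println!("{}", eval(&expr, &env, &mut cache)); // (2+3)*(2+3) = 25
    }
    ```

    A real graph compiler would do this as a rewrite pass before execution rather than a runtime cache, but the payoff is the same: shared intermediates are computed once.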

    Alexander Botev
    @botev
    mmm you might want to look at pytorch then
    I think it is more like what you describe
    jonysy
    @jonysy
    I was actually looking at nngraph (Torch container)
    Given your definition, that doesn't really make it symbolic either...?
    "non-sequential" doesn't necessarily mean "symbolic", right?
    Alexander Botev
    @botev
    nope
    symbolic means that you have like a compilation phase
    where you change the graph
    and when you construct it, it does not actually do any computation
    but rather when you run it
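    To make that concrete, here is a toy sketch of a symbolic-style deferred graph (hypothetical types, not GIR's API): constructing the expression does no arithmetic, and the computation happens only when run is called:

    ```rust
    use std::collections::HashMap;

    // Building these nodes just records the structure of the computation;
    // no numbers are touched until run() is called.
    #[derive(Clone, Debug)]
    enum Expr {
        Var(String),
        Add(Box<Expr>, Box<Expr>),
        Mul(Box<Expr>, Box<Expr>),
    }

    // The "run" phase: walk the graph with concrete inputs. A symbolic
    // framework would also rewrite/optimize the graph before this step.
    fn run(e: &Expr, env: &HashMap<String, f64>) -> f64 {
        match e {
            Expr::Var(name) => env[name],
            Expr::Add(a, b) => run(a, env) + run(b, env),
            Expr::Mul(a, b) => run(a, env) * run(b, env),
        }
    }

    fn main() {
        // graph construction: no computation happens here
        let x = Expr::Var("x".into());
        let y = Expr::Mul(Box::new(x.clone()), Box::new(x)); // x * x
        // computation happens only now, at run time
        let mut env = HashMap::new();
        env.insert("x".to_string(), 4.0);
        println!("{}", run(&y, &env)); // 16
    }
    ```

    The gap between construction and execution is where a compilation phase can change the graph, which is exactly the distinction being drawn above.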