    Alexander Botev
    @botev
    the expr_map
    maps an id of a node in the graph to an actual value
    so the internal_eval
    essentially first inserts into the expr_map all inputs
    and parameter values
    and then loops through the graph nodes in topological order
    and stores the value in the expr_map
    the actual evaluation is in compute_node
    which essentially, for any op from GIR, evaluates it with the corresponding ArrayFire function
    using the already computed arguments from its parents in expr_map
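To make the flow above concrete, here is a minimal scalar sketch of that evaluation loop. The `Op`, `Node`, and `internal_eval` shapes are illustrative stand-ins, not GIR's actual types, and the real implementation stores ArrayFire arrays rather than `f64`s:

```rust
use std::collections::HashMap;

// Illustrative stand-ins for GIR's types; the real project's
// names and shapes may differ.
#[derive(Clone, Copy)]
enum Op { Input, Param, Add, MatMul }

struct Node { id: usize, op: Op, parents: Vec<usize> }

// Sketch of the loop described above: seed expr_map with inputs
// and parameters, then walk the graph in topological order,
// computing each node from its already-computed parents.
fn internal_eval(nodes: &[Node], seeds: &HashMap<usize, f64>) -> HashMap<usize, f64> {
    let mut expr_map: HashMap<usize, f64> = HashMap::new();
    for node in nodes { // nodes assumed topologically sorted
        let value = match node.op {
            // Inputs and parameter values are inserted first.
            Op::Input | Op::Param => seeds[&node.id],
            // Interior ops (compute_node's job) read parent values
            // from expr_map; in gir_af each op would dispatch to the
            // corresponding ArrayFire function instead.
            Op::Add => expr_map[&node.parents[0]] + expr_map[&node.parents[1]],
            Op::MatMul => expr_map[&node.parents[0]] * expr_map[&node.parents[1]],
        };
        expr_map.insert(node.id, value);
    }
    expr_map
}

fn main() {
    // y = x * w: a MatMul node (id 2) with parents 0 and 1
    let nodes = vec![
        Node { id: 0, op: Op::Input, parents: vec![] },
        Node { id: 1, op: Op::Param, parents: vec![] },
        Node { id: 2, op: Op::MatMul, parents: vec![0, 1] },
    ];
    let seeds = HashMap::from([(0, 2.0), (1, 3.0)]);
    println!("{:?}", internal_eval(&nodes, &seeds)[&2]); // 6.0
}
```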
    jonysy
    @jonysy
    I'm going through your steps currently, so it may take a while to get back to you..
    Quick question..
    What do you mean by “leaf primitives”? The Layer and Solver structs?
    Alexander Botev
    @botev
    well that's the part where I'm not too sure, as I'm not sure how low-level Leaf's Layers are
    for instance, I guess if they have an AffineLayer or something like that, it can be used for evaluating MatMul
    jonysy
    @jonysy
    I’m going to hard-fork Gir. Can you add a few benchmarks?
    Do you think it’ll be faster than what Leaf is using (most likely dual numbers)?
    Alexander Botev
    @botev
    so if it uses dual numbers for forward diff, gir_af will definitely be faster for larger matrices
    in general, yes, I should add some benchmarks
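For reference, a minimal sketch of forward-mode autodiff with dual numbers, the technique Leaf is speculated to use above (the `Dual` type here is illustrative, not Leaf's API). Each scalar carries its derivative along, so every elementwise op does extra work, which is why a graph-based ArrayFire backend can pull ahead on large matrices:

```rust
// Textbook dual-number forward differentiation.
#[derive(Clone, Copy, Debug)]
struct Dual { val: f64, der: f64 }

impl Dual {
    fn constant(v: f64) -> Self { Dual { val: v, der: 0.0 } }
    fn variable(v: f64) -> Self { Dual { val: v, der: 1.0 } }
    // Product rule: (uv)' = u'v + uv'
    fn mul(self, o: Dual) -> Dual {
        Dual { val: self.val * o.val, der: self.der * o.val + self.val * o.der }
    }
    fn add(self, o: Dual) -> Dual {
        Dual { val: self.val + o.val, der: self.der + o.der }
    }
}

fn main() {
    // f(x) = x*x + 3x at x = 2: value 10, derivative 2x + 3 = 7
    let x = Dual::variable(2.0);
    let f = x.mul(x).add(x.mul(Dual::constant(3.0)));
    println!("f = {}, f' = {}", f.val, f.der);
}
```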
    Alexander Botev
    @botev
    if you want I can make the example with a deeper net
    and you can impl the same in your Leaf fork
    see how they compare
    jonysy
    @jonysy

    > if you want I can make the example with a deeper net

    Yes, please!

    I won’t be busy (for most of) tomorrow. I plan on taking a deep dive into Leaf and Gir..
    Alexander Botev
    @botev
    Ok, I've pushed a 6-layer net
    jonysy
    @jonysy
    K
    Alexander Botev
    @botev
    for me it seems that the OpenCL arrayfire backend is faster than the CPU backend
    also you probably want to run it with --release
    anyway good night!
    jonysy
    @jonysy
    Night!
    jonysy
    @jonysy
    @botev shouldn’t 2 be the expected output? https://github.com/jonysy/polynomial/blob/master/examples/demo.rs#L62
    Alexander Botev
    @botev
    hmm, now that I look at the demo, I think the Expected values are wrong in that block, almost all of them
    let me correct them; I think I used different values of a and b when I calculated the expected values by hand
    but yes, you are correct, that should be a 2
    the next should be 3, not 1
    etc..
    I think the program calculates them correctly but the string in the Expected is wrong
    so it seems the three wrong values are: 78 should be 168, 0 should be 2, and 1 should be 3
    Alexander Botev
    @botev
    thanks for spotting the mistake
    jonysy
    @jonysy
    No problem
    jonysy
    @jonysy
    @botev Ok, so.. Gir would replace Collenchyma-nn, correct?
    Alexander Botev
    @botev
    yes, I guess to some extent that is the correct place
    in the Leaf stack
    jonysy
    @jonysy
    It all makes sense now :smile:
    jonysy
    @jonysy

    @botev I started my own “gir” project here.

    I tried to figure out the overall structure/design of the main Gir project, but gave up due to the lack of comments/tests in the project..

    Alexander Botev
    @botev
    yeah I know, I've been quite busy and need to add comments throughout
    btw did you manage to compare the runtimes of arrayfire against collenchyma?
    jonysy
    @jonysy
    Not yet, but I'm sure my Collenchyma fork will outperform Af. There are a lot of ways to optimize the shared tensor.
    Alexander Botev
    @botev
    give it a try, arrayfire has significant optimizations of its own
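A minimal sketch of one way to time the two backends head to head; the workloads in `main` are placeholders and would be replaced by the same forward/backward pass implemented on gir_af and on the Collenchyma fork:

```rust
use std::hint::black_box;
use std::time::Instant;

// Generic timing helper; warm up once so one-time setup (e.g. GPU
// kernel compilation) does not skew the average.
fn time_it<F: FnMut()>(label: &str, iters: u32, mut f: F) {
    f();
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    println!("{}: {:?} per iteration", label, start.elapsed() / iters);
}

fn main() {
    // Placeholder workloads: swap in the actual net passes.
    time_it("gir_af", 100, || { black_box((0..1_000u64).sum::<u64>()); });
    time_it("collenchyma", 100, || { black_box((0..1_000u64).sum::<u64>()); });
}
```

As noted earlier in the chat, run any such comparison with `cargo run --release`; debug builds distort the numbers.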