    Alexander Botev
    @botev
    it creates an instance of the function class which has access to the graph and to the parameters
    the eval method first checks that your input shapes make sense
    which is provided by verify_shapes from gir_core, as it will most likely be common for any backend
    after this it calls the internal_eval, which in theory should never fail
    and after this it takes from the graph whatever was marked as outputs and returns them
    the expr_map
    maps an id of a node in the graph to an actual value
    so the internal_eval
    essentially first inserts into the expr_map all input
    and parameter values
    and then loops through the graph nodes in topological order
    and stores the value of each node in the expr_map
    the actual evaluation is in compute_node
    which essentially evaluates each op from GIR with the corresponding ArrayFire function
    using the already computed arguments of its parents from the expr_map
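    The evaluation flow described above can be sketched in plain Rust. This is an illustrative toy, not the real gir_core API: the `Node` enum, the `internal_eval` signature, and the use of plain `f64` values (instead of ArrayFire arrays) are all simplifying assumptions; the real `compute_node` would dispatch each GIR op to an ArrayFire call.

    ```rust
    use std::collections::HashMap;

    // Hypothetical, heavily simplified node kinds; the real GIR ops are richer.
    #[derive(Clone)]
    enum Node {
        Input(usize),      // index into the inputs slice
        Add(usize, usize), // ids of the two parent nodes
        Mul(usize, usize),
    }

    // Mirrors the described internal_eval: seed the expr_map with input
    // (and parameter) values, then walk the nodes in topological order,
    // computing each node from its parents' already-stored values.
    fn internal_eval(graph: &[Node], inputs: &[f64], outputs: &[usize]) -> Vec<f64> {
        let mut expr_map: HashMap<usize, f64> = HashMap::new();
        for (id, node) in graph.iter().enumerate() {
            let value = match node {
                Node::Input(i) => inputs[*i],
                // In the real backend this is compute_node, dispatching
                // to the corresponding ArrayFire function.
                Node::Add(a, b) => expr_map[a] + expr_map[b],
                Node::Mul(a, b) => expr_map[a] * expr_map[b],
            };
            expr_map.insert(id, value);
        }
        // Finally, pull out whatever the graph marked as outputs.
        outputs.iter().map(|id| expr_map[id]).collect()
    }

    fn main() {
        // Graph for (x + y) * x with x = 2.0, y = 3.0.
        let graph = vec![
            Node::Input(0),
            Node::Input(1),
            Node::Add(0, 1),
            Node::Mul(2, 0),
        ];
        let result = internal_eval(&graph, &[2.0, 3.0], &[3]);
        println!("{:?}", result); // [10.0]
    }
    ```

    Storing values keyed by node id is what makes the single topological pass sufficient: by the time a node is visited, every parent's value is already in the map.
    
    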
    jonysy
    @jonysy
    I'm going through your steps currently, so it may take a while to get back to you..
    Quick question..
    What do you mean by “leaf primitives”? The Layer and Solver structs?
    Alexander Botev
    @botev
    well that's the part where I'm not too sure, as I don't know how low-level Leaf's Layer leaves are
    for instance I guess if they have an AffineLayer or something like that, that could be used for evaluating MatMul
    jonysy
    @jonysy
    I’m going to hard-fork Gir. Can you add a few benchmarks?
    Do you think it’ll be faster than what Leaf is using (most likely dual numbers)?
    Alexander Botev
    @botev
    so if it uses dual numbers for forward diff, then for larger matrices gir_af will definitely be faster
    in general yes I should add some benchmarks
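    For context, a minimal sketch of the dual-number technique being speculated about (note the chat only guesses that Leaf uses it; the struct and method names here are made up for illustration). Each value carries its derivative alongside it, so forward-mode diff needs one full pass per input direction, which is the scaling concern for larger matrices.

    ```rust
    // A dual number: the primal value plus its derivative with respect
    // to one chosen input variable.
    #[derive(Clone, Copy, Debug, PartialEq)]
    struct Dual {
        val: f64, // primal value
        der: f64, // derivative w.r.t. the chosen input
    }

    impl Dual {
        // Constants have zero derivative.
        fn constant(v: f64) -> Dual { Dual { val: v, der: 0.0 } }
        // The variable we differentiate with respect to has derivative 1.
        fn variable(v: f64) -> Dual { Dual { val: v, der: 1.0 } }

        fn add(self, o: Dual) -> Dual {
            Dual { val: self.val + o.val, der: self.der + o.der }
        }
        fn mul(self, o: Dual) -> Dual {
            // Product rule: (f * g)' = f' * g + f * g'
            Dual { val: self.val * o.val, der: self.der * o.val + self.val * o.der }
        }
    }

    fn main() {
        // f(x) = x*x + 3x at x = 2 → value 10, derivative 2x + 3 = 7
        let x = Dual::variable(2.0);
        let f = x.mul(x).add(Dual::constant(3.0).mul(x));
        println!("{} {}", f.val, f.der); // 10 7
    }
    ```

    A graph-based backend like gir_af instead builds the whole computation once and evaluates it with batched array ops, which is why it should win on larger matrices.
    
    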
    Alexander Botev
    @botev
    if you want I can make the example with a deeper net
    and you can impl the same in your Leaf fork
    see how they compare
    jonysy
    @jonysy

    > if you want I can make the example with a deeper net

    Yes, please!

    I won’t be busy (for most of) tomorrow. I plan on taking a deep dive into Leaf and Gir..
    Alexander Botev
    @botev
    Ok, I've pushed a 6-layer net
    jonysy
    @jonysy
    K
    Alexander Botev
    @botev
    for me it seems that, on the CPU, the OpenCL ArrayFire backend is faster than the CPU backend
    also you probably want to run it with --release
    anyway good night!
    jonysy
    @jonysy
    Night!
    jonysy
    @jonysy
    @botev shouldn’t 2 be the expected output? https://github.com/jonysy/polynomial/blob/master/examples/demo.rs#L62
    Alexander Botev
    @botev
    hmm, now that I've looked at the demo, I think the expected values in that block are wrong, almost all of them
    let me correct them; I think I used different values of a and b when I calculated the expected values by hand
    but yes you are correct that should be a 2
    next should be 3 not 1
    etc..
    I think the program calculates them correctly, but the string in the Expected is wrong
    so it seems the three wrong values are: 78 should be 168, 0 should be 2, and 1 should be 3
    Alexander Botev
    @botev
    thanks for spotting the mistake
    jonysy
    @jonysy
    No problem
    jonysy
    @jonysy
    @botev Ok, so.. Gir would replace Collenchyma-nn, correct?
    Alexander Botev
    @botev
    yes, I guess to some extent that is the correct place
    in the Leaf stack
    jonysy
    @jonysy
    It all makes sense now :smile: