    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    My approach to 'testing' has basically been: if it can handle my program, it probably works.
    Snektron
    @snektron:matrix.org
    [m]
    We have some large matches on sum types (without any payload) in our code, and Futhark compiles these to if-else chains.
    After some investigation, it turns out that gcc and clang (at least for the C backend) don't seem to optimize those into lookup tables.
    What would be the best way to get something like that?
    I assume just let A = 0 let B = 1 let C = 2 .... and let lut = ['A', 'B', 'C']
    but that's not very... readable
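    The workaround being described might be sketched like this (names and values are hypothetical, just to illustrate the shape of the idea): encode the payload-free sum type as small integers, and replace the match with an array index.

    ```futhark
    -- Hypothetical sketch: represent the payload-free sum type's
    -- constructors as small integers, and replace the match with
    -- an array lookup.
    type tag = i64          -- i64 because it is Futhark's index type
    let A: tag = 0
    let B: tag = 1
    let C: tag = 2

    -- Instead of `match x case #A -> 10 case #B -> 20 case #C -> 30`:
    let lut: [3]i32 = [10, 20, 30]
    let eval (x: tag): i32 = lut[x]
    ```

    This trades the type safety of the match for a guaranteed table lookup, which is the readability concern raised above.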
    Troels Henriksen
    @athas
    Hm. That's unfortunate. There's no way to make the Futhark compiler generate switch statements, so manual lookup tables with arrays are your best bet.
    Snektron
    @snektron:matrix.org
    [m]
    It seems odd to me that num_bits and the other bit functions on i64 types return i32
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    To be fair, it's not like there are any data types that have that many bits. i32 math is more efficient, and you could always convert as necessary.
    If I remember correctly iterate also uses i32. So I think it's pretty consistent.
    Orestis
    @omhepia:matrix.org
    [m]
    In case it is of any interest to anybody, I made a small marching cubes implementation: https://github.com/omalaspinas/futcubes. It's a quick and dirty port of some C code, so it may not be incredible :-D But it seems to work with the C backend (I haven't tested any other backend yet).
    Troels Henriksen
    @athas
    Cool!
    Can that be used for visualisation?
    Orestis
    @omhepia:matrix.org
    [m]
    It creates a triangulated iso-surface of a scalar field.
    Snektron
    @snektron:matrix.org
    [m]
    Oh, that's pretty cool :)
    Troels Henriksen
    @athas
    By the way, if any of you have problems or ideas for things that can be solved with automatic differentiation, let me know. (Preferably not deep learning/backprop.) We're building a hammer and looking for some nails.
    Snektron
    @snektron:matrix.org
    [m]
    What does automatic differentiation entail? Just taking an expression and computing the value of its derivative?
    Troels Henriksen
    @athas
    At a high level, yes. Computing derivatives of (in principle) arbitrary functions at some point.
    Including functions that are not differentiable everywhere.
    Snektron
    @snektron:matrix.org
    [m]
    My first thought would be some physics application
    Troels Henriksen
    @athas
    It's historically used a lot for optimisation problems in engineering. You can also use it to convert distance functions into surface normal functions.
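    The distance-function-to-normals idea can be sketched as follows, assuming a reverse-mode primitive with the shape of Futhark's later `vjp` (which was not available at the time of this discussion): the surface normal of an implicit surface is the normalised gradient of its signed distance function.

    ```futhark
    -- Sketch only: assumes a reverse-mode AD primitive `vjp` of type
    -- (a -> b) -> a -> b -> a, as Futhark later provided.
    let sphere_sdf ((x, y, z): (f64, f64, f64)): f64 =
      f64.sqrt (x*x + y*y + z*z) - 1

    let normal (p: (f64, f64, f64)): (f64, f64, f64) =
      -- Seed the scalar output with 1 to get the gradient of the SDF.
      let (gx, gy, gz) = vjp sphere_sdf p 1
      let len = f64.sqrt (gx*gx + gy*gy + gz*gz)
      in (gx/len, gy/len, gz/len)
    ```

    The same pattern works for any differentiable distance function, which is why AD makes hand-derived normal functions unnecessary.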
    Lots of cool things.
    Orestis
    @omhepia:matrix.org
    [m]
    I was wondering: can one compute spatial derivatives of a discrete scalar field?
    Snektron
    @snektron:matrix.org
    [m]
    You could calculate normals when ray marching using automatic differentiation
    Troels Henriksen
    @athas
    Yeah, I'll certainly do that, but that's more of a fun toy than something people need, I think.
    Snektron
    @snektron:matrix.org
    [m]
    I'm running into a situation where my program uses 8.4 GB of VRAM with the OpenCL backend, but only 5 with the CUDA backend (running on different machines).
    Are they using different kinds of allocators?
    Troels Henriksen
    @athas
    They shouldn't, but the amount of exploited parallelism can vary.
    They do not interrogate the hardware about its needs in the same way.
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    I think I have one use case where AD would be useful. I have some interaction models for rigid bodies that are in scalar form (i.e. a potential energy), but finding the gradient for them seems really awkward. What does it work on at this point?
    Troels Henriksen
    @athas
    Scalar functions with control flow, but no loops.
    Also some sequential use of arrays.
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    so no multivariate stuff yet?
    ok
    Troels Henriksen
    @athas
    Depends on your definition. It works fine with records and tuples, and you can have maps on top. But it'll be a week or two before we can handle most parallelism.
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    tuples are fine, in my case anyway
    I'll probably try it later. Is everything based on analytical gradients?
    Troels Henriksen
    @athas
    What do you mean?
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    Maybe the word is different in English, but I mean like D(sin x) -> cos x,
    not using numerical differentiation.
    I mean, you could compute (f(x+h) - f(x))/h numerically.
    My understanding of AD is that it works based on known gradients of the 'basic' functions at compile time, and the compiler builds an expression for the gradient of the composed function.
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    But I don't know whether that holds in all cases.
    A quick look at the Wikipedia page for AD tells me that everything is indeed 'analytical', which is nice.
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    It's mostly just fancy use of the chain rule as far as I can tell.
    Orestis
    @omhepia:matrix.org
    [m]
    Well, it's much more than that. Finite differences are far less accurate and much more prone to numerical error.
    And by numerical I mean the limited accuracy of computing machines as well as discretization errors.
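    The accuracy point can be made concrete with a small sketch (illustrative names, nothing Futhark-specific): a forward difference carries an O(h) truncation error, and roundoff error grows again as h shrinks, whereas the analytic derivative of sin is simply cos.

    ```futhark
    -- Sketch: forward-difference approximation vs. the analytic
    -- derivative for f(x) = sin x. The forward difference has O(h)
    -- truncation error, plus roundoff that dominates for tiny h;
    -- AD would instead produce the exact `cos x` expression.
    let f (x: f64): f64 = f64.sin x

    let forward_diff (h: f64) (x: f64): f64 = (f (x + h) - f x) / h

    let exact (x: f64): f64 = f64.cos x
    ```

    Choosing h is itself a numerical problem: too large and the truncation error dominates, too small and catastrophic cancellation in f(x+h) - f(x) does. AD sidesteps this trade-off entirely.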
    Gusten Theodor Isfeldt
    @Gusten_Isfeldt_gitlab
    Mostly, at least. I guess there are some edge cases.
    It would have saved me a lot of time if Futhark had this feature a year ago. Really nice to see it coming anyhow.
    Troels Henriksen
    @athas
    AD is not numerical differentiation, but neither is it completely analytical; it's regarded as a mix. Operationally, it's analytical gradients at the primitives, combined with the chain rule.
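    That "analytical gradients at the primitives plus the chain rule" description can be sketched as forward-mode AD with dual numbers (a minimal illustration, not the Futhark compiler's implementation): each primitive carries its analytic derivative, and arithmetic on the pairs applies the chain and product rules automatically.

    ```futhark
    -- Minimal forward-mode sketch (illustrative only): a dual number
    -- carries a value v and its derivative d with respect to the input.
    type dual = {v: f64, d: f64}

    let dmul (a: dual) (b: dual): dual =
      {v = a.v * b.v, d = a.v * b.d + a.d * b.v}  -- product rule

    let dsin (a: dual): dual =
      {v = f64.sin a.v, d = f64.cos a.v * a.d}    -- chain rule

    -- d/dx (x * sin x) at x: seed the input with derivative 1.
    let deriv_x_sin_x (x: f64): f64 =
      (dmul {v = x, d = 1} (dsin {v = x, d = 1})).d
    ```

    Every primitive's gradient (cos for sin, the product rule for *) is exact, so the composed derivative is exact up to floating-point roundoff, which is why AD avoids the discretization error of finite differences.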