I am also writing a program that allows you to interactively type in and modify an SDF, which is then interpreted (and differentiated for shading) by a Futhark program.
But it's a bit slow. Raymarching is super compute intensive.
Interesting problem, I hadn't even considered that way of creating flexibility in a Futhark program. I could only see splitting the program into smaller parts that can be called from another language, so this is pretty inspiring.
Troels Henriksen
@athas
The interpretive overhead is 5x on CPU and 10x on GPU, but I think I can come up with a better design.
The use cases will probably be limited, but the idea of applying some user-defined function in a Futhark kernel is something I have considered, actually for a pretty similar application to this.
There is this Haskell library called ImplicitCAD that basically outputs geometry (STL files) based on equations, but it runs on the CPU and is quite sluggish for some things that I tried.
I suppose in most cases the visualisation is the end goal anyway, so then raymarching is more to the point, but for, say, 3D printing, it's pretty nice.
Troels Henriksen
@athas
The original idea behind having an interpreter in Futhark was for "pricing engines". Many complex financial contracts are basically small programs.
And you might have so many contracts, and they may change so often, that it makes no sense to compile specialised code.
That certainly seems like a valid application. For some reason I just find the idea of interpretation on GPU to be a bit fascinating.
I guess the fact that we can also differentiate interpreted programs on GPU made it even more so.
Troels Henriksen
@athas
Yes, that was actually the main thing I wanted to make sure worked!
I am still enthralled by the notion of differentiating something like an interpreter.
Tobias Røikjer
@TobiasRoikjer
I have this one function which I use with a higher-order function:
let f (a: i32) (b: i32) (c: i32): i32 =
  a + b + c

let g (func) =
  replicate 5 (func 1 2 3)
and I invoke it as e.g. g (f)
However, say I want to decorate/annotate the f function to e.g. multiply its result by two:
let mul2 (x) =
  x * 2
How can I then make this the supplied function? Is there some "varargs" in Futhark, so that I could invoke it as g (f |> mul2)?
Troels Henriksen
@athas
You can define your own zoo of combinators to make it more concise, but usually I just write out the lambda.
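For example, a minimal sketch along those lines (reusing the f, g, and mul2 names from your snippet; the lambda just applies mul2 to the result of f):

let f (a: i32) (b: i32) (c: i32): i32 =
  a + b + c

let mul2 (x: i32): i32 =
  x * 2

let g (func: i32 -> i32 -> i32 -> i32) =
  replicate 5 (func 1 2 3)

-- Instead of g (f |> mul2), pass a lambda that wraps f in mul2.
let decorated = g (\a b c -> mul2 (f a b c))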
Tobias Røikjer
@TobiasRoikjer
Thank you, so I take it there is no sort of "varargs" macro? I.e. a kind of polymorphism over the number of function parameters.
Troels Henriksen
@athas
There is not.
Tobias Røikjer
@TobiasRoikjer
Sorry if this is an obvious question, but I cannot find an example. I am writing a generic function that takes an integer, and I want to convert a bool to that type of integer inside the function. I.e. if it were an i32 I would use i32.bool. Is there a way of annotating that the type must be an integer or rational? I presume it is straightforward, seeing https://futhark-lang.org/pkgs/github.com/athas/matte/0.1.2/doc/prelude/math.html#1753, but I am not sure.
Troels Henriksen
@athas
You mean bounded polymorphism, as in Haskell's type classes? This is only possible through the module system.
But if you want something more lightweight, you can ask the user to pass in a bool-to-number function. E.g. def foo 't (x: t) (to_t: bool -> t) = ...
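A rough sketch of both approaches (the module and function names here are just illustrative, assuming the prelude's integral module type includes the bool conversion that i32.bool provides):

-- Bounded polymorphism through the module system: a parametric module
-- over any integral type, which supplies the bool -> t conversion.
module mk_from_bool (T: integral) = {
  def from_bool (b: bool): T.t = T.bool b
}

module from_bool_i32 = mk_from_bool i32
-- from_bool_i32.from_bool true evaluates to 1i32.

-- The lightweight alternative: the caller passes the conversion function.
def foo 't (flag: bool) (to_t: bool -> t): t =
  to_t flag

def example = foo true i32.bool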
Tobias Røikjer
@TobiasRoikjer
ah that is great. So Futhark can infer the type interface by constraints like these?
I had my computer crash when I ran a Futhark program on my AMD GPU (it works, but is slow, on my old NVIDIA GPU). I don't think this is a Futhark problem, but I wonder what the issue could be. The power draw should be well within what the PSU can handle, especially since I wasn't running anything heavy on other components. Do you know any 'known good' compute loads to test maximum power draw, just to rule that out as a cause? I have previously run other Futhark programs on it without issue.
Troels Henriksen
@athas
I don't know what the most power-intensive GPU program would look like. I assume something that maxes out both memory traffic and compute, which isn't an easy combination. Does it consistently crash with that program?
I have had many random crashes with GPU drivers. Did you get a kernel panic or did the machine just reboot?
The PSU is rated for 1300 W, so it really shouldn't be the issue, but I am not sure exactly how the power is delivered. It's a Dell workstation where it's pretty hard to access the cables, except for those intended for expansion.
Troels Henriksen
@athas
I have experienced that with AMD GPUs where the problem was certainly not power delivery. There was a version of Mesa where just initialising the OpenCL context would trigger a reboot.
This doesn't tell you what is going wrong, but it means that AMD's driver (or hardware) has bugs where userspace behaviour can cause reboots.