@botev I checked out your arrayfire crate for GIR. You aren’t doing any source/kernel generation; you’re simply using ArrayFire Arrays instead of compiling the source to a kernel and then loading it in ArrayFire.
Is there a reason for that?
Alexander Botev
@botev
you cannot do kernel generation in ArrayFire
the reason is that ArrayFire makes it easy to get things going, since it already implements the numerical routines and works on any backend
I'm currently working on the opencl bit
where kernel generation will happen
ArrayFire is a nice abstraction to use, and to show how the graph works, without needing to do kernel generation
jonysy
@jonysy
I understand. You’re basically creating a heavily optimized transpiler - which is a huge undertaking
Alexander Botev
@botev
what is a transpiler? it also illustrates how you can use the autodiff of GIR with other packages which already have numerical routines
jonysy
@jonysy
Aren’t you essentially transpiling Rust to CL?
Alexander Botev
@botev
is that like translating?
sorry, I’ve never come across the term
jonysy
@jonysy
Yes, basically. I should probably use the word translator, anyway
Or compiler
Alexander Botev
@botev
then yes, however I like to think of it as a Meta-LLVM
what LLVM does is take some code in language X and translate it to binary for architecture Y
jonysy
@jonysy
A transpiler is a source-to-source language translator. So, it compiles (or translates, for that matter) a source to another source
Alexander Botev
@botev
here we abstract the architecture to a numerical framework (OpenCL, CUDA, ArrayFire, etc.), which is a level above binary code
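(To make the analogy concrete, a rough Rust sketch of that abstraction, assuming a hypothetical backend trait; none of these names are GIR’s actual API:)

```rust
// A tiny expression graph; GIR's real graph is far richer.
enum Node {
    Constant(f64),
    Neg(Box<Node>),
    Exp(Box<Node>),
    Add(Box<Node>, Box<Node>),
    Recip(Box<Node>),
}

// The "architecture" the graph is lowered to: each numerical framework
// (ArrayFire, OpenCL, CUDA, ...) would implement this trait.
trait Backend {
    type Value;
    fn eval(&self, node: &Node) -> Self::Value;
}

// Trivial reference backend that interprets the graph on the CPU; an
// OpenCL backend would instead emit kernel source from the same graph.
struct Interpreter;

impl Backend for Interpreter {
    type Value = f64;
    fn eval(&self, node: &Node) -> f64 {
        match node {
            Node::Constant(c) => *c,
            Node::Neg(a) => -self.eval(a),
            Node::Exp(a) => self.eval(a).exp(),
            Node::Add(a, b) => self.eval(a) + self.eval(b),
            Node::Recip(a) => 1.0 / self.eval(a),
        }
    }
}
```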
jonysy
@jonysy
Right. So a sigmoid function 1.0 / (1.0 + exp(-x)) in native Rust could compile to CL like so:
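(A hand-written sketch of what such generated source might look like; the kernel name and signature are illustrative, not GIR’s actual output:)

```rust
// Native Rust source: an element-wise sigmoid.
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// What a generated OpenCL kernel for the same expression might look
// like, held as source text ready to be compiled and loaded at runtime.
const SIGMOID_KERNEL_SRC: &str = r#"
__kernel void sigmoid(__global const float* x,
                      __global float* out,
                      const uint n) {
    size_t i = get_global_id(0);
    if (i < n) {
        out[i] = 1.0f / (1.0f + exp(-x[i]));
    }
}
"#;
```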
neverfox
@neverfox
Oh well, it exists now, so I think we can consider NN in Rust solved. The PyTorch API is excellent and uses graph-based AD.
usamec
@usamec
Hi. Does GIR support creation of recurrent nets?
Alexander Botev
@botev
unfortunately I'm no longer developing that project due to being busy with my PhD
technically it would not be too hard to add, but currently it doesn't support them
usamec
@usamec
thank you for the answer
Alexander Botev
@botev
sure, no problem. I would generally suggest using Rust bindings to TensorFlow/MXNet/PyTorch if you need autodiff
unless you have a good reason not to
usamec
@usamec
I have made my own for CNTK, but I am still looking at the ecosystem. The TF C API does not support gradients for while loops (but you can load a model built from Python). Not sure about MXNet and PyTorch (I haven't seen any bindings yet).
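(For what it’s worth, loading a graph built from Python with the `tensorflow` crate looks roughly like this; `model.pb` and the op names `input`/`output` are placeholders for whatever the exported model actually uses:)

```rust
use tensorflow::{
    Graph, ImportGraphDefOptions, Session, SessionOptions, SessionRunArgs, Tensor,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a frozen GraphDef exported from Python.
    let proto = std::fs::read("model.pb")?;
    let mut graph = Graph::new();
    graph.import_graph_def(&proto, &ImportGraphDefOptions::new())?;

    let session = Session::new(&SessionOptions::new(), &graph)?;

    // Feed a dummy input and fetch the output; op names are placeholders.
    let x = Tensor::new(&[1, 4]).with_values(&[1.0f32, 2.0, 3.0, 4.0])?;
    let mut args = SessionRunArgs::new();
    args.add_feed(&graph.operation_by_name_required("input")?, 0, &x);
    let out = args.request_fetch(&graph.operation_by_name_required("output")?, 0);
    session.run(&mut args)?;

    let result: Tensor<f32> = args.fetch(out)?;
    println!("{:?}", &result[..]);
    Ok(())
}
```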