However, we try to offload as much of the framework specifics as possible into the core of collenchyma, so that plugins are as easy to write as possible and look very similar across different backends.
By "swapping the backend at runtime" we mean that you don't have to decide on a backend at compile time; you can choose which backend to run on at runtime, depending on e.g. user input.
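The runtime-selection pattern described here can be sketched in plain Rust. This is purely illustrative (the trait, types, and `backend_from_arg` are made up for the demo, not collenchyma's actual API): the backend is chosen from a string that could come from user input instead of being fixed at compile time.

```rust
// Hypothetical sketch of runtime backend selection; names are illustrative.

trait Backend {
    fn name(&self) -> &'static str;
    fn axpy(&self, a: f32, x: &[f32], y: &mut [f32]);
}

struct Native;
struct Cuda; // stand-in; a real CUDA backend would dispatch to the GPU

impl Backend for Native {
    fn name(&self) -> &'static str { "native" }
    fn axpy(&self, a: f32, x: &[f32], y: &mut [f32]) {
        // y <- a * x + y, computed on the CPU
        for (yi, xi) in y.iter_mut().zip(x) { *yi += a * xi; }
    }
}

impl Backend for Cuda {
    fn name(&self) -> &'static str { "cuda" }
    fn axpy(&self, a: f32, x: &[f32], y: &mut [f32]) {
        // placeholder: a real implementation would launch a kernel here
        for (yi, xi) in y.iter_mut().zip(x) { *yi += a * xi; }
    }
}

/// Pick a backend at runtime from e.g. a CLI argument or config value.
fn backend_from_arg(arg: &str) -> Box<dyn Backend> {
    match arg {
        "cuda" => Box::new(Cuda),
        _ => Box::new(Native),
    }
}

fn main() {
    // In practice the choice could come from std::env::args() or user input.
    let backend = backend_from_arg("native");
    let x = vec![1.0f32, 2.0, 3.0];
    let mut y = vec![0.5f32, 0.5, 0.5];
    backend.axpy(2.0, &x, &mut y);
    println!("{}: {:?}", backend.name(), y);
}
```

The trait-object dispatch is what makes the compile-time decision unnecessary: all backends share one interface, so the calling code never names a concrete backend type.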
Marco Z
@ocramz
Ok, now I'm getting it, thank you :)
bklooste
@bklooste
ok i see a big improvement in the way layers work. I think there is enough reason to switch now. Will comment on leaf gitter.
Maximilian Goisser
@hobofan
@bklooste Glad to hear that ;)
Jonathan Reem
@reem
Is this the right place to ask code questions about the collenchyma codebase?
Michael Hirn (MJ)
@MichaelHirn
Yes :)
Richard Diamond
@DiamondLovesYou
Hey guys, FYI I've found the cause of autumnai/collenchyma-nn#45 and fixed it locally (along with another issue in the tests). I've got homework to do first, but after that I'll create a PR for the fix.
Michael Hirn (MJ)
@MichaelHirn
That's awesome. Looking forward to the PR :+1:
Richard Diamond
@DiamondLovesYou
I've also written n-dimensional conv code for the native backend. I finished it last weekend, but the tests incorrectly expected different results, so I had to borrow a friend's desktop (which has an NVIDIA GPU) to check what cuDNN generates. That's how I discovered my code was actually correct. Anyway, I decided to refactor the tests, so I need to retest on my friend's computer, which I can't do till Monday.
Michael Hirn (MJ)
@MichaelHirn
Yaaay! I was looking into conv for native a few days ago, but felt like it needed an interface for slicing to receive the proper spatial dimensions. So I thought we first had to implement the native memory of the SharedTensor via ndarray.
Really looking forward to seeing how you did it :clap:
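The stride bookkeeping mentioned here is the part a crate like ndarray would take over. A minimal sketch (not collenchyma code) of what handling it manually looks like for a row-major n-dimensional tensor:

```rust
// Illustrative stride arithmetic for an n-dimensional tensor stored in a
// flat, row-major buffer; the helper names are made up for this demo.

/// Row-major strides for a shape, e.g. [2, 3, 4] -> [12, 4, 1].
fn row_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut strides = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    strides
}

/// Flat offset of a multi-dimensional index into the linear buffer.
fn flat_index(index: &[usize], strides: &[usize]) -> usize {
    index.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    let shape = [2, 3, 4];
    let strides = row_major_strides(&shape);
    println!("strides: {:?}", strides);                    // [12, 4, 1]
    println!("offset:  {}", flat_index(&[1, 2, 3], &strides)); // 23
}
```

Every n-dimensional conv loop ends up repeating this kind of index math for inputs, filters, and outputs, which is why a slicing interface makes the native code much less error-prone.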
Richard Diamond
@DiamondLovesYou
Thanks! W.r.t. slicing: such an interface would be a good idea anyway; having to manually handle strides was kinda annoying. I haven't used ndarray, but taking a quick look, I'd say that would be a good direction to take, as it provides the features I would have liked to have when I wrote this.
Richard Diamond
@DiamondLovesYou
Crap. I refactored the conv code to use generic types instead of copy-pasted code for each type (I've made the native framework generic, at least w.r.t. convolutions), but in my infinite wisdom, I forgot about the CUDA backend.
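The generics refactor described here boils down to writing the kernel once over the element type. A hedged sketch (the `Float` trait is hand-rolled for the demo; in practice one might reach for num-traits instead), shown on a toy 1-D "valid" convolution:

```rust
// One generic implementation instead of separate f32 and f64 copies.

trait Float: Copy + std::ops::Mul<Output = Self> + std::ops::Add<Output = Self> {
    fn zero() -> Self;
}
impl Float for f32 { fn zero() -> Self { 0.0 } }
impl Float for f64 { fn zero() -> Self { 0.0 } }

/// A 1-D "valid" convolution, written once for any Float element type.
fn conv1d<T: Float>(input: &[T], kernel: &[T]) -> Vec<T> {
    let out_len = input.len() + 1 - kernel.len();
    (0..out_len)
        .map(|i| {
            kernel.iter().enumerate()
                .fold(T::zero(), |acc, (k, &w)| acc + input[i + k] * w)
        })
        .collect()
}

fn main() {
    // The same code path serves both precisions.
    let out32 = conv1d(&[1.0f32, 2.0, 3.0, 4.0], &[1.0f32, 1.0]);
    let out64 = conv1d(&[1.0f64, 2.0, 3.0, 4.0], &[1.0f64, 1.0]);
    println!("{:?} {:?}", out32, out64);
}
```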
bklooste
@bklooste
@DiamondLovesYou Richard, did this PR make it into 2.1?
Michael Hirn (MJ)
@MichaelHirn
No, it didn't. We didn't manage to review it yet. But thanks for reminding me. I am on it.
Philipp Dörfler
@phdoerfler
I just wrote this in an issue but only afterwards realised there is a chat here. What do I have to do to get this to run on OS X? There is a pull request which was merged on April 10th. Is that in 0.0.8 yet?
Bernhard Schuster
@drahnr
@DiamondLovesYou do you have time for a chat?
you did some pretty great work there with the conv fixup
Richard Diamond
@DiamondLovesYou
@drahnr Hey, sure! And thanks!
(Sorry just saw this!)
Bernhard Schuster
@drahnr
@DiamondLovesYou no worries
I would like to have a quick chat with you about collenchyma this week if you are interested :)
mostly about the design decisions that were made, and whether you are interested in pushing it further
bright-star
@bright-star
Hi all, are there any FPGA-specific backend plugins implemented or under development?
Bernhard Schuster
@drahnr
and yes, starting to implement the features in OpenCL kernels would be a great start!
one hint would be to look into dual numbers for differentiation to make that as pain-free as possible, but it's not strictly necessary
whatever you want to start with, I'd be pleased to review pull requests
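The dual-number hint can be sketched in a few lines: a toy forward-mode automatic differentiation, purely illustrative and not collenchyma code. Evaluating f at `Dual { re: x, eps: 1.0 }` yields f(x) in `re` and f'(x) in `eps`.

```rust
// Minimal dual numbers: a + b*ε with ε² = 0, so arithmetic carries
// derivatives along for free.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Dual { re: f64, eps: f64 }

impl std::ops::Add for Dual {
    type Output = Dual;
    fn add(self, o: Dual) -> Dual {
        Dual { re: self.re + o.re, eps: self.eps + o.eps }
    }
}

impl std::ops::Mul for Dual {
    type Output = Dual;
    fn mul(self, o: Dual) -> Dual {
        // product rule: (a + bε)(c + dε) = ac + (ad + bc)ε
        Dual { re: self.re * o.re, eps: self.re * o.eps + self.eps * o.re }
    }
}

/// f(x) = x² + 3x, so f'(x) = 2x + 3.
fn f(x: Dual) -> Dual {
    let three = Dual { re: 3.0, eps: 0.0 };
    x * x + three * x
}

fn main() {
    let x = Dual { re: 2.0, eps: 1.0 }; // seed the input's derivative with 1
    let y = f(x);
    println!("f(2) = {}, f'(2) = {}", y.re, y.eps); // 10 and 7
}
```

Because the derivative falls out of ordinary operator overloading, user-written layer code wouldn't need hand-derived gradients, which is what makes the approach attractive for kernels too.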
Bernhard Schuster
@drahnr
also don't hesitate to ask questions and discuss stuff
bright-star
@bright-star
Awesome, thanks!
subversive-owl
@subversive-owl
Oh wow there is OpenCL interest
that's actually what i'm here to ask about :)
I have a Virtex 5 here that I can't get to show up with https://github.com/cogciprocate/ocl 's API, but if I can get it talking, I was interested in trying autumn on it
Bernhard Schuster
@drahnr
@subversive-owl the OpenCL implementation is only just getting started; there is still a way to go
subversive-owl
@subversive-owl
I'm interested in helping! if I can get this FPGA working to test with
Bernhard Schuster
@drahnr
cool :)
I am currently investigating/experimenting with how to get automatic differentiation on board
if you feel like implementing layers that would be awesome :)
subversive-owl
@subversive-owl
sounds good! I haven't done much OpenCL but I've written parallelized linalg code before, so I can figure it out
Bernhard Schuster
@drahnr
sweet :)
don't hesitate to ask if you are stuck
subversive-owl
@subversive-owl
I'm installing the Xilinx ISE to see if that clears up the FPGA comm issues, then I can get started
Bernhard Schuster
@drahnr
and maybe open tickets and PRs as you go, so we can discuss things on a per-topic basis