I keep thinking of using Collenchyma in my own project (https://github.com/bklooste/ANNe). I tried initially but have diverged; we seem to be on different paths. The key difference is my focus on robotics. I need: 1) feedback loops / directed graphs interacting with traditional code; 2) support for large apps with many interacting nets, not a single output; 3) no GPU, but multi-core SIMD; 4) different weight types, including f64 and bytes, e.g. AVX-512 can do 64 weight multiply-adds in a single op per core (and even common CPUs have >12 cores). At present it looks like a poor fit. I may use it within layers, as it's useful to have CUDA for backprop training.
Maximilian Goisser
@hobofan
I think all/most of those features are also important to us (we want to support embedded devices in the near future)
1/2: That should become much easier with Leaf 0.2. There, layers become much more composable, and it should be much easier to build dynamic, interacting nets.
Maximilian Goisser
@hobofan
3: I don't think that's a big concern (at the moment). Generally, SIMD should be handled a layer below Collenchyma, by e.g. OpenBLAS. It should only really become a concern when we provide Native implementations ourselves, but there I am not sure the current state of SIMD support in Rust is ready yet.
4: The primary concern ATM was getting the overall structure right, and just sticking with f32 throughout made that a lot simpler. In Collenchyma we made at least f64 available where the underlying libraries supported it. In Leaf the datatype of the Layers should become more generic in the future.
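A minimal sketch of what a more generic layer datatype could look like, using plain trait bounds. This is our own illustration, not the Leaf API; `weighted_sum` is a hypothetical helper:

```rust
use std::ops::{Add, Mul};

// Hypothetical helper: a weighted sum that works for f32, f64, or even
// integer "byte" weights, as long as the type supports + and *.
fn weighted_sum<T>(weights: &[T], inputs: &[T], zero: T) -> T
where
    T: Copy + Add<Output = T> + Mul<Output = T>,
{
    weights
        .iter()
        .zip(inputs)
        .fold(zero, |acc, (&w, &x)| acc + w * x)
}
```

The same code path then serves f32 (the current default), f64 where the underlying library supports it, and smaller integer types.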
bklooste
@bklooste
Sorry, been busy this week. Is there anywhere we can get a look at version 2.0? Also, what is your structure? Do you work for a company?
bklooste
@bklooste
I think it may be best if I use Collenchyma from my layers; I use my own buffer memory manager at the moment. With regard to the other points: 3) yes and no. SIMD allows very fine-grained operations (think 100 layers in a graph, many very small), but it has some interesting requirements for achieving good performance, especially the interleaving/loading of data into SIMD registers. And yes, at present SIMD in Rust is pretty much a C intrinsic call.
My biggest issue is how backpropagation needs such wide access that it just goes through so many structures, especially with backends etc.
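For a flavor of that C-style intrinsic call from Rust: an AVX/FMA multiply-accumulate over f32 slices with a runtime feature check and a scalar fallback. This is a sketch of the general pattern under `std::arch`, not code from either project; `fma_accumulate` is our own name:

```rust
// acc[i] += weights[i] * inputs[i], vectorized where the CPU allows it.
fn fma_accumulate(weights: &[f32], inputs: &[f32], acc: &mut [f32]) {
    assert_eq!(weights.len(), inputs.len());
    assert_eq!(weights.len(), acc.len());
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") && is_x86_feature_detected!("fma") {
            unsafe { fma_accumulate_avx(weights, inputs, acc) };
            return;
        }
    }
    // Scalar fallback for CPUs (or architectures) without AVX/FMA.
    for i in 0..acc.len() {
        acc[i] += weights[i] * inputs[i];
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx,fma")]
unsafe fn fma_accumulate_avx(weights: &[f32], inputs: &[f32], acc: &mut [f32]) {
    use std::arch::x86_64::*;
    let n = acc.len();
    let chunks = n / 8; // 8 f32 lanes per 256-bit AVX register
    for c in 0..chunks {
        let i = c * 8;
        let w = _mm256_loadu_ps(weights.as_ptr().add(i));
        let x = _mm256_loadu_ps(inputs.as_ptr().add(i));
        let a = _mm256_loadu_ps(acc.as_ptr().add(i));
        // acc = w * x + acc, fused into one instruction per 8 lanes.
        _mm256_storeu_ps(acc.as_mut_ptr().add(i), _mm256_fmadd_ps(w, x, a));
    }
    // Handle the tail that doesn't fill a full register.
    for i in chunks * 8..n {
        acc[i] += weights[i] * inputs[i];
    }
}
```

The register loading and tail handling are exactly the interleaving bookkeeping mentioned above.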
Maximilian Goisser
@hobofan
@vadixidav : we just released collenchyma-nn 0.3.0 which should solve your problems
Marco Z
@ocramz
Hi all! I've just learnt about Autumn/Collenchyma. Really nice! I do have a couple of questions of a general character (getting lost among all the Framework/Plugins/Hardware/Backend terminology in the guide): how exactly do you abstract over hardware? I mean, there has to be a separate implementation for each computation backend, amirite?
at first I thought there'd be some sort of virtualization but I must have misunderstood
Geordon Worley
@vadixidav
@hobofan Thanks for telling me. I am thinking I may publish it to crates.io only with genetic algorithms. At some point I may add this though.
Marco Z
@ocramz
Anyway, I didn't mean to sound negative or confrontational, but I'd love it if someone could provide a simple explanation of the interface between, let's say, the BLAS implementation and the user (especially what is meant by "swapping the backend at runtime").
Maximilian Goisser
@hobofan
@ocramz Yes, there has to be a separate implementation for each backend.
However, we try to offload as much of the framework's specifics as possible into the core of Collenchyma, so that plugins are as easy to write as possible and should look very similar between different backends.
Maximilian Goisser
@hobofan
With "swapping the backend at runtime" we mean that you don't have to decide on the backend at compile time; you can decide which backend to run on at runtime, depending on e.g. user input.
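As an illustration of the idea (our own toy types, not the actual Collenchyma API), a trait-object sketch of picking a backend from a runtime value:

```rust
// Toy backend abstraction; in Collenchyma the equivalent roles are
// played by its Backend/Framework types.
trait ComputeBackend {
    fn name(&self) -> &'static str;
    fn dot(&self, a: &[f32], b: &[f32]) -> f32;
}

struct NativeBackend;

impl ComputeBackend for NativeBackend {
    fn name(&self) -> &'static str { "native" }
    fn dot(&self, a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

// The choice happens at runtime, e.g. from a CLI flag or config file;
// a CUDA-backed implementation would hide behind the same trait.
fn pick_backend(prefer_gpu: bool) -> Box<dyn ComputeBackend> {
    if prefer_gpu {
        // This sketch has no GPU impl, so fall back to native.
        Box::new(NativeBackend)
    } else {
        Box::new(NativeBackend)
    }
}
```

Callers only ever see `Box<dyn ComputeBackend>`, so the same downstream code runs regardless of which backend the user selected.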
Marco Z
@ocramz
Ok, now I'm getting it, thank you :)
bklooste
@bklooste
OK, I see a big improvement in the way layers work. I think there is enough reason to switch now. Will comment on the Leaf Gitter.
Maximilian Goisser
@hobofan
@bklooste Glad to hear that ;)
Jonathan Reem
@reem
Is this the right place to ask code questions about the collenchyma codebase?
Michael Hirn (MJ)
@MichaelHirn
Yes :)
Richard Diamond
@DiamondLovesYou
Hey guys, FYI I've found the cause of autumnai/collenchyma-nn#45 and fixed it locally (along with another issue with the tests). I've got homework to do first, but after that I'll create a PR for the fix.
Michael Hirn (MJ)
@MichaelHirn
That's awesome. Looking forward to the PR :+1:
Richard Diamond
@DiamondLovesYou
I've also created n-dimensional conv code for native. I finished it last weekend, but the tests incorrectly expected different results, so I had to borrow a friend's desktop (which has an NVIDIA GPU) to check what cuDNN generates. That's how I discovered my code was correct. Anyway, I decided to refactor the tests, and thus need to retest on my friend's computer, which I can't do till Monday.
Michael Hirn (MJ)
@MichaelHirn
Yaaay! I was looking into conv for native a few days ago, but felt like it needed an interface for slicing to receive the proper spatial dimensions. So I thought we first had to implement the native memory of the SharedTensor via ndarray.
Really looking forward to seeing how you did it :clap:
Richard Diamond
@DiamondLovesYou
Thanks! W.r.t. slicing: such an interface would be a good idea anyway; having to manually handle strides was kinda annoying. I haven't used ndarray, but taking a quick look, I'd say that would be a good direction to take, as it provides the features I would have liked to have when I wrote it.
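For a flavor of the stride bookkeeping being discussed, here is a naive "valid" 2D convolution over row-major f32 buffers. This is our own illustration of the manual-stride style, not the collenchyma-nn implementation:

```rust
// Valid (no padding, stride 1) 2D convolution.
// `input` is in_h x in_w and `kernel` is k_h x k_w, both row-major.
fn conv2d_valid(
    input: &[f32], in_h: usize, in_w: usize,
    kernel: &[f32], k_h: usize, k_w: usize,
) -> Vec<f32> {
    let out_h = in_h - k_h + 1;
    let out_w = in_w - k_w + 1;
    let mut out = vec![0.0f32; out_h * out_w];
    for oy in 0..out_h {
        for ox in 0..out_w {
            let mut acc = 0.0f32;
            for ky in 0..k_h {
                for kx in 0..k_w {
                    // Row-major stride: one input row is `in_w` floats,
                    // so element (y, x) lives at index y * in_w + x.
                    acc += input[(oy + ky) * in_w + (ox + kx)]
                         * kernel[ky * k_w + kx];
                }
            }
            out[oy * out_w + ox] = acc;
        }
    }
    out
}
```

A slicing interface (or ndarray's views) would replace the hand-written `y * in_w + x` index arithmetic above, which is exactly the annoying part.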
Richard Diamond
@DiamondLovesYou
Crap. I refactored the conv code to use generic types instead of copy-pasted code for each type (I've fixed the native framework to be generic, at least w.r.t. convolutions), but in my infinite wisdom I forgot about the Cuda backend.
bklooste
@bklooste
@DiamondLovesYou Richard, did this PR make it into 2.1?
Michael Hirn (MJ)
@MichaelHirn
No, it didn't. We haven't managed to review it yet. But thanks for reminding me. I am on it.
Philipp Dörfler
@phdoerfler
I just wrote that into an issue but only afterwards realised there is a chat here. What do I have to do to get this to run on OSX? There is a pull request which has been merged on April 10th. Is that in 0.0.8 yet?
Bernhard Schuster
@drahnr
@DiamondLovesYou do you have time for a chat?
you did some pretty great work there with the conv fixup
Richard Diamond
@DiamondLovesYou
@drahnr Hey, sure! And thanks!
(Sorry just saw this!)
Bernhard Schuster
@drahnr
@DiamondLovesYou no worries
I would like to have a quick chat with you about Collenchyma this week if you are interested :)
mostly about the design decisions that were made, and whether you are interested in pushing it further
bright-star
@bright-star
Hi all, are there any FPGA-specific backend plugins implemented or under development?