Maximilian Goisser
@hobofan
I think it's a bit tricky to get right, since you can't really "test" whether a backend works by just using it; that would lead to a crash.
And I don't know enough about how to cautiously check at runtime whether the required libraries are available.
bklooste
@bklooste
Hi, did you guys resolve the performance issue #13? I can't find the rblas overhead test in trunk, only the old one: https://github.com/autumnai/collenchyma/tree/8b7a7aeeaf67482031da0fd712328f747be09e72/benches
bklooste
@bklooste
I keep thinking of using Collenchyma in my own project (https://github.com/bklooste/ANNe). I tried it initially but have since diverged; we seem to be on different paths. The key difference is my focus on robotics. I need: 1) feedback loops / directed graphs interacting with traditional code; 2) large applications with many interacting nets, not a single output; 3) no GPU, but multi-core SIMD; 4) support for different weight types, including f64 and bytes, e.g. AVX-512 can do 64 weight multiply-adds in a single op per core (and even common CPUs have >12 cores). At present it looks like a poor fit. I may still use it within layers, as it's useful to have CUDA for backprop training.
Maximilian Goisser
@hobofan
I think all or most of those features are important to us as well (we want to support embedded devices in the near future).
1/2: That should become much easier with Leaf 0.2. There, layers become much more composable, and it should be much easier to build dynamic, interacting nets.
Maximilian Goisser
@hobofan
3: I don't think that's a big concern at the moment. Generally, SIMD should be handled a layer below Collenchyma, e.g. by OpenBLAS. It only really becomes a concern when we provide native implementations ourselves, and there I'm not sure the current state of SIMD support in Rust is there yet.
4: The primary concern so far was getting the overall structure right, and sticking with f32 throughout made that a lot simpler. In Collenchyma we made f64 available at least where the underlying libraries support it. In Leaf, the datatype of the layers should become more generic in the future.
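For illustration, a minimal sketch of how the datatype shows up in Collenchyma's API, assuming the `SharedTensor` constructor shown in the collenchyma README (module paths and signatures may differ between versions):

```rust
extern crate collenchyma as co;

use co::backend::Backend;
use co::frameworks::Native;
use co::tensor::SharedTensor;

fn main() {
    // A plain host (Native) backend; no GPU required.
    let backend = Backend::<Native>::default().unwrap();

    // SharedTensor is generic over the element type, so f32 and f64 tensors
    // are created the same way. Whether a given plugin operation (e.g. in
    // collenchyma-nn) is implemented for f64 depends on the underlying library.
    let _x32 = SharedTensor::<f32>::new(backend.device(), &(1, 1, 3)).unwrap();
    let _x64 = SharedTensor::<f64>::new(backend.device(), &(1, 1, 3)).unwrap();
}
```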
bklooste
@bklooste
Sorry, I've been busy this week. Is there anywhere we can get a look at version 2.0? Also, how are you structured? Do you work for a company?
bklooste
@bklooste
I think it may be best if I use Collenchyma from my layers; I use my own buffer memory manager at the moment. With regard to the other points: 3) yes and no. SIMD allows very fine-grained operations (think 100 layers in a graph, many of them very small), but it has some interesting requirements to achieve good performance, especially the interleaving/loading of data into SIMD registers. And yes, at present SIMD in Rust is pretty much a C intrinsic call.
My biggest issue is that back propagation needs such wide access that it cuts through so many structures, especially with backends etc.
Maximilian Goisser
@hobofan
@vadixidav: we just released collenchyma-nn 0.3.0, which should solve your problems.
Marco Z
@ocramz
Hi all! I've just learnt about Autumn/Collenchyma. Really nice! I do have a couple of general questions (I'm getting lost in all the Framework/Plugin/Hardware/Backend terminology in the guide): how exactly do you abstract over hardware? I mean, there has to be a separate implementation for each computation backend, right?
At first I thought there'd be some sort of virtualization, but I must have misunderstood.
Geordon Worley
@vadixidav
@hobofan Thanks for telling me. I am thinking I may publish it to crates.io with only genetic algorithms for now. At some point I may add this, though.
Marco Z
@ocramz
Anyway, I didn't mean to sound negative or confrontational, but I'd love it if someone could give a simple explanation of the interface between, say, the BLAS implementation and the user (especially what is meant by "swapping the backend at runtime").
Maximilian Goisser
@hobofan
@ocramz Yes, there has to be a separate implementation for each backend.
However, we try to offload as much of the framework specifics as possible into the core of Collenchyma, so that plugins are as easy to write as possible and look very similar across different backends.
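Schematically, a plugin boils down to a backend-agnostic trait plus one implementation per backend, while the tensor and memory handling stays in Collenchyma's core. A hypothetical sketch (the trait name and signature here are made up, not the actual collenchyma-nn API):

```rust
extern crate collenchyma as co;

use co::backend::Backend;
use co::frameworks::Native;
use co::tensor::SharedTensor;

// Hypothetical plugin trait: the operation's interface is backend-agnostic.
pub trait Sigmoid {
    fn sigmoid(&self, x: &mut SharedTensor<f32>, result: &mut SharedTensor<f32>)
        -> Result<(), String>;
}

// Each backend supplies its own implementation; a CUDA backend would get an
// analogous `impl Sigmoid for Backend<Cuda>` backed by cuDNN.
impl Sigmoid for Backend<Native> {
    fn sigmoid(&self, _x: &mut SharedTensor<f32>, _result: &mut SharedTensor<f32>)
        -> Result<(), String> {
        // A real plugin would read the native memory, apply 1/(1 + e^-x)
        // element-wise, and write the result; omitted to keep the sketch short.
        Ok(())
    }
}
```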
Maximilian Goisser
@hobofan
With "swapping the backend at runtime" we mean that you don't have to decide on the backend at compile time and can decide which backend to run on at runtime, depending on e.g. user input
Marco Z
@ocramz
Ok, now I'm getting it, thank you :)
bklooste
@bklooste
OK, I see a big improvement in the way layers work. I think there is enough reason to switch now. Will comment on the Leaf Gitter.
Maximilian Goisser
@hobofan
@bklooste Glad to hear that ;)
Jonathan Reem
@reem
Is this the right place to ask code questions about the collenchyma codebase?
Michael Hirn (MJ)
@MichaelHirn
Yes :)
Richard Diamond
@DiamondLovesYou
Hey guys, FYI I've found the cause of autumnai/collenchyma-nn#45 and fixed it locally (in addition to another issue with the tests). I've got homework to do first, but after that I'll create a PR with the fix.
Michael Hirn (MJ)
@MichaelHirn
That's awesome. Looking forward to the PR :+1:
Richard Diamond
@DiamondLovesYou
I've also written n-dimensional convolution code for native. I finished it last weekend, but the tests incorrectly expected different results, so I had to borrow a friend's desktop (which has an nVidia GPU) to check what cuDNN generates; that's how I discovered my code was actually correct. Anyway, I decided to refactor the tests, so I need to retest on my friend's computer, which I can't do till Monday.
Michael Hirn (MJ)
@MichaelHirn
Yaaay! I was looking into conv for native a few days ago, but felt like it needed an interface for slicing to receive the proper spatial dimensions. So I thought we first had to implement the native memory of the SharedTensor via ndarray.
Really looking forward to seeing how you did it :clap:
Richard Diamond
@DiamondLovesYou
Thanks! W.r.t. slicing: such an interface would be a good idea anyway; having to manually handle strides was kind of annoying. I haven't used ndarray, but taking a quick look, I'd say that would be a good direction to take, as it provides the features I would have liked to have when I wrote this.
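For context, "manually handling strides" means computing flat buffer offsets from n-dimensional indices yourself, which is exactly the bookkeeping a crate like ndarray takes over. A small, library-independent sketch:

```rust
/// Row-major strides for a shape, e.g. shape [2, 3, 4] -> strides [12, 4, 1].
fn row_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut strides = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    strides
}

/// Flat offset of an n-dimensional index into a contiguous buffer.
fn flat_index(index: &[usize], strides: &[usize]) -> usize {
    index.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    let shape = [2, 3, 4];
    let strides = row_major_strides(&shape);
    assert_eq!(strides, vec![12, 4, 1]);
    // Element [1, 2, 3] lives at offset 1*12 + 2*4 + 3*1 = 23.
    assert_eq!(flat_index(&[1, 2, 3], &strides), 23);
}
```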
Richard Diamond
@DiamondLovesYou
Crap. I refactored the conv code to use generic types instead of copy-pasted code for each type (I've made the native framework generic, at least w.r.t. convolutions), but in my infinite wisdom I forgot about the CUDA backend.
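As an illustration of "generic instead of copy-pasted per type", here is a hypothetical 1-D convolution (cross-correlation, as NN libraries usually implement it) written once for any numeric type; it is not the actual collenchyma code:

```rust
use std::ops::{Add, Mul};

/// Valid (no padding) 1-D convolution, written once for any numeric type
/// instead of separate f32 and f64 copies.
fn conv1d<T>(input: &[T], kernel: &[T]) -> Vec<T>
where
    T: Copy + Default + Add<Output = T> + Mul<Output = T>,
{
    if kernel.is_empty() || kernel.len() > input.len() {
        return Vec::new();
    }
    let out_len = input.len() - kernel.len() + 1;
    (0..out_len)
        .map(|i| {
            kernel
                .iter()
                .enumerate()
                .fold(T::default(), |acc, (j, &k)| acc + input[i + j] * k)
        })
        .collect()
}

fn main() {
    // The same code path serves f32 and f64.
    let a: Vec<f32> = conv1d(&[1.0, 2.0, 3.0, 4.0], &[1.0, 0.5]);
    let b: Vec<f64> = conv1d(&[1.0, 2.0, 3.0, 4.0], &[1.0, 0.5]);
    println!("{:?} {:?}", a, b); // both: [2.0, 3.5, 5.0]
}
```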
bklooste
@bklooste
@DiamondLovesYou Richard, did this PR make it into 2.1?
Michael Hirn (MJ)
@MichaelHirn
No, it didn't. We didn't manage to review it yet. But thanks for reminding me; I am on it.
Philipp Dörfler
@phdoerfler
I just wrote this into an issue, but only afterwards realised there is a chat here. What do I have to do to get this to run on OS X? There is a pull request which was merged on April 10th. Is that in 0.0.8 yet?
Bernhard Schuster
@drahnr
@DiamondLovesYou do you have time for a chat?
You did some pretty great work there with the conv fixup.
Richard Diamond
@DiamondLovesYou
@drahnr Hey, sure! And thanks!
(Sorry just saw this!)
Bernhard Schuster
@drahnr
@DiamondLovesYou no worries.
I would like to have a quick chat with you about Collenchyma this week if you are interested :)
Mostly about the design decisions that were made, and whether you are interested in pushing it further.
bright-star
@bright-star
Hi all, are there any FPGA-specific backend plugins implemented or under development?
If not, I am interested in putting one together
bright-star
@bright-star
I see in https://github.com/autumnai/collenchyma-nn/blob/master/README.md#provided-operations that none of the OpenCL operations are implemented. Would that be a good place to start?
Bernhard Schuster
@drahnr
I also did some cleanup work there.
And yes, starting to implement the features as OpenCL kernels would be a great start!
One hint would be to look into dual numbers for differentiation, to make that as pain-free as possible, though it's not strictly necessary (see the sketch below).
Whatever you want to start with, I'd be pleased to review pull requests.
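As background on the dual-number hint: forward-mode differentiation carries a derivative alongside each value, so a kernel written generically over the number type yields gradients essentially for free. A minimal, self-contained sketch (not Collenchyma code):

```rust
use std::ops::{Add, Mul};

/// A dual number a + b*ε with ε² = 0: `re` is the value, `du` the derivative.
#[derive(Clone, Copy, Debug)]
struct Dual {
    re: f32,
    du: f32,
}

impl Add for Dual {
    type Output = Dual;
    fn add(self, rhs: Dual) -> Dual {
        Dual { re: self.re + rhs.re, du: self.du + rhs.du }
    }
}

impl Mul for Dual {
    type Output = Dual;
    fn mul(self, rhs: Dual) -> Dual {
        // The product rule falls out of ε² = 0.
        Dual {
            re: self.re * rhs.re,
            du: self.re * rhs.du + self.du * rhs.re,
        }
    }
}

fn main() {
    // f(x) = x*x + x, evaluated at x = 3 with derivative seed 1.
    let x = Dual { re: 3.0, du: 1.0 };
    let y = x * x + x;
    // Value 12, derivative f'(3) = 2*3 + 1 = 7.
    println!("f(3) = {}, f'(3) = {}", y.re, y.du);
}
```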
Bernhard Schuster
@drahnr
also don't hesitate to ask questions and discuss stuff