
subversive-owl
@subversive-owl
yes i am much less busy in a couple of months, then planning to devote many hours to it over the winter lull
Alexander Botev
@botev
is anyone aware of any existing code for general reductions in OpenCL/CUDA
Bernhard Schuster
@drahnr
@botev what level are you looking for? there are test cases for OpenCL like this https://github.com/ekondis/cl2-reduce-bench/blob/master/reduction_kernels.cl
There is a more educational slide with an example: http://web.engr.oregonstate.edu/~mjb/cs575/Handouts/opencl.reduction.2pp.pdf
ViennaCL implements a few things too, but their performance might not be what one would expect (at least from the benchmarks I've seen)
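The kernels linked above all follow the same work-group pattern; here is a minimal sketch of that tree reduction in plain Rust (a sequential stand-in for what each OpenCL work-item would do in parallel, not code from any of the linked projects):

```rust
// Tree reduction over a slice, mirroring the classic OpenCL work-group
// pattern: each step folds the upper half of the active range onto the
// lower half, halving the range until one partial sum remains.
fn tree_reduce(data: &[f32]) -> f32 {
    assert!(!data.is_empty());
    let mut buf = data.to_vec();
    let mut n = buf.len();
    while n > 1 {
        let half = (n + 1) / 2; // ceil, so odd tails are carried forward
        for i in 0..n / 2 {
            // in OpenCL, each of these adds is one work-item running in parallel
            buf[i] += buf[i + half];
        }
        n = half;
    }
    buf[0]
}

fn main() {
    let data: Vec<f32> = (1..=8).map(|v| v as f32).collect();
    println!("sum = {}", tree_reduce(&data)); // 36
}
```

On a GPU the same shape gives O(log n) steps per work-group, which is why all the linked implementations look so similar.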
Alexander Botev
@botev
I think those look fine to me
but ill need to read them in more detail
Bernhard Schuster
@drahnr
Quick update: after dealing with a lot of CI / networking-related issues, I am finally back to actually working on juice itself rather than the meta things
Note: https://ci.spearow.io/teams/spearow/pipelines/juice still shows errors due to OpenCL allocations failing with the nvidia driver, yet to be investigated
subversive-owl
@subversive-owl
(anxiously waiting to see pureos in the pipeline)
Bernhard Schuster
@drahnr
Compilation is easy to integrate; test execution will have to wait until I figure out a way to pin the in-container driver/ABI/API versions to the host driver/ABI/API
I did not forget about it :smile:
it = pureos
subversive-owl
@subversive-owl
:D
subversive-owl
@subversive-owl
taking a swing at fashion-mnist this weekend, despite being busy all tomorrow day
Bernhard Schuster
@drahnr
:+1:
subversive-owl
@subversive-owl
ahh....the fashion-mnist data is in IDX
sigh
Bernhard Schuster
@drahnr
IDX?
Is there a rust crate for that?
subversive-owl
@subversive-owl
yup, plugged it in
Bernhard Schuster
@drahnr
I just saw it :+1:
subversive-owl
@subversive-owl
how is the cudnn stuff generated?
looking at
extern "C" {
    pub fn cudnnConvolutionBackwardFilter(
        handle: cudnnHandle_t,
        alpha: *const ::libc::c_void,
        xDesc: cudnnTensorDescriptor_t,
        x: *const ::libc::c_void,
        dyDesc: cudnnTensorDescriptor_t,
        dy: *const ::libc::c_void,
        convDesc: cudnnConvolutionDescriptor_t,
        algo: cudnnConvolutionBwdFilterAlgo_t,
        workSpace: *mut ::libc::c_void,
        workSpaceSizeInBytes: usize,
        beta: *const ::libc::c_void,
        dwDesc: cudnnFilterDescriptor_t,
        dw: *mut ::libc::c_void,
    ) -> cudnnStatus_t;
}
subversive-owl
@subversive-owl
this is in rust-cudnn
Bernhard Schuster
@drahnr
the -sys version?
bindgen -> -sys
for the "normal" rust-cudnn
it is manually wrapped
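The two-layer split described here can be sketched like so (all names below are hypothetical illustrations, not actual rust-cudnn items): the `-sys` crate is raw bindgen output with C pointers and status codes, and the "normal" crate wraps each call in a safe, error-checked API.

```rust
// Hypothetical stand-in for a bindgen-generated -sys function: raw
// pointers in, C-style status code out (0 = success). In the real
// rust-cudnn-sys this would be an `extern "C"` item linking to libcudnn.
unsafe fn raw_scale_sys(alpha: f32, x: *const f32, y: *mut f32, n: usize) -> i32 {
    if x.is_null() || y.is_null() {
        return 1; // mimic a CUDNN_STATUS_BAD_PARAM-style error
    }
    for i in 0..n {
        *y.add(i) = alpha * *x.add(i);
    }
    0
}

// The hand-written safe layer: slices instead of raw pointers,
// Result instead of status codes, lengths checked before the call.
fn scale(alpha: f32, x: &[f32], y: &mut [f32]) -> Result<(), String> {
    if x.len() != y.len() {
        return Err("length mismatch".into());
    }
    let status = unsafe { raw_scale_sys(alpha, x.as_ptr(), y.as_mut_ptr(), x.len()) };
    if status == 0 {
        Ok(())
    } else {
        Err(format!("cudnn-style status code {}", status))
    }
}

fn main() {
    let x = vec![1.0f32, 2.0, 3.0];
    let mut y = vec![0.0f32; 3];
    scale(2.0, &x, &mut y).unwrap();
    println!("{:?}", y); // [2.0, 4.0, 6.0]
}
```

So every call in the safe crate does eventually bottom out in a direct FFI call into libcudnn, exactly as noted above.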
subversive-owl
@subversive-owl
ah so at that point it's all calls directly to cudnn the external lib, got it
subversive-owl
@subversive-owl
ok, i feel confident about the backprop algo, but i'm unsure what the signature is for the convolution ops:
fn convolution_grad_filter(&self,
                           src_data: &SharedTensor<T>,
                           dest_diff: &SharedTensor<T>,
                           filter_diff: &mut SharedTensor<T>,
                           workspace: &mut SharedTensor<u8>,
                           config: &Self::CC)
                           -> Result<(), ::co::error::Error> {
    unimplemented!()
}

fn convolution_grad_data(&self,
                         filter: &SharedTensor<T>,
                         x_diff: &SharedTensor<T>,
                         result_diff: &mut SharedTensor<T>,
                         workspace: &mut SharedTensor<u8>,
                         config: &Self::CC)
                         -> Result<(), ::co::error::Error> {
    unimplemented!()
}
subversive-owl
@subversive-owl
what am i working with here?
Bernhard Schuster
@drahnr
_grad_filter adjusts the filter weights
_grad_data adjusts the input weights
is that what you mean?
Tbh the naming convention of the variables is a bit confusing and I need to look them up again
I think I commented the trait definition, but let me double check
subversive-owl
@subversive-owl
hmm according to doc/coaster_nn/trait.Convolution.html it's not :(
what are workspace and config intended to be used for?
also, is a "filter" here as indicated in filter_diff the weights on a hidden layer?
Bernhard Schuster
@drahnr
workspace is just for intermediate calculations
this is mostly for cuda, so you can omit that
filter_diff yields the adjustments of the learned kernel
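The split between the two gradients can be shown on a toy 1D case (a CPU sketch under my own naming, not the coaster-nn implementation; no workspace is needed here since nothing is offloaded to CUDA):

```rust
// Forward pass of a 1D "valid" convolution: y[i] = sum_k w[k] * x[i + k].
fn conv1d(x: &[f32], w: &[f32]) -> Vec<f32> {
    (0..x.len() - w.len() + 1)
        .map(|i| w.iter().enumerate().map(|(k, wk)| wk * x[i + k]).sum())
        .collect()
}

// grad_filter: gradient of the loss w.r.t. the kernel weights (what
// `filter_diff` holds): dL/dw[k] = sum_i dy[i] * x[i + k].
fn conv1d_grad_filter(x: &[f32], dy: &[f32], klen: usize) -> Vec<f32> {
    (0..klen)
        .map(|k| dy.iter().enumerate().map(|(i, d)| d * x[i + k]).sum())
        .collect()
}

// grad_data: gradient of the loss w.r.t. the *input*, which becomes the
// error signal propagated to the previous layer:
// dx[i + k] accumulates w[k] * dy[i].
fn conv1d_grad_data(dy: &[f32], w: &[f32], xlen: usize) -> Vec<f32> {
    let mut dx = vec![0.0; xlen];
    for (i, d) in dy.iter().enumerate() {
        for (k, wk) in w.iter().enumerate() {
            dx[i + k] += d * wk;
        }
    }
    dx
}

fn main() {
    let x = [1.0, 2.0, 3.0, 4.0];
    let w = [1.0, 1.0];
    let dy = [1.0, 1.0, 1.0]; // pretend upstream gradient
    println!("y  = {:?}", conv1d(&x, &w));
    println!("dw = {:?}", conv1d_grad_filter(&x, &dy, w.len()));
    println!("dx = {:?}", conv1d_grad_data(&dy, &w, x.len()));
}
```

So `_grad_filter` produces the kernel update and `_grad_data` produces the backpropagated error, matching the two trait methods above.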
subversive-owl
@subversive-owl
so the weight deltas?
Bernhard Schuster
@drahnr
so this is just another name for weights I guess specific to the convolutional layer
subversive-owl
@subversive-owl
hmm so the hidden layer error is calculated once in convolution_grad_filter?
Bernhard Schuster
@drahnr
yes, as far as I understand, I'd have to look up the details but today was just too short to get around to it
subversive-owl
@subversive-owl
no problem
i'll try and sketch something out in code at least so we can compare later