subversive-owl
@subversive-owl
looking at
extern "C" {
    pub fn cudnnConvolutionBackwardFilter(
        handle: cudnnHandle_t,
        alpha: *const ::libc::c_void,
        xDesc: cudnnTensorDescriptor_t,
        x: *const ::libc::c_void,
        dyDesc: cudnnTensorDescriptor_t,
        dy: *const ::libc::c_void,
        convDesc: cudnnConvolutionDescriptor_t,
        algo: cudnnConvolutionBwdFilterAlgo_t,
        workSpace: *mut ::libc::c_void,
        workSpaceSizeInBytes: usize,
        beta: *const ::libc::c_void,
        dwDesc: cudnnFilterDescriptor_t,
        dw: *mut ::libc::c_void,
    ) -> cudnnStatus_t;
}
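For orientation, a hedged gloss of the parameter roles in that call, taken from the general cuDNN documentation for cudnnConvolutionBackwardFilter rather than anything rust-cudnn adds on top:

// cudnnConvolutionBackwardFilter computes dw = alpha * grad_filter(x, dy) + beta * dw
//   x  / xDesc   : input tensor from the forward pass
//   dy / dyDesc  : gradient flowing back from the layer output
//   dw / dwDesc  : output, the gradient with respect to the filter weights
//   alpha, beta  : scaling factors used to blend the new gradient into dw
//   workSpace    : scratch memory whose required size depends on the chosen algo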
subversive-owl
@subversive-owl
this is in rust-cudnn
Bernhard Schuster
@drahnr
the -sys version?
bindgen -> -sys
for the "normal" rust-cudnn
it is manually wrapped
subversive-owl
@subversive-owl
ah so at that point it's all calls directly to cudnn the external lib, got it
subversive-owl
@subversive-owl
ok, i feel confident about the backprop algo, but i'm unsure what the signature is for the convolution ops:
fn convolution_grad_filter(&self,
                           src_data: &SharedTensor<T>,
                           dest_diff: &SharedTensor<T>,
                           filter_diff: &mut SharedTensor<T>,
                           workspace: &mut SharedTensor<u8>,
                           config: &Self::CC)
                           -> Result<(), ::co::error::Error> {
    unimplemented!()
}

fn convolution_grad_data(&self,
                         filter: &SharedTensor<T>,
                         x_diff: &SharedTensor<T>,
                         result_diff: &mut SharedTensor<T>,
                         workspace: &mut SharedTensor<u8>,
                         config: &Self::CC)
                         -> Result<(), ::co::error::Error> {
    unimplemented!()
}
subversive-owl
@subversive-owl
what am i working with here?
Bernhard Schuster
@drahnr
_grad_filter adjusts the filter weights
_grad_data adjusts the input weights
is that what you mean?
Tbh the naming convention of the variables is a bit confusing and I need to look them up again
I think I commented the trait definition, but let me double check
subversive-owl
@subversive-owl
hmm according to doc/coaster_nn/trait.Convolution.html it's not :(
what are workspace and config intended to be used for?
also, is a "filter" here as indicated in filter_diff the weights on a hidden layer?
Bernhard Schuster
@drahnr
workspace is just for intermediate calculations
this is mostly for cuda, so you can omit that
filter_diff yields the adjustments of the learned kernel
subversive-owl
@subversive-owl
so the weight deltas?
Bernhard Schuster
@drahnr
so this is just another name for the weights, I guess, specific to the convolutional layer
subversive-owl
@subversive-owl
hmm so the hidden layer error is calculated once in convolution_grad_filter?
Bernhard Schuster
@drahnr
yes, as far as I understand, I'd have to look up the details but today was just too short to get around to it
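For reference, a hedged sketch of the two gradients under discussion, for a single-channel convolution with stride 1 and no padding, where the forward pass computes $y = x \star W$ (cross-correlation) with loss $L$:

\frac{\partial L}{\partial W} = x \star \frac{\partial L}{\partial y}
\qquad
\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y} \ast_{\mathrm{full}} W

Here $\star$ denotes cross-correlation and $\ast_{\mathrm{full}}$ a full convolution; the first quantity is what convolution_grad_filter fills into filter_diff, the second what convolution_grad_data fills into result_diff.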
subversive-owl
@subversive-owl
no problem
i'll try and sketch something out in code at least so we can compare later
Bernhard Schuster
@drahnr
:thumbsup:
subversive-owl
@subversive-owl
can i interact with SharedTensor using index ops?
subversive-owl
@subversive-owl
i want to do essentially
for (wgt, dst) in filter_diff.iter().zip(dest_diff.iter()) {

}
but i see that it doesn't implement anything for that
and i'm not sure which Device to reference to use the read/write methods
Bernhard Schuster
@drahnr
Index ops would be neat, but difficult to implement due to the varying dimensionality
self should have a bound function called device()

macro_rules! read {
    ($x:ident, $t:ident, $slf:ident) => (
        $x.read($slf.device()).unwrap().as_slice::<$t>()
    )
}

macro_rules! read_write {
    ($x:ident, $t: ident, $slf:ident) => (
        $x.read_write($slf.device()).unwrap().as_mut_slice::<$t>()
    )
}

macro_rules! write_only {
    ($x:ident, $t: ident, $slf:ident) => (
        $x.write_only($slf.device()).unwrap().as_mut_slice::<$t>()
    )
}
in coaster-nn under src/frameworks/native/helper.rs
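A hedged sketch of how those macros could drive a native convolution_grad_filter, using the signature from the trait shown earlier; the actual kernel-gradient accumulation depends on config and is only indicated as a comment:

fn convolution_grad_filter(&self,
                           src_data: &SharedTensor<T>,
                           dest_diff: &SharedTensor<T>,
                           filter_diff: &mut SharedTensor<T>,
                           workspace: &mut SharedTensor<u8>,
                           config: &Self::CC)
                           -> Result<(), ::co::error::Error> {
    let x  = read!(src_data, T, self);          // forward-pass input as &[T]
    let dy = read!(dest_diff, T, self);         // gradient from the layer output as &[T]
    let dw = write_only!(filter_diff, T, self); // gradient w.r.t. the filter as &mut [T]
    // for each filter element, accumulate the products of the overlapping
    // entries of x and dy (the x ⋆ dy from the sketch above) into dw,
    // using the strides/padding recorded in `config`
    let _ = (x, dy, dw, workspace, config);     // silence unused warnings in this sketch
    Ok(())
}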
subversive-owl
@subversive-owl
got it, thanks
subversive-owl
@subversive-owl
ok, i have something i can start working with at https://github.com/spearow/coaster-nn/tree/wip-native
have to prep for a coding test tomorrow, but will be back to it on the weekend
Bernhard Schuster
@drahnr
Alright, I'll check it out tomorrow night
Bernhard Schuster
@drahnr
Still a bit to go; if you want, I can pass you my autodiff-based notes. It is a bit tedious to calculate it parametrized over the dimensions of the tensor...
subversive-owl
@subversive-owl
oh yea, please do
Bernhard Schuster
@drahnr
I'll have plenty of time tomorrow night :)
subversive-owl
@subversive-owl
:thumbsup:
az8
@AZon8
hey, just so you know, the example in the readme doesn't compile.
Bernhard Schuster
@drahnr
of coaster?
@AZon8 ^^^^
az8
@AZon8
yeah. Adding coaster = "*" and coaster-nn = "*" to Cargo.toml and copying the example gives me the error "SharedTensor::<f32>::new(backend.device(), &(1, 1, 3)).unwrap(); expected 1 param" and the error "no method named add_device found for type `co::SharedTensor`"
Bernhard Schuster
@drahnr
I'll dig into it within the next few days
Bernhard Schuster
@drahnr
@az8 essentially add_device got replaced by read{,_only}, write{,_only}
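A minimal sketch of what the README snippet becomes after that change, going only by the error messages above and the accessor names used earlier in this chat; the exact constructor signature and prelude path are assumptions, not the official fix:

// Assumptions: SharedTensor::new now takes only the shape (per the
// "expected 1 param" error), and host access goes through
// write_only()/read() with an explicit device instead of add_device().
use coaster::prelude::*; // assumed prelude path

fn main() {
    let backend = Backend::<Native>::default().unwrap();
    let mut x = SharedTensor::<f32>::new(&(1, 1, 3));
    // write_only hands back (possibly uninitialized) memory on that device
    x.write_only(backend.device())
        .unwrap()
        .as_mut_slice::<f32>()
        .copy_from_slice(&[1.0, 2.0, 3.0]);
    // read returns the now-initialized memory for inspection
    let values = x.read(backend.device()).unwrap().as_slice::<f32>();
    println!("{:?}", values);
}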
Bernhard Schuster
@drahnr
@az8 I did a first fix, but a hardware issue prevents me from digging deeper into this right now, will have to wait until next weekend