SemanticBeeng
@SemanticBeeng
I do not know how automatic differentiation works (or differs) on multiple variables as opposed to one - I was hoping you would know, and know how the DL4S engine supports this
Will dig further then
杨博 (Yang Bo)
@Atry
I don't know what DL4S is
SemanticBeeng
@SemanticBeeng
sorry, DeepLearning.scala
"Madness: a package for Multivariate Automatic Differentiation" : https://cran.r-project.org/web/packages/madness/vignettes/introducing_madness.pdf
an example of an implementation explicitly built for multiple variables
anyway, will dig
杨博 (Yang Bo)
@Atry

https://static.javadoc.io/com.thoughtworks.deeplearning/deeplearning_2.11/2.0.1/com/thoughtworks/deeplearning/DeepLearning.html

Common differentiable types that support DeepLearning are:

  • Float, FloatWeight or FloatLayer
  • Double, DoubleWeight or DoubleLayer
  • INDArray, INDArrayWeight or INDArrayLayer

SemanticBeeng
@SemanticBeeng
Aha, thanks
SemanticBeeng
@SemanticBeeng
So the question really is how/if nd4j supports multivariate/vector calculus
杨博 (Yang Bo)
@Atry
nd4j is not differentiable. You can create plugins of new differentiable layers from raw nd4j operators. See the CNN plugin for an example.
SemanticBeeng
@SemanticBeeng
the gist of my ignorance is the difference between how automatic differentiation works with a single variable vs. a vector of variables
was hoping to find a code example here
but will have to dig deeper to learn. thanks
杨博 (Yang Bo)
@Atry
Actually no difference here.
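Yang Bo's "no difference" point can be illustrated with a minimal reverse-mode sketch in plain Scala (hypothetical illustration code, not the DeepLearning.scala API): each input simply accumulates its own adjoint during the backward pass, so nothing structural changes when there are several variables instead of one.

```scala
// Minimal reverse-mode AD over an expression tree (illustration only;
// NOT the DeepLearning.scala API). Each node accumulates its own adjoint
// when the output's adjoint is propagated backwards, so inputs are handled
// identically whether there is one variable or many.
final class Var(val value: Double) {
  var grad: Double = 0.0                 // adjoint, filled in by backprop
  var back: Double => Unit = _ => ()     // propagates an incoming adjoint

  def +(that: Var): Var = {
    val out = new Var(value + that.value)
    out.back = g => {
      this.grad += g; this.back(g)
      that.grad += g; that.back(g)
    }
    out
  }

  def *(that: Var): Var = {
    val out = new Var(value * that.value)
    out.back = g => {
      this.grad += g * that.value; this.back(g * that.value)
      that.grad += g * this.value; that.back(g * this.value)
    }
    out
  }
}

object MultivariateDemo extends App {
  val x = new Var(3.0)
  val y = new Var(4.0)
  val f = x * y + x                      // f(x, y) = x * y + x
  f.back(1.0)                            // seed the output adjoint with 1
  println(x.grad)                        // df/dx = y + 1 = 5.0
  println(y.grad)                        // df/dy = x     = 3.0
}
```

Gradients with respect to x and y fall out of the same single backward pass; a "vector of variables" is just several leaves accumulating adjoints in parallel.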
SemanticBeeng
@SemanticBeeng
thank you, good to know.
Koen Dejonghe
@koen-dejonghe
Hi, I created Scorch, a deep learning framework in Scala inspired by Pytorch. It has automatic differentiation built in and follows an imperative coding style. Scorch lives here: https://github.com/botkop/scorch. I value your feedback. Thank you.
杨博 (Yang Bo)
@Atry
@koen-dejonghe It seems too complicated to me
Koen Dejonghe
@koen-dejonghe
@Atry Thank you for the feedback. Could you give me an indication or an example of what exactly you think is too complex? Much appreciated.
杨博 (Yang Bo)
@Atry
@koen-dejonghe The Module/subModules concepts are an implementation detail. A deep learning framework should be able to create a computational graph from an arbitrary differentiable expression, instead of forcing users to create them explicitly.
We actually had concepts similar to Module/subModules in pre-1.0 versions of DeepLearning.scala.
Koen Dejonghe
@koen-dejonghe
@Atry You don't need a Module to create a computation graph. All you need are Variables and Functions, and you can backprop through them. See https://github.com/botkop/scorch/blob/master/src/test/scala/scorch/autograd/AutoGradSpec.scala for examples.
Modules are just a convenience class for collecting parameters and wrapping Functions.
杨博 (Yang Bo)
@Atry
Ah, I see
The pytorch-like API looks more concise than Deeplearning4j. Great work!
Koen Dejonghe
@koen-dejonghe
Thanks.
DL4J, like TF, has a symbolic programming style, while PyTorch/Scorch have an imperative coding style.
Which I think is much more intuitive
mghildiy
@mghildiy
Hi
I cloned Compute.scala
I tried to import it into IntelliJ, but got issues
Scala plugin version is: 2017.2.13
杨博 (Yang Bo)
@Atry
Try to upgrade?
mghildiy
@mghildiy
Upgrade what?
I just upgraded IntelliJ
杨博 (Yang Bo)
@Atry
Good
mghildiy
@mghildiy
and the Scala plugin is the latest one too, as shown by IntelliJ
but I'm still not able to import the project
杨博 (Yang Bo)
@Atry
The latest Intellij should be 2018.x
mghildiy
@mghildiy
ok
Mine is: 2018.1
杨博 (Yang Bo)
@Atry
You can reopen https://youtrack.jetbrains.com/issue/SCL-12901 if it still does not work in 2018.1
mghildiy
@mghildiy
ok
Anthony Platanios
@eaplatanios
I don't know if this project is still alive, but if it is, would you guys be interested in adding support for TensorFlow Scala?
doofin
@doofin
what kind of reverse-mode AD does this project use? Is it implemented with tape-based reverse AD?
杨博 (Yang Bo)
@Atry
Similar to that, but not an actual linear tape
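For readers following along, this is what a textbook linear tape looks like, sketched in plain Scala for contrast (an illustration only, not DeepLearning.scala's actual internals, which Yang Bo notes are only similar): the forward pass records each operation and its local partial derivatives on a tape, and the backward pass replays the tape in reverse, accumulating adjoints.

```scala
import scala.collection.mutable.ArrayBuffer

// Textbook tape-based reverse-mode AD (a sketch, NOT DeepLearning.scala's
// internals). Each tape entry stores the indices of its inputs and the
// local partial derivatives recorded during the forward pass.
object Tape {
  private final case class Node(deps: Array[Int], weights: Array[Double])
  private val tape = ArrayBuffer.empty[Node]
  private val values = ArrayBuffer.empty[Double]

  private def record(v: Double, deps: Array[Int], ws: Array[Double]): Int = {
    tape += Node(deps, ws)
    values += v
    tape.length - 1
  }

  def variable(v: Double): Int = record(v, Array.empty, Array.empty)

  def add(a: Int, b: Int): Int =
    record(values(a) + values(b), Array(a, b), Array(1.0, 1.0))

  def mul(a: Int, b: Int): Int =
    record(values(a) * values(b), Array(a, b), Array(values(b), values(a)))

  // Backward pass: walk the tape in reverse, pushing each node's adjoint
  // to its dependencies, scaled by the recorded local derivatives.
  def gradients(output: Int): Array[Double] = {
    val grad = Array.fill(tape.length)(0.0)
    grad(output) = 1.0
    for (i <- tape.indices.reverse; j <- tape(i).deps.indices)
      grad(tape(i).deps(j)) += grad(i) * tape(i).weights(j)
    grad
  }
}

object TapeDemo extends App {
  val x = Tape.variable(3.0)
  val y = Tape.variable(4.0)
  val f = Tape.mul(Tape.add(x, y), y) // f(x, y) = (x + y) * y
  val g = Tape.gradients(f)
  println(g(x))                       // df/dx = y      = 4.0
  println(g(y))                       // df/dy = x + 2y = 11.0
}
```

Because the tape is already a topological order of the computation, a single reverse sweep suffices even when a variable (here y) is used in several places.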
doofin
@doofin
thanks!
doofin
@doofin
Hi, is there an up-to-date sbt tutorial? https://github.com/izhangzhihao/deeplearning-tutorial is too old