杨博 (Yang Bo)
@Atry
Yes, refactoring in Jupyter is painful.
SemanticBeeng
@SemanticBeeng
yeah...
see here how Specs2 combines text with executable specs and snips out the parts of the code that are not to be shown in the generated guide/HTML: https://github.com/SemanticBeeng/fpinscalabe/blob/master/guide/src/test/scala/org/specs2/thirdparty/blogs/herdingcats/Checking_laws_with_Discipline.scala#L110-L112
杨博 (Yang Bo)
@Atry
However, incremental programming in a notebook is good.
SemanticBeeng
@SemanticBeeng
yes, that has its important place
SemanticBeeng
@SemanticBeeng
I will need to study all code, including the notebooks
would you be willing to look at Specs2 and how I use it?
then I will try to implement models similar to DL4J and PyTorch and .. get hurt :-) at which point I will come bug you about how they map from a functionality point of view...
杨博 (Yang Bo)
@Atry
If you are using Specs2, you may need to create a separate repository for your plugin instead of a Github Gist, because Specs2 requires a little more build configuration to generate HTML.
Good luck and have fun! :smile:
SemanticBeeng
@SemanticBeeng
thank you for all
杨博 (Yang Bo)
@Atry

Let me know when your plugin is done. You can create a pull request on https://github.com/ThoughtWorksInc/DeepLearning.scala-website/blob/master/plugins.md to add your plugin to that page.

Your plugin could be either a Github Gist (with a README.ipynb) or a library (with some BDD tests).

SemanticBeeng
@SemanticBeeng
oki
SemanticBeeng
@SemanticBeeng
is it a goal to introduce a layer of abstraction so we can replace nd4j?
Mahout Samsara has a distributed linear algebra DSL: https://mahout.apache.org/users/environment/out-of-core-reference.html
杨博 (Yang Bo)
@Atry
The layer is called shapeless.Poly
SemanticBeeng
@SemanticBeeng
you mean using shapeless.Poly for this purpose is in plans?
杨博 (Yang Bo)
@Atry
I mean we already use Poly
So different underlying types are simply different implicit Cases for Poly
Different backends are simply different combinations of plugins.
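A minimal sketch of the idea, using a plain typeclass in place of shapeless.Poly's implicit Cases (the names here are hypothetical, not DeepLearning.scala's actual API):

```scala
// Hypothetical sketch: each "backend" contributes an implicit case for an
// operation, analogous to an implicit shapeless.Poly Case per element type.
trait Plus[A] {
  def apply(x: A, y: A): A
}

object Plus {
  // Element-wise "backends": Float and Double
  implicit val floatPlus: Plus[Float] = (x: Float, y: Float) => x + y
  implicit val doublePlus: Plus[Double] = (x: Double, y: Double) => x + y
  // An array-wise backend would add an implicit Plus[INDArray] the same way.
}

def plus[A](x: A, y: A)(implicit p: Plus[A]): A = p(x, y)
```

Switching backends then means bringing a different implicit case into scope, without touching the call sites.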
杨博 (Yang Bo)
@Atry
I know this approach is not source-level compatible when switching between backends. But source-level compatibility could be as easy as creating a plugin that contains type aliases from the new backend's types to the old INDArray, INDArrayWeight or INDArrayLayer.
As you know, every plugin is optional and replaceable
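The compatibility-alias idea could look like this sketch (the type names are hypothetical stand-ins for the real backend types):

```scala
// Hypothetical new backend exposing its own tensor type
trait NewBackend {
  type Tensor = Vector[Double]
  def zeros(n: Int): Tensor = Vector.fill(n)(0.0)
}

// Hypothetical compatibility plugin: the old name becomes an alias for the
// new backend's type, so code written against INDArray keeps compiling
trait INDArrayCompat { this: NewBackend =>
  type INDArray = Tensor
}

object Backend extends NewBackend with INDArrayCompat
```

Code that declares values as `Backend.INDArray` then compiles unchanged against the new backend.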
SemanticBeeng
@SemanticBeeng
will look closely to see how shapeless.Poly is used to create this mapping/DSL; cannot (yet) see beyond the "source compatibility" situation. thanks
杨博 (Yang Bo)
@Atry
DeepLearning.scala has already shipped with two element-wise "backends", which are Float and Double, and one array-wise "backend", which is nd4j.
Common operations for the three types are defined in Operators.
SemanticBeeng
@SemanticBeeng
oki, this should help
杨博 (Yang Bo)
@Atry
"backends" implies you can switch implementations with little code change. However, the ability to switch implementations is available by default for all plugins in DeepLearning.scala. That is to say, every plugin is a "backend".
It is actually a solution to the expression problem
SemanticBeeng
@SemanticBeeng
Certainly can appreciate the problem.
I assume the feature.scala project plays a part in this, or is it all done with shapeless?
杨博 (Yang Bo)
@Atry
Mix-ins are a Scala built-in feature. Factory in feature.scala is just a utility to minimize boilerplate when you are creating a mix-in type.
Factory is optional. You can replace Factory[Xxx with Yyy].newInstance with the native syntax new Xxx with Yyy { /* some boilerplate code to make the Scala compiler happy */ }
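For illustration, with two hypothetical plugin traits Xxx and Yyy (made-up names, standing in for real plugins), the native equivalent is just:

```scala
// Two hypothetical plugin traits
trait Xxx {
  def greet: String = "hello"
}
trait Yyy {
  def shout(s: String): String = s.toUpperCase
}

// What Factory[Xxx with Yyy].newInstance would build, written by hand:
val instance = new Xxx with Yyy {}
```

Factory only saves you from writing the anonymous-class boilerplate when the mixed-in traits have abstract members to wire up.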
SemanticBeeng
@SemanticBeeng
what is the best way to see how these computation expressions are covered in DeepLearning.scala: https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/computation-expressions ?
杨博 (Yang Bo)
@Atry
SemanticBeeng
@SemanticBeeng
yes, am aware of the project
and then DL4S uses only the ones there... oki
are all of those from F# covered? or just what DL4S needed?
杨博 (Yang Bo)
@Atry
Actually there are two kinds of computation expressions.
  • Applicative expressions, which are just normal function calls, are used to build static neural networks.
  • Monadic expressions are used to build dynamic neural networks, and can be written either with the help of ThoughtWorks Each or with Scala's for comprehensions.
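As a rough illustration of the two styles (using plain Double and Option as stand-ins, not DeepLearning.scala's actual layer types):

```scala
// Applicative style: a static network is just ordinary function calls.
def dense(x: Double, w: Double, b: Double): Double = x * w + b

// Monadic style: a for comprehension lets later structure depend on
// earlier results, which is what makes a network "dynamic".
def dynamic(input: Option[Double]): Option[Double] =
  for {
    h <- input
    out <- if (h > 0) Some(dense(h, 2.0, 0.0)) else Some(0.0) // data-dependent branch
  } yield out
```

In the applicative case the whole graph is known before any value flows through it; in the monadic case the branch taken depends on the intermediate value.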
SemanticBeeng
@SemanticBeeng
this helps, thank you!
杨博 (Yang Bo)
@Atry
F#'s computation expressions seem weird to me, because they contain too many functions that are unnecessary in a standard implementation of a monad.
I don't understand the purpose of that design.
Even so, F#'s computation expressions are not as expressive as ThoughtWorks Each. For example, F# does not support inline each calls inside a complex expression, which ThoughtWorks Each allows.
SemanticBeeng
@SemanticBeeng
oki, thanks
SemanticBeeng
@SemanticBeeng
w.r.t. the references to scalaz.Tags.Parallel and scala.concurrent.ExecutionContext: given the rationale in https://github.com/ThoughtWorksInc/future.scala#design would it not make sense to use future.scala here?
not critical - just mining for your thinking
杨博 (Yang Bo)
@Atry
Being threading-mode free gives freedom to its users, including the freedom to bind a threading mode onto a Future.
Ghost
@ghost~586e0a8ed73408ce4f415e6f
Hi Atry, could you please give me a short example of how I can convert a function whose type is (... some parameters ..., T => Unit) => Unit into a function that accepts the parameters and returns Future[T]? I failed to find any code on the internet doing a similar thing.
杨博 (Yang Bo)
@Atry
import com.thoughtworks.future._ // future.scala's Future
import scala.util.Success

def convert[A, B, T](f: (A, B, T => Unit) => Unit)(a: A, b: B): Future[T] = {
  Future.async { handler =>
    f(a, b, { t =>
      handler(Success(t))
    })
  }
}
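If you are on the standard library rather than future.scala, the same conversion can be sketched with a Promise (fetchSum here is a made-up callback-style function for the demo):

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

// Same callback-to-Future conversion using only the standard library
def convertStd[A, B, T](f: (A, B, T => Unit) => Unit)(a: A, b: B): Future[T] = {
  val promise = Promise[T]()
  f(a, b, (t: T) => promise.success(t))
  promise.future
}

// Hypothetical callback-style function for the demo
def fetchSum(a: Int, b: Int, callback: Int => Unit): Unit = callback(a + b)

val result = Await.result(convertStd(fetchSum _)(1, 2), 1.second)
```

The Promise is completed from inside the callback, and its `.future` view is returned immediately.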