Israel Herraiz
@iht
ok
Mandar Chandorkar
@mandar2812
Hey @/all, I added a script scripts/cifar.sc in the latest commit; you can run it with import $file.scripts.cifar
It tests a convolutional NN architecture on the CIFAR-10 data set
Israel Herraiz
@iht
it does not compile for me
cifar.sc:55: reference to Tensor is ambiguous;
it is imported twice in the same scope by
import org.platanios.tensorflow.api._
and import _root_.breeze.linalg.{copy, inv, squaredDistanceLowPrio, where, Counter, MatrixConstructors, logNormalize, Counter2Like, diagLowPrio2, SliceVectorOps, csvread, minMax, diag, DenseVector, MatrixNotSymmetricException, sum, scaleAdd, BroadcastedColumns, CanPadRight, any, logAndNormalize, View, cond, eigSym, princomp, LinearAlgebraException, qrp, LU, VectorizedReduceUFunc, reverse, tile, roll, SliceMatrixOps, ImmutableNumericOps, MatrixLike, Broadcasted, LowPrioritySliceMatrix, String2File, argmax, MatrixSingularException, logdet, StorageVector, reshape, SliceVector, max, norm, LowPriorityCounter2, VectorBuilder, MatrixEmptyException, strictlyUpperTriangular, pinvLowPrio, argtopk, VectorConstructors, BroadcastedLike, clip, HashVector, qr, csvwrite, SparseVector, LowPriorityMatrix, det, Axis, all, DenseMatrix, rank, pinv, cholesky, QuasiTensor, isClose, Options, accumulate, softmax, CounterLike, min, scale, Counter2, Transpose, RangeToRangeExtender, TensorLike, CanPadLeft, ptp, padLeft, product, NotConvergedException, linspace, zipValues, upperTriangular, ZippedValues, Matrix, svd, dim, functions, flipud, diagLowPrio, hsplit, rot90, eig, argsort, mapValues, TransposeLowPrio, randomDouble, usingNatives, VectorOps, operators, PCA, sumLowPrio, Tensor, mapValuesLowPrio, BitVector, CSCMatrix, Broadcaster, randn, $times, mmwrite, lowerTriangular, ranks, axpy, trace, MatrixNotSquareException, convert, split, shuffle, LSMR, LapackException, InjectNumericOps, rand, normalize, vsplit, VectorLike, mpow, kron, NumericOps, strictlyLowerTriangular, logDiff, diff, squaredDistance, fliplr, cross, diffLowPrio, unique, cov, argmin, mapActiveValues, RandomGeneratorUFunc, randomInt, padRight, BroadcastedRows, SliceMatrix, support}
  def accuracy(images: Tensor, labels: Tensor): Float = {
                       ^
it looks like Tensor is imported both by breeze and tensorflow
Mandar Chandorkar
@mandar2812
yes
I updated the default imports in the shell
transcendent-ai-labs/DynaML@8926071
breeze also has a Tensor
Going forward, since the packages included with DynaML are huge, it might be wise to have a small set of default imports and leave the rest to presets which the user can load with simple commands
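As an aside, this kind of clash can also be resolved on the user side with Scala's rename-on-import syntax. A minimal self-contained sketch (standard-library Maps stand in here for the two Tensor types, since the pattern is identical):

```scala
// Sketch: resolving a name clash with Scala's rename-on-import syntax.
// The same pattern applies to the Breeze/TensorFlow clash above, e.g.
//   import _root_.breeze.linalg.{Tensor => BreezeTensor, _}
import scala.collection.immutable.{Map => ImmMap}
import scala.collection.mutable.{Map => MutMap}

object RenameDemo {
  // Both "Map"s are in scope, but under distinct, unambiguous names.
  val frozen: ImmMap[String, Int] = ImmMap("a" -> 1)
  val editable: MutMap[String, Int] = MutMap("a" -> 2)

  def total: Int = frozen("a") + editable("a")
}
```

The rename can be combined with a wildcard, so the rest of the package stays importable under its original names.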
Israel Herraiz
@iht
it now compiles with the latest commits
I get these values:
Train accuracy = 0.4066 Test accuracy = 0.3805
Mandar Chandorkar
@mandar2812
Good, so it's working
Mandar Chandorkar
@mandar2812
Hi @/all just wanted to shine a light on the new user guide additions
This section outlines all the new additions in the TensorFlow integration.
Israel Herraiz
@iht
Great, thanks for the update! Time to play with TF and DynaML :D
Mandar Chandorkar
@mandar2812
Hi @/all, check out the new release v1.5.3!
We have a new Data Set API
and the Inception v2 cell is available as a computational layer
Michel Lemay
@michellemay
Hi there, I just stumbled upon your nice library. Nice work!
I'm currently looking to implement BayesOptimization for hyper-parameters, using GPRegression and an Expected Improvement acquisition function.
So, I'm seeking some guidance in order to make the most of DynaML. There is a lot of useful stuff in there and I don't want to waste time re-implementing routines that are already there.
Mandar Chandorkar
@mandar2812
Hi @michellemay, glad you liked DynaML! Yes, you are very welcome to implement BayesOptimization in DynaML; pull requests and contributions are always welcome!
So Bayes Opt is not implemented anywhere in DynaML
there is a global optimization API
which has GridSearch and CoupledSimulatedAnnealing
The top level traits are GloballyOptimizable and GloballyOptimizableWithGrad
These are implemented by the models themselves (GP and others)
Mandar Chandorkar
@mandar2812
While GlobalOptimizer is the trait which GridSearch and CoupledSimulatedAnnealing implement
So your proposed BayesOptimization class can extend the GlobalOptimizer trait
So you can extend the GPRegression class and override the energy() method to compute the Expected Improvement acquisition
def energy(h: Map[String, Double],
             options: Map[String, String] = Map()): Double
Here h is a particular value of the hyper-parameters; options can be used to store other configuration items (not needed in many cases)
Mandar Chandorkar
@mandar2812
If you want your algorithm to use gradient information, then your model must extend GloballyOptimizableWithGrad and also implement the gradEnergy() method.
def gradEnergy(h: Map[String, Double])
  : Map[String, Double] = Map()
Hope this is not too much detail too soon! Let me know if you have any more questions.
Mandar Chandorkar
@mandar2812
Bayes Optimization was an idea I was interested in some time ago, but I never got the time to implement it; it would be great if you are interested in working on it.
Michel Lemay
@michellemay
Thanks for the details... I was thinking about the same.
In bayesOpt, we have a multi-step process: 1) sample the target function, 2) train GPRegression, 3) optimize the acquisition function, 4) evaluate the target function at the best point from 3), 5) loop until convergence or max steps.
One thing I want to clarify: GlobalOptimizer is strongly dependent on the energy landscape, and this is a grid-search approach.
How would you break that assumption?
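The five-step loop described in that message can be sketched generically. In the sketch below, `bayesOpt` and `propose` are my own placeholder names (not DynaML API): `propose` stands in for steps 2 and 3, i.e. fitting the GP surrogate on the observations so far and maximising the acquisition function over it.

```scala
// Hypothetical skeleton of the Bayesian-optimisation loop described above.
// `propose` is a placeholder for "train GPRegression + maximise the
// acquisition function"; wiring in DynaML's GP models is left open.
def bayesOpt[I](
    f: I => Double,                  // 1) black-box target to maximise
    init: Seq[I],                    //    initial sample locations
    propose: Seq[(I, Double)] => I,  // 2)+3) surrogate fit + acquisition argmax
    maxSteps: Int
): (I, Double) = {
  var observed = init.map(x => (x, f(x)))
  for (_ <- 1 to maxSteps) {
    val next = propose(observed)               // 3) most promising point
    observed = observed :+ (next -> f(next))   // 4) evaluate target there
  }                                            // 5) stop after maxSteps
  observed.maxBy(_._2)                         // best point seen
}
```

Note the loop never inspects an energy landscape directly; all model-dependent work lives inside `propose`, which is one way to break the grid-search assumption.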
Mandar Chandorkar
@mandar2812

Okay, I guess I was a bit confused. But your description helped to clear my thinking!

So you want to optimise f(x) by sampling it at some points Seq[I], then training an AbstractGPRegressionModel[I] on these points and using that model to propose new areas in the x: I domain

Michel Lemay
@michellemay
Another thing: the relation between energy(h) and the training-data DenseVectors is not quite clear. Is it only a direct mapping of the keys of the map onto indices of the dense vector?
Mandar Chandorkar
@mandar2812
Now I am thinking that you might not need to extend the GlobalOptimizer trait
I think your implementation can look as follows BayesOptimization[I](f: I => Double)
internally it can train an AbstractGPRegressionModel[I] on the samples xs: Seq[I]
and use the predictiveDistribution() method of GP to figure out which new samples are promising
so here f: I => Double is the unknown black-box function to optimize
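For the "figure out which new samples are promising" step, the Expected Improvement acquisition mentioned earlier has a standard closed form given the GP's predictive mean and standard deviation at a candidate point. A self-contained sketch (nothing here is DynaML API; the erf approximation is Abramowitz & Stegun formula 7.1.26):

```scala
// Expected Improvement in closed form, for a Gaussian posterior with mean
// `mu` and standard deviation `sigma` at a candidate point, where `best` is
// the best target value observed so far (maximisation convention):
//   EI = (mu - best) * Phi(z) + sigma * phi(z),  with z = (mu - best) / sigma
object ExpectedImprovement {
  private def erf(x: Double): Double = {
    // Abramowitz & Stegun formula 7.1.26 (max abs. error ~1.5e-7).
    val t = 1.0 / (1.0 + 0.3275911 * math.abs(x))
    val y = 1.0 - (((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
      - 0.284496736) * t + 0.254829592) * t) * math.exp(-x * x)
    if (x >= 0) y else -y
  }

  private def phi(z: Double): Double =          // standard normal pdf
    math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.Pi)

  private def Phi(z: Double): Double =          // standard normal cdf
    0.5 * (1.0 + erf(z / math.sqrt(2.0)))

  def ei(mu: Double, sigma: Double, best: Double): Double =
    if (sigma <= 0.0) math.max(mu - best, 0.0)  // no posterior uncertainty left
    else {
      val z = (mu - best) / sigma
      (mu - best) * Phi(z) + sigma * phi(z)
    }
}
```

The values fed to ei would come from the predictiveDistribution() call of the trained GP; maximising ei over candidate points yields the next sample location.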