Mandar Chandorkar
@mandar2812
So you can extend the GPRegression class and override the energy() method to compute the Expected Improvement acquisition function:
def energy(h: Map[String, Double],
           options: Map[String, String] = Map()): Double
Here h is a particular assignment of the hyper-parameters, and options can be used to store other configuration items (not needed in many cases).
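
A minimal sketch of what that override might look like, assuming a standard-normal distribution from Breeze for the EI formula; bestObserved and posteriorAt are illustrative placeholders (the best objective value seen so far and the GP posterior at a point), not DynaML API:

import breeze.stats.distributions.Gaussian

// Illustrative sketch, not DynaML API: an "energy" equal to the negative
// Expected Improvement, so that minimising the energy maximises EI.
trait ExpectedImprovementEnergy {

  def bestObserved: Double // best objective value observed so far (placeholder)

  // GP posterior mean and std-dev at the point encoded by h (placeholder)
  def posteriorAt(h: Map[String, Double]): (Double, Double)

  def energy(h: Map[String, Double],
             options: Map[String, String] = Map()): Double = {
    val (mu, sigma) = posteriorAt(h)
    val std = Gaussian(0.0, 1.0)
    val z = (mu - bestObserved) / sigma
    // EI = (mu - best) * Phi(z) + sigma * phi(z), negated for minimisation
    -((mu - bestObserved) * std.cdf(z) + sigma * std.pdf(z))
  }
}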
Mandar Chandorkar
@mandar2812
If you want your algorithm to use gradient information, then your model must extend GloballyOptimizableWithGrad and also implement the gradEnergy() method.
def gradEnergy(h: Map[String, Double])
  : Map[String, Double] = Map()
Hope this is not too much detail too soon! Let me know if you have any more questions.
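
If analytic gradients are awkward to derive, one option (purely a sketch built on the energy() signature above, not necessarily how DynaML models compute it) is a central finite-difference approximation:

// Sketch: approximate the gradient of energy() by central finite differences.
// Assumes the energy(h, options) method above is in scope; eps is a tuning choice.
def gradEnergy(h: Map[String, Double]): Map[String, Double] = {
  val eps = 1e-6
  h.map { case (key, value) =>
    val ePlus  = energy(h.updated(key, value + eps))
    val eMinus = energy(h.updated(key, value - eps))
    key -> (ePlus - eMinus) / (2 * eps)
  }
}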
Mandar Chandorkar
@mandar2812
Bayesian optimization is an idea I was interested in some time ago, but I never got the time to implement it; it would be great if you are interested in working on it.
Michel Lemay
@michellemay
Thanks for the details. I was thinking along the same lines.
In Bayesian optimization, we have a multi-step process: 1) sample the target function, 2) train a GPRegression, 3) optimize the acquisition function, 4) evaluate the target function at the best point from 3), 5) loop until convergence or a maximum number of steps.
One thing I want to clarify: GlobalOptimizer is strongly dependent on the energy landscape, and it is a grid-search approach.
How would you break that assumption?
Mandar Chandorkar
@mandar2812

Okay, I guess I was a bit confused. But your description helped to clear my thinking!

So you want to optimise f(x) by sampling it at some points Seq[I], then training an AbstractGPRegressionModel[I] on those points and using that model to propose new areas in the x: I domain.

Michel Lemay
@michellemay
Another thing: the relation between energy(h) and the training-data DenseVectors is not quite clear. Is it just a direct mapping of the keys of the map onto indices of the dense vector?
Mandar Chandorkar
@mandar2812
Now I am thinking that you might not need to extend the GlobalOptimizer trait.
I think your implementation can look as follows: BayesOptimization[I](f: I => Double).
Internally it can train an AbstractGPRegressionModel[I] on the samples xs: Seq[I]
and use the predictiveDistribution() method of the GP to figure out which new samples are promising.
So here f: I => Double is the potentially unknown black-box function to optimize.
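
A rough skeleton of that shape, following the five steps listed earlier (illustrative only: trainGP stands in for fitting an AbstractGPRegressionModel[I], the (mean, stddev) pair stands in for what its predictive distribution exposes, and propose is a placeholder acquisition step):

// Illustrative skeleton of the proposed class; member names are placeholders,
// not DynaML API.
class BayesOptimization[I](f: I => Double) {

  // Placeholder: fit a GP (e.g. an AbstractGPRegressionModel[I]) to the samples
  // and return the posterior (mean, stddev) at any query point.
  private def trainGP(xs: Seq[(I, Double)]): I => (Double, Double) = ???

  // Placeholder: use the posterior to propose the most promising next sample,
  // e.g. by maximising Expected Improvement.
  private def propose(posterior: I => (Double, Double), best: Double): I = ???

  def optimize(init: Seq[I], steps: Int): I = {
    var samples = init.map(x => (x, f(x)))                 // 1) sample the target
    for (_ <- 1 to steps) {
      val posterior = trainGP(samples)                     // 2) train the GP
      val next = propose(posterior, samples.map(_._2).max) // 3) optimise acquisition
      samples :+= ((next, f(next)))                        // 4) evaluate; 5) loop
    }
    samples.maxBy(_._2)._1                                 // best point found
  }
}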
Michel Lemay
@michellemay
This is pretty much what I've prototyped so far..
Mandar Chandorkar
@mandar2812
Sounds good to me!
So the type I can be anything really, but DenseVector[Double] would be one of the common use cases.
If you think your prototype is ready, you can submit a pull request adding it to the DynaML dynaml.optimization package.
I would love to review it and merge it if all goes well!
Michel Lemay
@michellemay
it's not ready at all.. It's just some piece of code in the shell ;)
Mandar Chandorkar
@mandar2812
Okay, no problem :)
Michel Lemay
@michellemay
There are lots of details to iron out. For starters, the optimization of the acquisition function could be done with L-BFGS-B, and I don't see that in your library; maybe I can resort to regular gradient search.
Also, I need to define clear bounds and constraints on the input parameter ranges.
In my test implementation, I was using predictionWithErrorBars to get the mean and stddev at test points. It might be somewhat inefficient to call that for a single data point at each step of (3).
Mandar Chandorkar
@mandar2812

    In my test implementation, I was using predictionWithErrorBars to get the mean and stddev at test points. It might be somewhat inefficient to call that for a single data point at each step of (3).

Yes, I agree. predictionWithErrorBars accepts a Seq of inputs, so it might make more sense to compute them on a batch of points every few iterations or something?
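
A hedged sketch of that batching idea; predictWithBars below stands in for the model's predictionWithErrorBars, and the (point, mean, lower bar, upper bar) row shape is an assumption, not the checked DynaML signature:

import breeze.linalg.DenseVector

// Sketch: score a whole batch of candidates in one call rather than one
// point per iteration of step (3), then pick the most promising one,
// here via the highest upper confidence bound.
def nextCandidate(
  candidates: Seq[DenseVector[Double]],
  predictWithBars: Seq[DenseVector[Double]] => Seq[(DenseVector[Double], Double, Double, Double)]
): DenseVector[Double] =
  predictWithBars(candidates).maxBy { case (_, _, _, upper) => upper }._1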

Mandar Chandorkar
@mandar2812
Hi @/all, two major announcements:
  • DynaML v1.5.3 onward is now available on the Maven Central repository
  • DynaML v2.0-SNAPSHOT will be compatible with Scala 2.11.x and Scala 2.12.x, with cross-building enabled in PR #81
Anastasios
@hellene
hi fellas, are we dead in here?
Anastasios
@hellene
I seemed to be doing OK with the DynaML "demo", until the middle showed some spurious repeats and whatnot; not sure what is going on. Pasting:

val trainData =
tf_dataset.training_data
trainData: ops.io.data.Dataset[(Tensor, Tensor), (tensorDataHelper.OutputType, tensorDataHelper.OutputType), (tensorDataHelper.DataTypes, tensorDataHelper.DataTypes), (tensorDataHelper.Shapes, tensorDataHelper.Shapes)] = ZipDataset(
TensorSlicesDataset(UINT8[50000, 32, 32, 3], "TensorSlicesDataset"),
TensorSlicesDataset(UINT8[50000], "TensorSlicesDataset"),
"Zip"
)

DynaML> .repeat()
SyntaxError: found ".repeat()", expected Import | Prelude ~ BlockDef | Expr | End at index 4
.repeat()
^

DynaML> .shuffle(10000)
SyntaxError: found ".shuffle(10000)", expected Import | Prelude ~ BlockDef | Expr | End at index 4
.shuffle(10000)
^

DynaML> .batch(128)
SyntaxError: found ".batch(128)", expected Import | Prelude ~ BlockDef | Expr | End at index 4
.batch(128)
^

DynaML> .prefetch(10)

Mandar Chandorkar
@mandar2812
Hi @hellene, you can enclose the entire code in {} before pasting it into the terminal.
Anastasios
@hellene
You mean to improve formatting here, or to avoid errors during execution?
Mandar Chandorkar
@mandar2812
Yes, it's a problem with formatting when pasting multiline commands into the shell.
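
For example, wrapping the pipeline from the paste above in braces makes the shell parse it as a single expression instead of one fragment per line:

val trainData = {
  tf_dataset.training_data
    .repeat()
    .shuffle(10000)
    .batch(128)
    .prefetch(10)
}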
Anastasios
@hellene
lol I would never have guessed, can't check right now but hope it is the answer!
Mandar Chandorkar
@mandar2812
Also, something seems wrong in the first few lines:
val trainData =
tf_dataset.training_data
trainData: ops.io.data.Dataset[(Tensor, Tensor), (tensorDataHelper.OutputType, tensorDataHelper.OutputType), (tensorDataHelper.DataTypes, tensorDataHelper.DataTypes), (tensorDataHelper.Shapes, tensorDataHelper.Shapes)] = ZipDataset(
TensorSlicesDataset(UINT8[50000, 32, 32, 3], "TensorSlicesDataset"),
TensorSlicesDataset(UINT8[50000], "TensorSlicesDataset"),
"Zip"
)
I don't think what you pasted here is the complete code dump; it seems to be a combination of your code and the terminal's response.
Anastasios
@hellene
I will redo the whole thing as one-liners, first of all.
Like I said, half of the copy-and-pastes went fine.