import com.stripe.rainier.compute._
// Create a function of three variables
// Example from my APTS notes (p.42 of main notes):
// https://www.staff.ncl.ac.uk/d.j.wilkinson/teaching/apts-sc/
// which is in fact from Nocedal and Wright (2006)...
val x0 = Real.variable()
val x1 = Real.variable()
val x2 = Real.variable()
val y = ( x0*x1*(x2.sin) + (x0*x1).exp )/x2
println(y)
// evaluate using an evaluator (slow)
val eval = new Evaluator(Map(x0 -> 1.0, x1 -> 2.0, x2 -> math.Pi/2.0))
val ey = eval.toDouble(y)
println(ey)
// compile the function for fast evaluation
val cy = Compiler.default.compile(List(x0, x1, x2), y)
val eyc = cy(Array(1.0, 2.0, math.Pi/2.0)) // fast
println(eyc)
// gradients
println(y.gradient.map(eval.toDouble(_))) // WRONG?! BUG?! *******
// compiled gradients
val cg = Compiler.withGradient("y", y, List(x0, x1, x2))
// have gradient functions, but not actually compiled?!
val cg0 = cg.head // function
val cgt = cg.tail // gradients
println(eval.toDouble(cg0._2)) // slow evaluation?
println(cgt.map(e => eval.toDouble(e._2))) // slow evaluation (but correct)?
// now compile the gradient functions?
val cg0c = Compiler.default.compile(List(x0, x1, x2), cg0._2)
println(cg0c(Array(1.0, 2.0, math.Pi/2.0))) // fast
val cgtc = cgt.map(e => Compiler.default.compile(List(x0, x1, x2), e._2))
println(cgtc.map(_(Array(1.0, 2.0, math.Pi/2.0)))) // fast (and correct)
When I use `.gradient` to get the gradient vector of my function of three variables, the first two gradients seem to be switched. But if I compile the gradients using `Compiler.withGradient`, they seem to be correct. The example is easy enough to do by hand, but it's actually a well-known example that I already have notes on, if anyone is interested.

I also tried replacing `Real.variable()` with `new Variable`, and the behaviour is exactly the same.

Separately: is this really the intended workflow? I use `Compiler.withGradient` to get gradient functions, each of which I then compile again with `Compiler.default.compile` to get compiled versions that I can then evaluate efficiently. Somehow this feels like it isn't quite how reverse-mode AD is supposed to work. Surely I shouldn't be evaluating the components of the gradient separately? Shouldn't I get the full gradient vector in one go? Am I missing a function/method somewhere?
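For reference, the partials of this example are easy to write out by hand. Here is a quick plain-Scala sketch (no Rainier involved; the function names are mine) of the analytic gradient at the test point:

```scala
// Analytic gradient of y = (x0*x1*sin(x2) + exp(x0*x1)) / x2,
// evaluated at (1.0, 2.0, Pi/2) to compare against the AD output.
import math.{sin, cos, exp, Pi}

def dy0(x0: Double, x1: Double, x2: Double): Double = // ∂y/∂x0
  (x1 * sin(x2) + x1 * exp(x0 * x1)) / x2

def dy1(x0: Double, x1: Double, x2: Double): Double = // ∂y/∂x1
  (x0 * sin(x2) + x0 * exp(x0 * x1)) / x2

def dy2(x0: Double, x1: Double, x2: Double): Double = // ∂y/∂x2, quotient rule
  (x0 * x1 * cos(x2) * x2 - (x0 * x1 * sin(x2) + exp(x0 * x1))) / (x2 * x2)

println((dy0(1, 2, Pi / 2), dy1(1, 2, Pi / 2), dy2(1, 2, Pi / 2)))
// roughly (10.68, 5.34, -3.81)
```

Since ∂y/∂x0 and ∂y/∂x1 differ at this point (about 10.68 vs 5.34), a swap of the first two components is easy to spot in either evaluation path.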
You can do `Compiler.default.compile(List(x0,x1,x2), Compiler.withGradient("y", y, List(x0, x1, x2)))`, which gives you a single `CompiledFunction`, say `cf`. Then allocate `val globalBuf = new Array[Double](cf.numGlobals)` and call `cf.output(input, globalBuf, i)` in order for i=0..3, always passing the same `input` array you provide to `cf.output`.
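If I'm reading that right, the suggestion amounts to something like the following (a sketch only; it assumes the same `x0`, `x1`, `x2`, and `y` as in the snippet above, and that `CompiledFunction` exposes `numGlobals` and `output` as described):

```scala
// Compile y together with its gradient into a single CompiledFunction,
// so one compiled artefact yields the value and all three partials.
val cf = Compiler.default.compile(
  List(x0, x1, x2),
  Compiler.withGradient("y", y, List(x0, x1, x2))
)
val input = Array(1.0, 2.0, math.Pi / 2.0)
val globalBuf = new Array[Double](cf.numGlobals)
// Output 0 is y itself; outputs 1..3 are the partials. Call in order so
// that shared intermediate values in globalBuf are populated first.
val results = (0 to 3).map(i => cf.output(input, globalBuf, i))
println(results)
```

That way the reverse pass is shared: the intermediates live in `globalBuf` and each `output` call just reads off one component, rather than recompiling and re-evaluating each partial separately.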
`rainier-cats` was removed in stripe/rainier#441, along with `rainier-scalacheck`. The PR didn't give many specifics about the removal. I'm asking because we use them in our project, and I'm happy to help maintain the cats module if its maintenance burden was the main reason for the removal. Thanks!
`latent` (née `param`) in places that are inappropriate, like during posterior prediction, where we've already done the sampling and so the prior will be ignored. But that can still cause runtime errors. And the trade-off is more than worth it IMO.
One thing `RandomVariable` provided was the ability to generalize the composition of models and their variables, which is useful when writing generalized libraries that are somewhat model-agnostic. Is there a replacement for supporting such generalized composition?

To better illustrate my thinking: based on my own use cases, it's tempting to write a replacement for `RandomVariable` as something like
case class RandomVariable[A](v: A, model: Model) {
  def mapWith[B, T](that: RandomVariable[B])(f: (A, B) => T): RandomVariable[T] =
    RandomVariable(f(v, that.v), model.merge(that.model))
}
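For concreteness, usage of the proposed `mapWith` might look like this (hypothetical sketch; the fragments are left abstract, since the idea only relies on `Model.merge` as above):

```scala
// Hypothetical: combine two independent model fragments, keeping hold of
// both latent values while merging their Models.
val ra: RandomVariable[Real] = ??? // a latent value plus the Model defining it
val rb: RandomVariable[Real] = ???
val combined: RandomVariable[Real] =
  ra.mapWith(rb)((a, b) => a + b)  // value is a + b, Models are merged
```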
Would such a thing make sense with the new design?
`Model` is, deliberately, a little bit lower level. But I'm hoping that you'll find it easier and more flexible to define those abstractions now. The problem we were having before had to do with the (mandatory) stack of `RandomVariable[Generator[T]]`: it's quite awkward to force people into that kind of monad transformer stack. I'd rather see what kinds of abstractions (monadic or otherwise) people come up with for a bit; maybe down the line we can officially upstream one.
Understood re: `RandomVariable`. I am going to play with some constructs in my projects and see if anything turns out to be worth upstreaming.