Dmitriy Voronin
Latent Truth Model
If the latent Bernoulli variable is true, sample from the false-positive-rate Beta variable; if it is false, sample from the corresponding sensitivity Beta.
Thank you for your time, Mr. Scholak! Thanks!
Torsten Scholak
sounds like a mixture to me
like Figure 1?
Dmitriy Voronin
Exact paper I'm implementing. Haha!
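The mixture described above can be sketched as a generative process in plain NumPy (the Beta hyperparameters and the prior probability of the latent being true are assumptions for illustration, not values from the chat):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Beta hyperparameters (assumptions, not from the chat):
a_fpr, b_fpr = 1.0, 9.0    # false-positive-rate component, mean ~0.1
a_sens, b_sens = 9.0, 1.0  # sensitivity component, mean ~0.9

def sample_rate(p_true=0.5):
    """One draw from the latent-truth mixture: a Bernoulli latent
    selects which Beta component the rate is sampled from."""
    z = rng.random() < p_true
    # Following the description above: latent true -> FPR Beta,
    # latent false -> sensitivity Beta.
    rate = rng.beta(a_fpr, b_fpr) if z else rng.beta(a_sens, b_sens)
    return z, rate

samples = [sample_rate() for _ in range(2000)]
```

This is only the forward (sampling) direction; inference over the latent is what the PPL discussion below is about.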
Torsten Scholak
I don’t know right now if this can be done in Edward
Dmitriy Voronin
I have it working in PyMC3, without the collapsed Gibbs sampling, using NUTS and BinaryGibbsMetropolis.
Torsten Scholak
I have to go now, but I can give this some more thought later
Dmitriy Voronin
It was a pleasure getting a moment of your time, Mr. Scholak. Best wishes.
Dmitriy Voronin
Follow-up: I don't think I can use tf.where, since it evaluates both branches immediately and doesn't wait for inference. My next step is to try tf.cond and return the respective dependent distributions from functions. For the PyMC3 implementation, built on top of Theano, I am able to use theano.tensor.switch.
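The eager-vs-lazy distinction can be illustrated without TensorFlow: np.where behaves like tf.where in that both branch expressions are computed before selection, whereas tf.cond runs only the chosen subgraph (a plain Python conditional plays that role in this sketch):

```python
import numpy as np

calls = {"a": 0, "b": 0}

def branch_a(x):
    calls["a"] += 1
    return x + 1.0

def branch_b(x):
    calls["b"] += 1
    return x - 1.0

x = np.array([1.0, -1.0])

# np.where, like tf.where, evaluates BOTH branch expressions up
# front and then selects elementwise:
selected = np.where(x > 0, branch_a(x), branch_b(x))
both_ran = (calls["a"] == 1 and calls["b"] == 1)

# A lazy conditional (the role tf.cond plays in a TF graph) runs
# only the branch that is actually chosen:
calls = {"a": 0, "b": 0}
lazy = branch_a(x) if float(x.sum()) > 0 else branch_b(x)
only_one_ran = (calls["a"] + calls["b"] == 1)
```

This is why an elementwise select can still crash or misbehave when one branch is invalid for some inputs, even though that branch's values are never selected.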
Dmitriy Voronin
I was able to find this in the Edward source code:
"Use TensorFlow ops such as tf.cond to execute subgraphs conditioned on a draw from a random variable."
Evan Krall
this might be more of a TFP question, but hopefully someone here can help: why does this crash? what am I misunderstanding? https://gist.github.com/EvanKrall/fdc4e23e3688c809890d908e70737c9c
Evan Krall
(I had better luck with the AffineScalar bijector)
the issue seems to be that the Affine bijector has a forward_min_event_ndims of 1 even though the distribution it's operating on is a scalar distribution
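The rank bookkeeping behind that mismatch can be sketched in a few lines (a simplified model of the check, an assumption about the semantics rather than the real TFP implementation):

```python
# Simplified sketch of the event-rank compatibility check
# (assumption: this mirrors only the rank bookkeeping, not TFP's
# actual implementation).

def compatible(base_event_ndims: int, forward_min_event_ndims: int) -> bool:
    """A bijector can act on a base distribution only if the base
    event rank is at least the bijector's declared minimum."""
    return base_event_ndims >= forward_min_event_ndims

scalar_event = 0   # e.g. a scalar Normal: event_shape = []
vector_event = 1   # e.g. a multivariate normal: event_shape = [d]

affine_min = 1         # Affine declares forward_min_event_ndims = 1
affine_scalar_min = 0  # AffineScalar declares forward_min_event_ndims = 0
```

Under this reading, Affine on a scalar base distribution fails the check, while AffineScalar (minimum event rank 0) passes, which matches the behavior reported above.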
Kenneth Lu
Hi, I had a question about your paper. You mentioned in the abstract that it's much faster than Stan and PyMC3; is that true for the other PPLs that you cited as well?
If so, do you have a reference to specific runtime results comparing Edward to the other PPLs?
Hari M Koduvely

Hi, I am facing an issue while running variational inference using KLqp on an RNN model. Here are the details of my code
def rnn_cell(hprev, x):
    return tf.tanh(ed.dot(hprev, Wh) + ed.dot(x, Wx) + bh)

Wx = Normal(loc=tf.zeros([n_i, n_h]), scale=tf.ones([n_i,n_h]))
Wh = Normal(loc=tf.zeros([n_h, n_h]), scale=tf.ones([n_h, n_h]))
Wy = Normal(loc=tf.zeros([n_h, n_o]), scale=tf.ones([n_h, n_o]))
bh = Normal(loc=tf.zeros(n_h), scale=tf.ones(n_h))
by = Normal(loc=tf.zeros(n_o), scale=tf.ones(n_o))

x = tf.placeholder(tf.float32, [None, n_i], name='x')
h = tf.scan(rnn_cell, x, initializer=tf.zeros(n_h))
y = Normal(loc=tf.matmul(h, Wy) + by, scale = 1.0*tf.ones(N))

qWx = Normal(loc=tf.get_variable("qWx/loc", [n_i, n_h]),scale=tf.nn.softplus(tf.get_variable("qWx/scale", [n_i, n_h])))
qWh = Normal(loc=tf.get_variable("qWh/loc", [n_h, n_h]),scale=tf.nn.softplus(tf.get_variable("qWh/scale", [n_h, n_h])))
qWy = Normal(loc=tf.get_variable("qWy/loc", [n_h, n_o]),scale=tf.nn.softplus(tf.get_variable("qWy/scale", [n_h, n_o])))
qbh = Normal(loc=tf.get_variable("qbh/loc", [n_h]),scale=tf.nn.softplus(tf.get_variable("qbh/scale", [n_h])))
qby = Normal(loc=tf.get_variable("qby/loc", [n_o]),scale=tf.nn.softplus(tf.get_variable("qby/scale", [n_o])))

inference = ed.KLqp({Wx:qWx, Wh:qWh, Wy:qWy, bh:qbh, by:qby}, data={x: x_train, y: y_train})
inference.run(n_iter=1000, n_samples=5)

I get the following error:
/Applications/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in AddWhileContext(self, op, between_op_list, between_ops)
1257 if grad_state is None:
1258 # This is a new while loop so create a grad state for it.
-> 1259 outer_forward_ctxt = forward_ctxt.outer_context
1260 if outer_forward_ctxt:
1261 outer_forward_ctxt = outer_forward_ctxt.GetWhileContext()

AttributeError: 'NoneType' object has no attribute 'outer_context'

I am using TensorFlow 1.7.0, since I ran into other issues using Edward on later versions. I have seen the same AttributeError reported on Stack Overflow for other TensorFlow use cases, but have not found a practical solution anywhere. Thanks in advance for the help.
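For reference, the recurrence that tf.scan unrolls in the model above can be written out in NumPy (hypothetical small sizes; the weights are point values here, whereas in the Edward model they are random variables):

```python
import numpy as np

# Hypothetical dimensions: n_i inputs, n_h hidden units, T time steps.
n_i, n_h, T = 3, 4, 5
rng = np.random.default_rng(0)
Wx = rng.normal(size=(n_i, n_h))
Wh = rng.normal(size=(n_h, n_h))
bh = np.zeros(n_h)

def rnn_cell(hprev, x):
    # Same cell as in the model above, with plain matrix products.
    return np.tanh(hprev @ Wh + x @ Wx + bh)

xs = rng.normal(size=(T, n_i))
h = np.zeros(n_h)          # plays the role of tf.scan's initializer
hs = []
for x in xs:               # tf.scan threads h through each step
    h = rnn_cell(h, x)
    hs.append(h)
hs = np.stack(hs)          # shape (T, n_h): one hidden state per step
```

The while-loop that tf.scan builds under the hood is what the gradient machinery in the traceback is trying (and failing) to differentiate through.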

Ahmad Salim Al-Sibahi
@dustinvtran Are there any examples of MDN for classification on MNIST? I've only seen one example, http://edwardlib.org/tutorials/mixture-density-network, which covers regression on a toy problem from the original paper.
I am trying to run a simple Edward example (https://github.com/blei-lab/edward/blob/master/examples/bayesian_nn.py), but with TensorFlow 2. In TF2 there is no Edward, only Edward2 (https://www.tensorflow.org/probability/api_docs/python/tfp/edward2). Apparently KLqp is not defined in TF2, so the call at line 89 of the example (https://github.com/blei-lab/edward/blob/master/examples/bayesian_nn.py#L89) produces the error: AttributeError: module 'tensorflow_probability.python.edward2' has no attribute 'KLqp'.
Kev. Noel
Is TensorFlow 2.0.0 not supported?
Kev. Noel
State-of-the-art model zoo, cross-platform (TensorFlow, PyTorch, Gluon, ...):