matthieu bulté
Oh, sorry I forgot to mention that I'm just trying to implement the IS part of the ticket. Implementing SMC straight away would be a little too much for my first contribution ;)
Evan Krall
looks like the Independent distribution is what I'm looking for, though it seems to require that everything is identically distributed too
Matthew Feickert
Hi. In Edward there is ed.models.Empirical. Does anyone know what the corresponding thing in Edward2 or TensorFlow Probability is? Maybe as_random_variable?
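[Editor's note: as of recent TFP releases (and subject to API changes), the closest analogue to ed.models.Empirical is tfp.distributions.Empirical, a distribution backed by a fixed set of samples. Its behavior can be sketched without any library; the class below is purely illustrative, not the TFP implementation.]

```python
import random

class Empirical:
    """Toy empirical distribution: stores a fixed set of draws and
    resamples them uniformly with replacement."""
    def __init__(self, samples):
        self.samples = list(samples)

    def sample(self, rng=random):
        # Every stored draw is equally likely.
        return rng.choice(self.samples)

    def mean(self):
        # The mean of an empirical distribution is the sample average.
        return sum(self.samples) / len(self.samples)

d = Empirical([1.0, 2.0, 3.0, 4.0])
print(d.mean())  # → 2.5
```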
Matthew Feickert
@cruyffturn I already read that, but it wasn't entirely clear to me why we would want to use softplus. I can reread it, though; thanks for taking the time to respond.

I have a problem where I have an architecture vaguely similar to an auto-encoder, but I want the encoder to be probabilistic.

I think I need this because the loss function I'm optimizing has a few 'hot spots' (good initial conditions) and many very 'cold spots' (zero gradient).

So, if I treat the output as something deterministic, just by the luck of initialization very few (maybe zero) of the encoder outputs may be hot spots. But if they are treated as a distribution with enough variance to cover the hot spots, I should be able to sample good encodings to find a good trajectory to optimize (as well as push the encoder distribution further towards hot spots during training).

Does this sound suited for probabilistic programming, and does anyone have any advice based on this description?

so in essence I would like to treat the output of my encoder as parameters of a distribution, and calculate my loss based on (potentially many) samples from that distribution
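[Editor's note: treating the encoder output as distribution parameters and averaging the loss over samples is essentially the reparameterization trick used in VAEs, which probabilistic programming frameworks handle well. A dependency-free sketch of the idea; the encoder here is a stand-in function, not a real network:]

```python
import math
import random

def encode(x):
    # Stand-in for the encoder network: maps an input to the
    # parameters (mean, std) of a Normal over the latent code.
    mu = 0.5 * x
    sigma = math.exp(-abs(x))  # kept positive via exp, like softplus
    return mu, sigma

def sample_code(mu, sigma, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    # so gradients can flow through mu and sigma during training.
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps

rng = random.Random(0)
mu, sigma = encode(2.0)
codes = [sample_code(mu, sigma, rng) for _ in range(5)]
# Averaging the loss over several sampled codes covers more of the
# loss landscape (more chances to hit a "hot spot") than a single
# deterministic encoding would.
```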
When I import edward: ImportError: cannot import name 'set_shapes_for_outputs'
Ahmet Can Acar

Greetings, I'm trying to use the Multinomial distribution in Edward to predict multiclass labels (3 classes) with a neural network. I'm confused about:

1) How should I shape the label dataset? I chose to convert the labels from [[0],[1],[0],[2]] to one-hot form [[1,0,0],[0,1,0],[1,0,0],[0,0,1]].
2) Under what conditions can the total_count argument of Multinomial differ from 1 when I use probs instead of logits?

I'm not too familiar with Multinomial; I used Bernoulli easily, but I can't handle the multiclass network :/

After training, I get predictions on the test data, but when I try to evaluate MSE I get this error:
ValueError: Dimensions must be equal, but are 145 and 3 for 'sub_2' (op: 'Sub') with input shapes: [145], [145,3].
Here is my code; if you see anything I'm missing, I'd be very happy to get advice.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow as tf
import edward as ed
from edward.models import Normal, Multinomial

num_labels = 3
(n_samples, n_iter) = (30, 2500)
symbol = 'A'
dataFrequency = '10'

# X and Y are assumed loaded earlier (omitted from this snippet)
X, Y = np.array(X), np.array(Y)
Y =(np.arange(num_labels) == Y[:,None]).astype(np.float32)

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

# X_train.shape  (578, 120)
# y_train.shape  (578, 3)
# X_test.shape  (145, 120)
# y_test.shape  (145, 3)

def neural_network(x):
    h = tf.tanh(tf.matmul(x, W_0) + b_0)
    h = tf.tanh(tf.matmul(h, W_1) + b_1)
    h = tf.tanh(tf.matmul(h, W_2) + b_2)
    h = tf.matmul(h, W_3) + b_3
    nn_result = tf.nn.softmax(h)
    return nn_result

D = X_train.shape[1]
N = X_train.shape[0]
N2 = X_test.shape[0]

W_0 = Normal(loc=tf.zeros([D, 10]), scale=tf.ones([D, 10]))
W_1 = Normal(loc=tf.zeros([10, 10]), scale=tf.ones([10, 10]))
W_2 = Normal(loc=tf.zeros([10, 5]), scale=tf.ones([10, 5]))
W_3 = Normal(loc=tf.zeros([5, 3]), scale=tf.ones([5, 3]))
b_0 = Normal(loc=tf.zeros(10), scale=tf.ones(10))
b_1 = Normal(loc=tf.zeros(10), scale=tf.ones(10))
b_2 = Normal(loc=tf.zeros(5), scale=tf.ones(5))
b_3 = Normal(loc=tf.zeros(3), scale=tf.ones(3))

x_ph = tf.placeholder(tf.float32, [None, D])
y = Multinomial(probs=neural_network(x_ph), total_count=1.)

qw_0 = Normal(loc=tf.get_variable("qw_0/loc", [D, 10]),
              scale=tf.nn.softplus(tf.get_variable("qw_0/scale", [D, 10])))
qb_0 = Normal(loc=tf.get_variable("qb_0/loc", [10]),
              scale=tf.nn.softplus(tf.get_variable("qb_0/scale", [10])))
qw_1 = Normal(loc=tf.get_variable("qw_1/loc", [10, 10]),
              scale=tf.nn.softplus(tf.get_variable("qw_1/scale", [10, 10])))
qb_1 = Normal(loc=tf.get_variable("qb_1/loc", [10]),
              scale=tf.nn.softplus(tf.get_variable("qb_1/scale", [10])))
qw_2 = Normal(loc=tf.get_variable("qw_2/loc", [10, 5]),
              scale=tf.nn.softplus(tf.get_variable("qw_2/scale", [10, 5])))
qb_2 = Normal(loc=tf.get_variable("qb_2/loc", [5]),
              scale=tf.nn.softplus(tf.get_variable("qb_2/scale", [5])))
qw_3 = Normal(loc=tf.get_variable("qw_3/loc", [5, 3]),
              scale=tf.nn.softplus(tf.get_variable("qw_3/scale", [5, 3])))
qb_3 = Normal(loc=tf.get_variable("qb_3/loc", [3]),
              scale=tf.nn.softplus(tf.get_variable("qb_3/scale", [3])))

inference = ed.KLqp({
    W_0: qw_0, b_0: qb_0,
    W_1: qw_1, b_1: qb_1,
    W_2: qw_2, b_2: qb_2,
    W_3: qw_3, b_3: qb_3,
}, data={x_ph: X_train, y: y_train})
inference.run(n_samples=n_samples, n_iter=n_iter)

y_post = ed.copy(y, {
    W_0: qw_0, b_0: qb_0,
    W_1: qw_1, b_1: qb_1,
    W_2: qw_2, b_2: qb_2,
    W_3: qw_3, b_3: qb_3,
})

sess = ed.get_session()
predictions = sess.run(y_post, feed_dict={x_ph: X_test})

print('mse: ', ed.evaluate('mse', data={x_ph: X_test, y: y_test}))
Gaurav Shrivastava
Hi there, I'm not sure this is the right place to ask, but can anybody direct me to an example script of a variational Gaussian process or hierarchical variational models?
vincenzoserio Yo ;)
Dmitriy Voronin
Hey guys, checking in from Richmond VA! Anybody have a second to lend an ear?
Dmitriy Voronin
@tscholak Hey, thanks for that great talk last year on Edward. Do you have a moment?
Torsten Scholak
sure, what’s up?
Dmitriy Voronin
I am trying to move my Bayesian Network from a PyMC3 implementation to Edward since Theano isn't able to handle the complexity of the network.
However, I can't seem to find a way to replicate a Theano switch statement. The goal: use one distribution or another depending on the particular value sampled at run time.
Torsten Scholak
a mixture model with two different base distributions?
Dmitriy Voronin
Latent Truth Model
If the latent Bernoulli variable is true, use the false-positive-rate Beta variable; if the latent variable is false, sample from the respective sensitivity Beta.
Thank you for your time, Mr. Scholak! Spasibo!
Torsten Scholak
sounds like a mixture to me
like Figure 1?
Dmitriy Voronin
Exact paper I'm implementing. Haha!
Torsten Scholak
I don’t know right now if this can be done in Edward
Dmitriy Voronin
I have it working in PyMC3, without the collapsed gibbs sampling but using NUTS and BinGibbsMetropolis
Torsten Scholak
I have to go now, but I can give this some more thought later
Dmitriy Voronin
It was a pleasure getting a moment of your time, Mr. Scholak. Best wishes.
Dmitriy Voronin
Follow up: I don't think I'm able to use tf.where, since it evaluates both branches and doesn't wait for inference. My next step is to try tf.cond and return the respective dependent distributions from functions. I am able to use theano.tensor.switch in the PyMC3 implementation built on top of Theano.
Dmitriy Voronin
I was able to find this in the Edward source code code-link:
Use TensorFlow ops such as tf.cond to execute subgraphs conditioned on a draw from a random variable.
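[Editor's note: one way to sidestep tf.cond entirely for the latent-truth setup above is to marginalize the latent Bernoulli analytically: the marginal over the observed rate is then a two-component Beta mixture, p(x) = π·Beta(x; a1, b1) + (1 − π)·Beta(x; a2, b2), which mixture distributions in Edward/TFP can represent directly. A dependency-free sketch of that marginal density:]

```python
import math

def beta_pdf(x, a, b):
    # Beta density, computed via log-gamma for numerical stability.
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def mixture_pdf(x, pi, a1, b1, a2, b2):
    # Marginalizing the latent Bernoulli z:
    #   p(x) = P(z=1) * Beta(x; a1, b1) + P(z=0) * Beta(x; a2, b2)
    return pi * beta_pdf(x, a1, b1) + (1 - pi) * beta_pdf(x, a2, b2)

# Sanity check: with both components uniform (Beta(1, 1)),
# the mixture is uniform too.
print(mixture_pdf(0.3, 0.6, 1, 1, 1, 1))  # → 1.0
```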
Evan Krall
this might be more of a TFP question, but hopefully someone here can help: why does this crash? what am I misunderstanding? https://gist.github.com/EvanKrall/fdc4e23e3688c809890d908e70737c9c
Evan Krall
(I had better luck with the AffineScalar bijector)
the issue seems to be that the Affine bijector has a forward_min_event_ndims of 1 even though the distribution it's operating on is a scalar distribution
Kenneth Lu
Hi, I had a question about your paper. You mentioned in the abstract that it's much faster than Stan and PyMC3; is that true for the other PPLs that you cited as well?
If so, do you have a reference to specific runtime results comparing Edward to the other PPLs?
Hari M Koduvely

Hi, I am facing an issue while running variational inference using KLqp on an RNN model. Here are the details of my code:
def rnn_cell(hprev, x):
    return tf.tanh(ed.dot(hprev, Wh) + ed.dot(x, Wx) + bh)

Wx = Normal(loc=tf.zeros([n_i, n_h]), scale=tf.ones([n_i,n_h]))
Wh = Normal(loc=tf.zeros([n_h, n_h]), scale=tf.ones([n_h, n_h]))
Wy = Normal(loc=tf.zeros([n_h, n_o]), scale=tf.ones([n_h, n_o]))
bh = Normal(loc=tf.zeros(n_h), scale=tf.ones(n_h))
by = Normal(loc=tf.zeros(n_o), scale=tf.ones(n_o))

x = tf.placeholder(tf.float32, [None, n_i], name='x')
h = tf.scan(rnn_cell, x, initializer=tf.zeros(n_h))
y = Normal(loc=tf.matmul(h, Wy) + by, scale = 1.0*tf.ones(N))

qWx = Normal(loc=tf.get_variable("qWx/loc", [n_i, n_h]),scale=tf.nn.softplus(tf.get_variable("qWx/scale", [n_i, n_h])))
qWh = Normal(loc=tf.get_variable("qWh/loc", [n_h, n_h]),scale=tf.nn.softplus(tf.get_variable("qWh/scale", [n_h, n_h])))
qWy = Normal(loc=tf.get_variable("qWy/loc", [n_h, n_o]),scale=tf.nn.softplus(tf.get_variable("qWy/scale", [n_h, n_o])))
qbh = Normal(loc=tf.get_variable("qbh/loc", [n_h]),scale=tf.nn.softplus(tf.get_variable("qbh/scale", [n_h])))
qby = Normal(loc=tf.get_variable("qby/loc", [n_o]),scale=tf.nn.softplus(tf.get_variable("qby/scale", [n_o])))

inference = ed.KLqp({Wx:qWx, Wh:qWh, Wy:qWy, bh:qbh, by:qby}, data={x: x_train, y: y_train})
inference.run(n_iter=1000, n_samples=5)

I am getting the following error:
/Applications/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in AddWhileContext(self, op, between_op_list, between_ops)
1257 if grad_state is None:
1258 # This is a new while loop so create a grad state for it.
-> 1259 outer_forward_ctxt = forward_ctxt.outer_context
1260 if outer_forward_ctxt:
1261 outer_forward_ctxt = outer_forward_ctxt.GetWhileContext()

AttributeError: 'NoneType' object has no attribute 'outer_context'

I am using TensorFlow 1.7.0, since I faced other issues using Edward with newer versions. I have seen the same AttributeError reported on Stack Overflow for other TensorFlow use cases, but I haven't found a practical solution reported anywhere. Thanks in advance for the help.

Ahmad Salim Al-Sibahi
@dustinvtran Are there any examples of an MDN for classification on MNIST? I've only seen one example, http://edwardlib.org/tutorials/mixture-density-network, which is about regression on a toy problem from the original paper.
I am trying to run a simple example of Edward (https://github.com/blei-lab/edward/blob/master/examples/bayesian_nn.py), but with TensorFlow 2. In TF2 there is no Edward, only Edward2 (https://www.tensorflow.org/probability/api_docs/python/tfp/edward2). Apparently KLqp is not defined there, so the call at line 89 of the example (https://github.com/blei-lab/edward/blob/master/examples/bayesian_nn.py#L89) produces the error AttributeError: module 'tensorflow_probability.python.edward2' has no attribute 'KLqp'.
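[Editor's note: ed.KLqp maximized the evidence lower bound (ELBO), E_q[log p(x, z) − log q(z)]; in the TF2/TFP world the rough replacement is tfp.vi.fit_surrogate_posterior, though API details vary across versions. The Monte Carlo ELBO estimate being optimized can be sketched without any framework for a toy Normal-Normal model; all modeling choices below are illustrative:]

```python
import math
import random

def log_normal_pdf(x, mu, sigma):
    # log N(x; mu, sigma^2)
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def elbo_estimate(x_obs, q_mu, q_sigma, n_samples, rng):
    # Monte Carlo ELBO for the toy model
    #   z ~ N(0, 1),  x | z ~ N(z, 1),
    # with surrogate posterior q(z) = N(q_mu, q_sigma):
    #   ELBO = E_q[ log p(x, z) - log q(z) ]
    total = 0.0
    for _ in range(n_samples):
        z = rng.gauss(q_mu, q_sigma)  # z ~ q
        log_p = log_normal_pdf(z, 0.0, 1.0) + log_normal_pdf(x_obs, z, 1.0)
        log_q = log_normal_pdf(z, q_mu, q_sigma)
        total += log_p - log_q
    return total / n_samples

# KLqp-style inference would adjust q_mu, q_sigma to maximize this.
rng = random.Random(0)
print(elbo_estimate(x_obs=1.0, q_mu=0.5, q_sigma=0.8, n_samples=200, rng=rng))
```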
Kev. Noel
is TensorFlow 2.0.0 not supported?
Kev. Noel
State-of-the-art model zoo, cross-platform: TensorFlow, PyTorch, Gluon, ... :
Can someone help me resolve the error "module not found: edward.stats"? I'm following this link: http://cbonnett.github.io/MDN_EDWARD_KERAS_TF.html
Anshupriya Srivastava

Hi, I am trying to use batch training - http://edwardlib.org/tutorials/batch-training.
However, the package edward is failing to install due to this error - /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/edward/models/dirichlet_process.py in <module>
7 from edward.models.random_variable import RandomVariable
----> 8 from tensorflow.contrib.distributions import Distribution
10 try:

ModuleNotFoundError: No module named 'tensorflow.contrib'

Any suggestions?