liuchenbaidu
@liuchenbaidu
When I import Edward, I get: ImportError: cannot import name 'set_shapes_for_outputs'
Ahmet Can Acar
@acanacar

Greetings, I'm trying to use the Multinomial distribution in Edward to predict multiclass labels (3 classes) with a neural network. I'm confused about:

1) How should I shape the label dataset? I chose to convert labels like [[0],[1],[0],[2]] into one-hot form [[1,0,0],[0,1,0],[1,0,0],[0,0,1]].
2) Under what conditions can my total_count argument to Multinomial be something other than 1 when I use probs instead of logits?

I'm not too familiar with Multinomial, actually; I used Bernoulli easily, but I can't handle the multiclass network. :/

After training, I got predictions from the test data. But when I try to evaluate MSE I get the error:
ValueError: Dimensions must be equal, but are 145 and 3 for 'sub_2' (op: 'Sub') with input shapes: [145], [145,3].
Here is my code; if you see any missing parts, I'd be very happy to get advice.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow as tf
import edward as ed
from edward.models import Normal, Multinomial

num_labels = 3
(n_samples, n_iter) = (30, 2500)
symbol = 'A'
dataFrequency = '10'


# (Data loading is omitted in the original message; X holds features, Y integer labels.)
X, Y = np.array(X), np.array(Y)
Y = (np.arange(num_labels) == Y[:, None]).astype(np.float32)  # one-hot encode labels

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

# X_train.shape  (578, 120)
# y_train.shape  (578, 3)
# X_test.shape  (145, 120)
# y_test.shape  (145, 3)

def neural_network(x):
    # Three tanh hidden layers, then a softmax over the 3 classes.
    h = tf.tanh(tf.matmul(x, W_0) + b_0)
    h = tf.tanh(tf.matmul(h, W_1) + b_1)
    h = tf.tanh(tf.matmul(h, W_2) + b_2)
    h = tf.matmul(h, W_3) + b_3
    return tf.nn.softmax(h)

D = X_train.shape[1]
N = X_train.shape[0]
N2 = X_test.shape[0]

# Standard normal priors on all weights and biases.
W_0 = Normal(loc=tf.zeros([D, 10]), scale=tf.ones([D, 10]))
W_1 = Normal(loc=tf.zeros([10, 10]), scale=tf.ones([10, 10]))
W_2 = Normal(loc=tf.zeros([10, 5]), scale=tf.ones([10, 5]))
W_3 = Normal(loc=tf.zeros([5, 3]), scale=tf.ones([5, 3]))
b_0 = Normal(loc=tf.zeros(10), scale=tf.ones(10))
b_1 = Normal(loc=tf.zeros(10), scale=tf.ones(10))
b_2 = Normal(loc=tf.zeros(5), scale=tf.ones(5))
b_3 = Normal(loc=tf.zeros(3), scale=tf.ones(3))

x_ph = tf.placeholder(tf.float32, [None, D])
# Likelihood: with total_count=1., each draw is a one-hot vector over the 3 classes.
y = Multinomial(probs=neural_network(x_ph), total_count=1.)

# Fully factorized normal variational approximations.
qw_0 = Normal(loc=tf.get_variable("qw_0/loc", [D, 10]),
              scale=tf.nn.softplus(tf.get_variable("qw_0/scale", [D, 10])))
qb_0 = Normal(loc=tf.get_variable("qb_0/loc", [10]),
              scale=tf.nn.softplus(tf.get_variable("qb_0/scale", [10])))
qw_1 = Normal(loc=tf.get_variable("qw_1/loc", [10, 10]),
              scale=tf.nn.softplus(tf.get_variable("qw_1/scale", [10, 10])))
qb_1 = Normal(loc=tf.get_variable("qb_1/loc", [10]),
              scale=tf.nn.softplus(tf.get_variable("qb_1/scale", [10])))
qw_2 = Normal(loc=tf.get_variable("qw_2/loc", [10, 5]),
              scale=tf.nn.softplus(tf.get_variable("qw_2/scale", [10, 5])))
qb_2 = Normal(loc=tf.get_variable("qb_2/loc", [5]),
              scale=tf.nn.softplus(tf.get_variable("qb_2/scale", [5])))
qw_3 = Normal(loc=tf.get_variable("qw_3/loc", [5, 3]),
              scale=tf.nn.softplus(tf.get_variable("qw_3/scale", [5, 3])))
qb_3 = Normal(loc=tf.get_variable("qb_3/loc", [3]),
              scale=tf.nn.softplus(tf.get_variable("qb_3/scale", [3])))

# Bind each prior to its variational factor and run KLqp.
inference = ed.KLqp({
    W_0: qw_0, b_0: qb_0,
    W_1: qw_1, b_1: qb_1,
    W_2: qw_2, b_2: qb_2,
    W_3: qw_3, b_3: qb_3,
}, data={x_ph: X_train, y: y_train})
inference.run(n_samples=n_samples, n_iter=n_iter,
              logdir='log/{}/{}/{}/{}'.format(symbol,
                                              dataFrequency,
                                              n_samples,
                                              n_iter)
              )

# Posterior predictive: swap priors for their variational approximations.
y_post = ed.copy(y, {
    W_0: qw_0, b_0: qb_0,
    W_1: qw_1, b_1: qb_1,
    W_2: qw_2, b_2: qb_2,
    W_3: qw_3, b_3: qb_3,
})

sess = ed.get_session()
predictions = sess.run(y_post, feed_dict={x_ph: X_test})

# This evaluation raises the ValueError quoted above: the targets and the
# posterior-predictive draws end up with mismatched shapes ([145] vs [145, 3]).
print('mse: ', ed.evaluate('mse', data={x_ph: X_test, y: y_test}))
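One way around that mismatch (a minimal sketch of my own, not from this thread; it assumes the predictions array computed above and plain NumPy) is to collapse the one-hot vectors to class labels before comparing:

# Hypothetical post-processing: reduce the [145, 3] probability / one-hot
# arrays to [145] class-label vectors, then compare like with like.
pred_labels = np.argmax(predictions, axis=1)
true_labels = np.argmax(y_test, axis=1)
print('accuracy:', np.mean(pred_labels == true_labels))
print('label mse:', np.mean((pred_labels - true_labels) ** 2))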
Gaurav Shrivastava
@the-darklord
Hi there, I'm not sure this is the right place to ask, but can anybody direct me to an example (script) of variational Gaussian processes or hierarchical variational models?
matrixbot
@matrixbot
vincenzoserio Yo ;)
Dmitriy Voronin
@VoroninDA
Hey guys, checking in from Richmond VA! Anybody have a second to lend an ear?
Dmitriy Voronin
@VoroninDA
@tscholak Hey, thanks for that great talk last year on Edward. Do you have a moment?
Torsten Scholak
@tscholak
sure, what’s up?
Dmitriy Voronin
@VoroninDA
I am trying to move my Bayesian network from a PyMC3 implementation to Edward, since Theano isn't able to handle the complexity of the network.
However, I can't seem to find a way to replicate a Theano switch statement. The goal: use one distribution over another depending on the particular value sampled at run time.
Torsten Scholak
@tscholak
a mixture model with two different base distributions?
Dmitriy Voronin
@VoroninDA
Latent Truth Model
If the latent Bernoulli variable is true, use the false-positive-rate Beta variable; if the latent variable is false, sample from the respective sensitivity Beta.
Thank you for your time, Mr. Scholak! Spasibo
Torsten Scholak
@tscholak
sounds like a mixture to me
like Figure 1?
Dmitriy Voronin
@VoroninDA
Exact paper I'm implementing. Haha!
Torsten Scholak
@tscholak
I don’t know right now if this can be done in Edward
Dmitriy Voronin
@VoroninDA
I have it working in PyMC3, without the collapsed Gibbs sampling, but using NUTS and BinaryGibbsMetropolis
Torsten Scholak
@tscholak
I have to go now, but I can give this some more thought later
Dmitriy Voronin
@VoroninDA
It was a pleasure getting a moment of your time, Mr. Scholak. Best wishes.
Dmitriy Voronin
@VoroninDA
Follow-up: I don't think I can use tf.where, since it evaluates both branches and doesn't wait for inference. My next step is to try tf.cond and return the respective dependent distributions from functions. I am able to use theano.tensor.switch for the PyMC3 implementation built on top of Theano.
Dmitriy Voronin
@VoroninDA
I was able to find this in the Edward source code (code-link): "Use TensorFlow ops such as tf.cond to execute subgraphs conditioned on a draw from a random variable."
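A minimal sketch of that pattern in Edward/TF1 (my own illustration, not from the Edward docs; the Beta parameters are placeholders, and the true/false mapping follows the message above):

import tensorflow as tf
from edward.models import Bernoulli, Beta

z = Bernoulli(probs=0.5)   # latent truth indicator
fpr = Beta(1.0, 10.0)      # false positive rate (placeholder parameters)
sens = Beta(10.0, 1.0)     # sensitivity (placeholder parameters)

# Execute one subgraph or the other conditioned on the draw of z.
rate = tf.cond(tf.cast(z, tf.bool),
               lambda: fpr.value(),
               lambda: sens.value())
obs = Bernoulli(probs=rate)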
Evan Krall
@EvanKrall
this might be more of a TFP question, but hopefully someone here can help: why does this crash? what am I misunderstanding? https://gist.github.com/EvanKrall/fdc4e23e3688c809890d908e70737c9c
Evan Krall
@EvanKrall
(I had better luck with the AffineScalar bijector.)
The issue seems to be that the Affine bijector has a forward_min_event_ndims of 1, even though the distribution it's operating on is a scalar distribution.
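For anyone hitting the same thing, a rough sketch of the scalar-friendly route (my own example with placeholder shift/scale values, not the gist itself; AffineScalar exists in TFP of that era and was later replaced by tfb.Shift and tfb.Scale):

import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

# AffineScalar has forward_min_event_ndims == 0, so it composes cleanly with a
# scalar (event_shape == []) base distribution; Affine expects vector events.
dist = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.AffineScalar(shift=1., scale=2.))

samples = dist.sample(5)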
Kenneth Lu
@krlu
Hi, I had a question about your paper. You mentioned in the abstract that it's much faster than Stan and PyMC3; is that true for the other PPLs that you cited?
If so, do you have a reference to specific runtime results comparing Edward to the other PPLs?
Hari M Koduvely
@harik68

Hi, I am facing an issue while running variational inference using KLqp on an RNN model. Here are the details of my code:
def rnn_cell(hprev, x):
    # One RNN step: combine the previous hidden state with the current input.
    return tf.tanh(ed.dot(hprev, Wh) + ed.dot(x, Wx) + bh)

# Priors over the RNN weights (n_i, n_h, n_o are defined elsewhere).
Wx = Normal(loc=tf.zeros([n_i, n_h]), scale=tf.ones([n_i, n_h]))
Wh = Normal(loc=tf.zeros([n_h, n_h]), scale=tf.ones([n_h, n_h]))
Wy = Normal(loc=tf.zeros([n_h, n_o]), scale=tf.ones([n_h, n_o]))
bh = Normal(loc=tf.zeros(n_h), scale=tf.ones(n_h))
by = Normal(loc=tf.zeros(n_o), scale=tf.ones(n_o))

x = tf.placeholder(tf.float32, [None, n_i], name='x')
h = tf.scan(rnn_cell, x, initializer=tf.zeros(n_h))  # hidden states over time
y = Normal(loc=tf.matmul(h, Wy) + by, scale=1.0 * tf.ones(N))  # N: number of time steps, defined elsewhere

# Variational approximations.
qWx = Normal(loc=tf.get_variable("qWx/loc", [n_i, n_h]), scale=tf.nn.softplus(tf.get_variable("qWx/scale", [n_i, n_h])))
qWh = Normal(loc=tf.get_variable("qWh/loc", [n_h, n_h]),scale=tf.nn.softplus(tf.get_variable("qWh/scale", [n_h, n_h])))
qWy = Normal(loc=tf.get_variable("qWy/loc", [n_h, n_o]),scale=tf.nn.softplus(tf.get_variable("qWy/scale", [n_h, n_o])))
qbh = Normal(loc=tf.get_variable("qbh/loc", [n_h]),scale=tf.nn.softplus(tf.get_variable("qbh/scale", [n_h])))
qby = Normal(loc=tf.get_variable("qby/loc", [n_o]),scale=tf.nn.softplus(tf.get_variable("qby/scale", [n_o])))

inference = ed.KLqp({Wx:qWx, Wh:qWh, Wy:qWy, bh:qbh, by:qby}, data={x: x_train, y: y_train})
inference.run(n_iter=1000, n_samples=5)

I get the following error:
/Applications/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in AddWhileContext(self, op, between_op_list, between_ops)
   1257         if grad_state is None:
   1258           # This is a new while loop so create a grad state for it.
-> 1259           outer_forward_ctxt = forward_ctxt.outer_context
   1260           if outer_forward_ctxt:
   1261             outer_forward_ctxt = outer_forward_ctxt.GetWhileContext()

AttributeError: 'NoneType' object has no attribute 'outer_context'

I am using TensorFlow 1.7.0, since I faced other issues using Edward with higher versions. I have seen the same AttributeError reported on Stack Overflow for other TensorFlow use cases, but I have not found a practical solution anywhere. Thanks in advance for the help.
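Not a confirmed fix, but the traceback points at gradient construction through the while_loop that tf.scan builds, so one workaround sketch (my own, assuming a fixed, known sequence length T) is to unroll the recurrence with a plain Python loop:

# Hypothetical unrolled variant of the tf.scan line above; with a fixed
# sequence length T, no while_loop (and no while-loop gradient) is built.
T = 20  # placeholder sequence length
h_prev = tf.zeros(n_h)
states = []
for t in range(T):
    h_prev = rnn_cell(h_prev, x[t])
    states.append(h_prev)
h = tf.stack(states)  # shape [T, n_h], matching the tf.scan output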

Ahmad Salim Al-Sibahi
@ahmadsalim
kirk86
@kirk86
@dustinvtran Are there any examples of MDNs for classification on MNIST? I've only seen one example, http://edwardlib.org/tutorials/mixture-density-network, which is about regression on a toy problem from the original paper.
nbro
@nbro
Hi
I am trying to run a simple Edward example (https://github.com/blei-lab/edward/blob/master/examples/bayesian_nn.py), but with TensorFlow 2. In TF2 there is no Edward, only Edward2 (https://www.tensorflow.org/probability/api_docs/python/tfp/edward2). Apparently KLqp is not defined there, so the call at line 89 of the example (https://github.com/blei-lab/edward/blob/master/examples/bayesian_nn.py#L89) produces the error AttributeError: module 'tensorflow_probability.python.edward2' has no attribute 'KLqp'.
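Edward's KLqp has no direct counterpart in Edward2. As a rough sketch (my own illustration on a toy model, not a port of bayesian_nn.py), variational inference in TF2 can instead be written with tfp.vi.fit_surrogate_posterior:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# Toy target: posterior over a scalar weight w with prior N(0, 1)
# and a single observation y = 2.0 with likelihood N(w, 1).
def target_log_prob(w):
    return tfd.Normal(0., 1.).log_prob(w) + tfd.Normal(w, 1.).log_prob(2.)

# Mean-field normal surrogate with a trainable loc and positive scale.
surrogate = tfd.Normal(
    loc=tf.Variable(0., name='q_loc'),
    scale=tfp.util.TransformedVariable(1., tfp.bijectors.Softplus(), name='q_scale'))

losses = tfp.vi.fit_surrogate_posterior(
    target_log_prob, surrogate,
    optimizer=tf.optimizers.Adam(0.1), num_steps=200)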
Kev. Noel
@arita37
VariableVasas
@VariableVasasMT
is tensorflow 2.0.0 not supported?
Kev. Noel
@arita37
State-of-the-art model zoo, cross-platform (TensorFlow, PyTorch, Gluon, ...):
https://github.com/arita37/mlmodels