

s/probabilities/probabilities or some other variables that are positive and sum to one/

my problem was more related to BayesPy. Assume I choose a Dirichlet observation node Y, where Y has 2 states, Normal and High. As above, any evidence will belong to the array [0, 5], and the fuzzy rules may lead to an output saying 80% activation of High and 20% of Normal. My doubt is how to define the variable 'activity', the one to be observed for inference in Y.observe(activity), using this fuzzy output.

```
import numpy as np
from bayespy import nodes as nds
from bayespy.inference import VB


def fitted_gaussian(N, n_krnl, D, covariance='full'):
    # Input:
    #   N      = number of data vectors
    #   D      = dimensionality
    #   n_krnl = number of kernels
    # Prior over mixture weights
    P = nds.Dirichlet(1e-5 * np.ones(n_krnl), name='P')
    # N n_krnl-dimensional cluster assignments (for the data)
    I = nds.Categorical(P, plates=(N,), name='I')
    if covariance == 'full':
        # n_krnl D-dimensional component means and covariances
        mu = nds.Gaussian(np.zeros(D), 1e-5 * np.identity(D),
                          plates=(n_krnl,), name='mu')
        Lambda = nds.Wishart(D, 1e-5 * np.identity(D),
                             plates=(n_krnl,), name='Lambda')
        Y = nds.Mixture(I, nds.Gaussian, mu, Lambda, plates=(N,), name='Y')
    else:
        print('diagonal')
        # diagonal covariance: per-dimension inverse variances
        mu = nds.GaussianARD(np.zeros(D), 1e-5 * np.identity(D),
                             shape=(D,), plates=(n_krnl,), name='mu')
        Lambda = nds.Gamma(1e-3, 1e-3, plates=(n_krnl, D), name='Lambda')
        Y = nds.Mixture(I, nds.GaussianARD, mu, Lambda, plates=(N,), name='Y')
    I.initialize_from_random()
    return VB(Y, mu, Lambda, I, P)
```

The problem is that, when I apply it with the diagonal covariance, I get: ValueError: The plates (2,) of the parents are not broadcastable to the given plates (10,).

How can I fix this?
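For context (an editorial sketch, not from the thread): BayesPy plates broadcast according to NumPy's broadcasting rules, so a trailing plate axis of size 2 cannot be broadcast against size 10, while size 1 or an exact match would be fine. The mismatch can be reproduced with plain NumPy:

```python
import numpy as np

# A trailing axis of size 2 cannot broadcast against size 10:
try:
    np.broadcast_shapes((2,), (10,))
except ValueError as e:
    print("not broadcastable:", e)

# A size-1 axis or an exact match broadcasts fine:
print(np.broadcast_shapes((1,), (10,)))   # -> (10,)
print(np.broadcast_shapes((10,), (10,)))  # -> (10,)
```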

Don_Chili_twitter: can't really test now, but I guess the issue is that GaussianARD makes scalar Gaussian variables by default. In principle, you should be able to fix it by giving ndim=1 to Mixture, but I think that doesn't work at the moment. Alternatively, you can create a diagonal matrix from Lambda with Lambda.as_diagonal_wishart() and use nds.Gaussian instead of nds.GaussianARD in Mixture, that is:

Y = nds.Mixture(I, nds.Gaussian, mu, Lambda.as_diagonal_wishart(), plates=(N,), name='Y')

couldn't test it, though

Hi @jluttine! Thank you for your very fast response: n_krnl should be 10. I already tried your recommendation, but there is something weird about the plates. Although I tried to make the parent plates the same dimension as the given plates, it never matches: ValueError: The plates (2,) of the parents are not broadcastable to the given plates (10,).

Hi Jaakko, can I ask you for a quick review of a text?

section 2

VB learning can converge to bad local minima. A better initialization or a better learning algorithm may help, for instance deterministic annealing. Also, sometimes changing the model might help: for instance, if one factors q(mu)q(Lambda), re-formulating so that one gets a joint q(mu, Lambda) might improve the posterior accuracy and the learning.
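The annealing idea can be illustrated outside BayesPy. Below is a minimal sketch (not BayesPy's VB; soft k-means on toy data is used as a stand-in objective): at high temperature the responsibilities are nearly uniform, and cooling gradually lets the solution track the global cluster structure instead of committing early to a bad local optimum from the poor initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 1-D data: two well-separated clusters around -5 and +5
x = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)])

# deterministic annealing for soft k-means:
# start hot (nearly uniform responsibilities), cool gradually
centers = np.array([-0.1, 0.1])            # deliberately poor initialization
for T in [50.0, 20.0, 8.0, 3.0, 1.0, 0.3]:
    for _ in range(20):
        d2 = (x[:, None] - centers[None, :]) ** 2   # squared distances
        r = np.exp(-d2 / T)
        r /= r.sum(axis=1, keepdims=True)           # responsibilities
        centers = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(np.sort(centers))  # ends near the true means, roughly [-5, 5]
```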

for the case of constructing the Student-t for the mixture

not sure what you mean, but the mixture in a mixture model is a different mixture than in the Student-t construction. The Student-t construction is based on an infinite mixture: it's basically a particular Gaussian-gamma joint distribution with the gamma distribution marginalized out. But in the VB approach, one doesn't marginalize the gamma analytically, in order to keep the equations in the exponential family form.
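The scale-mixture construction can be checked numerically with plain NumPy (an editorial sketch, not BayesPy code): drawing a Gamma-distributed precision and then a Gaussian given that precision yields samples that are marginally Student-t, e.g. their variance approaches nu/(nu-2) for nu > 2.

```python
import numpy as np

rng = np.random.default_rng(42)
nu = 5.0        # degrees of freedom
n = 200_000

# Student-t as an infinite scale mixture of Gaussians:
# tau ~ Gamma(nu/2, rate=nu/2), then x | tau ~ Normal(0, 1/tau)
tau = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
x = rng.normal(0.0, 1.0 / np.sqrt(tau))

# marginally x ~ Student-t(nu), whose variance is nu/(nu-2)
print(x.var(), nu / (nu - 2))
```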

I should write an example

some day

I'm in a hurry for the release, so I'll postpone it

an example would be great :)

thanks a lot, Jaakko!