Maxim Kochurov
@ferrine
yes, and pass appropriate hyperparams to them
logp is computed elementwise, so hyperparameters should match one-to-one across rows (shape[0])
then for the likelihood you can do the same thing as in the notebook: radon_est = a[county_idx] + b[county_idx] * data.floor.values
concat all data and remember the group id
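The group-indexing pattern described above can be sketched in plain NumPy (the toy data and group ids below are made up for illustration):

```python
import numpy as np

# Hypothetical toy data: 6 observations belonging to 3 groups
group_idx = np.array([0, 0, 1, 1, 2, 2])  # group id per observation
x = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])

# Per-group intercepts and slopes (one row per group, so shape[0] == n_groups)
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -0.5, 1.0])

# Indexing by group id broadcasts the group-level parameters down to
# one value per observation, exactly the radon notebook pattern:
#   radon_est = a[county_idx] + b[county_idx] * data.floor.values
est = a[group_idx] + b[group_idx] * x
print(est)  # [1.  1.5 2.  1.5 3.  4. ]
```

The same integer-array indexing works on PyMC3 random variables, which is why concatenating all data and keeping a group-id vector is enough.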
Maxim Kochurov
@ferrine
I see you don't use hyperparams that depend on the group. So this case is exactly the same as the Hierarchical GLM in implementation:
 Z = pm.Categorical('Z', tt.stack([1.0 - w1, w1]), shape=n_groups)
 MU = pm.Normal('mu', H1_mu[Z], H1_precision[Z], shape=n_groups)
 ALPHA = pm.Gamma('alpha', alpha=H1_alpha[Z], beta=H1_beta[Z], shape=n_groups)
the loop can be rewritten this way
Florf
@omenrust_twitter
Thanks! Not ignoring you by the way, was reading the notebook :) it helps a lot. I think I can get it from here
Also, is there a way to specify starting values? For PyMC2 I think it was value=xxx, but that doesn't seem to work for v3
Junpeng Lao
@junpenglao
you can provide a starting value by adding testval=.5 etc., for example MU = pm.Normal('mu', H1_mu[Z], H1_precision[Z], shape=n_groups, testval=np.ones(n_groups))
Florf
@omenrust_twitter
thanks!
Nikos Koudounas
@aplamhden
Hello, I would like to know if sample_ppc() works with the new ADVI interface?
Maxim Kochurov
@ferrine
Yes, you should create a variational trace with pm.sample_approx
Nikos Koudounas
@aplamhden
I create my trace with this: trace=approx.sample(500) and then ppc = pm.sample_ppc(trace, model=basic_model, samples=500, progressbar=False) then i have an error : TypeError: object of type 'TensorVariable' has no len()
Maxim Kochurov
@ferrine
What is full traceback?
Nikos Koudounas
@aplamhden
what u mean with full traceback?
Maxim Kochurov
@ferrine
just copy-paste the full error
with the function calls
so I can see where this error comes from
Nikos Koudounas
@aplamhden

Yeah, i saw the term Traceback and i got what u mean. Here is the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

<ipython-input-51-67616625ebc1> in <module>()
      1 ann_input.set_value(X_test)
      2 ann_output.set_value(Y_test)
----> 3 ppc = pm.sample_ppc(trace, model=basic_model, samples=500, progressbar=False)
      4
      5 # Use probability of > 0.5 to assume prediction of class 1

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\sampling.py in sample_ppc(trace, samples, model, vars, size, random_seed, progressbar)
    526         for var in vars:
    527             ppc[var.name].append(var.distribution.random(point=param,
--> 528                                                          size=size))
    529
    530     return {k: np.asarray(v) for k, v in ppc.items()}

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\continuous.py in random(self, point, size, repeat)
    219     def random(self, point=None, size=None, repeat=None):
    220         mu, tau, _ = draw_values([self.mu, self.tau, self.sd],
--> 221                                  point=point)
    222         return generate_samples(stats.norm.rvs, loc=mu, scale=tau**-0.5,
    223                                 dist_shape=self.shape,

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in draw_values(params, point)
    183         if not isinstance(node, (tt.sharedvar.TensorSharedVariable,
    184                                  tt.TensorConstant)):
--> 185             givens[name] = (node, _draw_value(node, point=point))
    186     values = [None for _ in params]
    187     for i, param in enumerate(params):

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in _draw_value(param, point, givens)
    251         except:
    252             shape = param.shape
--> 253     if len(shape) == 0 and len(value) == 1:
    254         value = value[0]
    255     return value

TypeError: object of type 'TensorVariable' has no len()

Maxim Kochurov
@ferrine
Seems that it's not an ADVI failure
Something is wrong in draw_values
Can you reproduce the error with a non-ADVI trace?
Nikos Koudounas
@aplamhden
I had the same error while I was trying to run the code from the original ipynb: https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/bayesian_neural_network_opvi-advi.ipynb Yes, I had the same error with NUTS and Metropolis
Maxim Kochurov
@ferrine
Try passing include_transformed=True to approx.sample
I remember some changes there
Nikos Koudounas
@aplamhden
The same error again.
Maxim Kochurov
@ferrine
So opening an issue is the only way now.
We'll try to solve the problem before the release
If possible, could you please provide a minimal failing example?
Nikos Koudounas
@aplamhden
Sure, i am doing it atm.
Nikos Koudounas
@aplamhden
I opened an issue. Thnx for help @ferrine
Rémi Louf
@rlouf
Hi, I am new to PyMC3 and find the project amazing! I was wondering if there were ways someone like me could start contributing to the project? I would like to become more fluent in Python and learn more about probabilistic programming at the same time :smile:
Junpeng Lao
@junpenglao
hi @rlouf, welcome! I think a great starting point is to run the notebooks in https://github.com/pymc-devs/pymc3/tree/master/docs/source/notebooks and try to fix/report the errors - we always need hands to maintain the notebooks, as we are doing it mostly by hand currently. And our API has changed quite a bit in the past few months, so something might be broken that we have not noticed yet.
I think this is also a great way to learn Python and Bayesian statistics at the same time.
There are also notebooks that need some explanations and background information, for example https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/cox_model.ipynb which would be a great starting point for contribution as well.
Rémi Louf
@rlouf
Hi @junpenglao! Thank you for the pointers, it indeed looks like an interesting way to get to know PyMC3 and learn a bit more about Bayesian statistics. I'll come back when my homework is done :) I might also add a comprehensive notebook on Bayesian A/B testing (with hierarchical models and all) - unless you feel there is no need for it.
Junpeng Lao
@junpenglao
It would be a great addition! ;-)
We just set up our discourse forum, for longer discussion please move to https://discourse.pymc.io/
Thomas Wiecki
@twiecki
:+1:
samuelklee
@samuelklee
Hi all! I've been using the ADVI in PyMC3 to fit a Poisson latent Gaussian model with ARD. In testing on simulated data, I've gotten good results with the old ADVI interface (in that the number of simulated relevant components is correctly recovered), but switching over to the new ADVI interface sometimes gives me inconsistent results. Should I expect these results to be roughly equivalent if I set obj_optimizer=pm.adagrad_window(learning_rate), or are there other parameters I need to set appropriately?
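For context on the optimizer mentioned above: the idea behind a windowed Adagrad is that the accumulated squared gradient uses only a sliding window of recent gradients rather than the full history. A hedged NumPy sketch of that idea (function name, window size, and toy objective below are illustrative, not the PyMC3 implementation of pm.adagrad_window):

```python
import numpy as np
from collections import deque

def adagrad_window_step(param, grad, history, learning_rate=0.5,
                        window=10, eps=1e-8):
    # keep only the last `window` squared gradients
    history.append(grad**2)
    if len(history) > window:
        history.popleft()
    accum = np.sum(np.asarray(history), axis=0)
    # Adagrad-style update, normalized by the windowed accumulator
    return param - learning_rate * grad / (np.sqrt(accum) + eps)

# toy objective f(x) = x^2, gradient 2x
x = np.array([5.0])
hist = deque()
for _ in range(200):
    g = 2.0 * x
    x = adagrad_window_step(x, g, hist)
print(x)  # ends up oscillating near the minimum at 0
```

The window keeps the effective step size from shrinking forever, which is the usual motivation over plain Adagrad.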
Maxim Kochurov
@ferrine
Hi
Maxim Kochurov
@ferrine
Could you please post the issue to our discourse?
I would recommend checking loss convergence first. Did it converge?
samuelklee
@samuelklee
Yes, convergence looks reasonable. I'll try to put together a discourse post later today. Just curious if I should expect the underlying ADVI implementation to have changed, as I haven't looked at the relevant code in detail yet.
Junpeng Lao
@junpenglao
hey guys, we are thinking about closing the Gitter channel and moving everything to Discourse, as it is easier for discussion and search. What do you think?
Osvaldo Martin
@aloctavodia
@junpenglao I think the lack of response for the last two days is a very clear answer ;-)
Great! I see you have already done a PR
Junpeng Lao
@junpenglao
The gitter channel will remain open for light conversation ;-)
Paddy Horan
@paddyhoran
Does anyone have an example of multidimensional input in Gaussian process regression? So x shape = (num_records, p) and y shape = (num_records,), where p > 1. Thanks
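At the level of the covariance function, multidimensional input just means pairwise distances are taken between rows of a 2-D X. A minimal NumPy sketch of a squared-exponential kernel on such inputs (all names and data below are illustrative, not the PyMC3 GP API):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # pairwise squared Euclidean distances between rows of X, shape (n, n)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative round-off
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])  # (num_records=3, p=2)
K = rbf_kernel(X)
print(K.shape)  # (3, 3)
```

The resulting (n, n) matrix is what the GP prior over the n observations uses, regardless of p.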