Thomas Wiecki
@twiecki
I've seen that a bunch before
Junpeng Lao
@junpenglao
agree with setting adagrad_window as the default. At least it matches the prior performance.
Maxim Kochurov
@ferrine
The authors also mention that it works there when the distributions are not very different
Maxim Kochurov
@ferrine
There was something wrong with my Gitter. The ^ message is about the cost-part gradient scale. One previous message did not appear: I said that this coefficient is for fine-tuning purposes and should be zero near the optimum.
Peter St. John
@pstjohn
Does pm.generator work with masked array values?
mhashemi0873
@mhashemi0873
Hello, I have a conceptual question! Using ADVI, I fit some time-series data in order to estimate the model parameter values. Here is my problem: for different runs with the same ELBO value, I obtain different posterior distributions for the parameters. Does this happen because I use ADVI? If I use NUTS, should this not happen, or are there other reasons, such as correlation between parameters?
Maxim Kochurov
@ferrine
Generators and minibatches can't work with masked arrays.
Different ADVI solutions can happen because of local optima. Did you try different optimization setups?
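For reference, a minimal sketch of what trying a different optimization setup could look like with the PyMC3 variational API of this era; the optimizer choice, iteration count, and learning rate are illustrative assumptions, not recommendations:

 import pymc3 as pm
 from pymc3.variational.updates import adagrad_window

 with model:  # `model` is assumed to be your pm.Model
     # Default optimization setup
     approx_a = pm.fit(n=30000, method='advi')
     # Same model, explicit optimizer with a smaller learning rate
     approx_b = pm.fit(n=30000, method='advi',
                       obj_optimizer=adagrad_window(learning_rate=0.005))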
mhashemi0873
@mhashemi0873
@ferrine Thanks for the hint. Yes, it sounds like there are many local optima. But for different optima I should obtain different ELBO values, no? My problem is that with the same ELBO value, I obtain different parameter values.
Maxim Kochurov
@ferrine
Without having a look at the model it is difficult to say what else can go wrong.
I had some experience with time series. The initial testval matters.
But there was also the case where you have some unobserved values in time.
Florf
@omenrust_twitter
Hi, I'm new to PyMC3 and I was wondering if someone could help me figure out why it's running so slowly, even before any sampling has started. My model has ~4000 groups (each with its own observations and parameters) and I'm trying to model the groups as a mixture of 2 populations. I have a loop where I create lists with the individual parameters and likelihoods for all the groups, but even this construction is very slow.
Maxim Kochurov
@ferrine
hi
I think vectorization can help
Florf
@omenrust_twitter
This is a snippet of the code if that helps: https://pastebin.com/D6e7VL73
Maxim Kochurov
@ferrine
You can create grouped priors in vector notation
Florf
@omenrust_twitter
So instead of looping over each of the ~4000 groups, I can create priors with shape=~4000, you mean?
Maxim Kochurov
@ferrine
yes, and pass appropriate hyperparams to them
logp is computed elementwise, so hyperparameters should match one-to-one in rows (shape[0])
then for the likelihood you can do the same thing as in the notebook: radon_est = a[county_idx] + b[county_idx] * data.floor.values
concatenate all data and remember the group id
Maxim Kochurov
@ferrine
I see you don't use hyperparams that depend on the group, so this case is implemented exactly like the hierarchical GLM.
 Z = pm.Categorical('Z', tt.stack([1.0 - w1, w1]), shape=n_groups)
 MU = pm.Normal('mu', H1_mu[Z], H1_precision[Z], shape=n_groups)
 ALPHA = pm.Gamma('alpha', alpha=H1_alpha[Z], beta=H1_beta[Z], shape=n_groups)
the loop can be rewritten in this way, as in the sketch below
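For context, a self-contained sketch of that vectorized rewrite; the hyperparameter values, the group_data list, and the Normal likelihood below are illustrative assumptions standing in for the pastebin snippet:

 import numpy as np
 import pymc3 as pm
 import theano.tensor as tt

 # group_data: hypothetical list of per-group observation arrays
 y = np.concatenate(group_data)
 n_groups = len(group_data)
 group_idx = np.repeat(np.arange(n_groups), [len(g) for g in group_data])

 # Illustrative hyperparameters, one entry per mixture component
 H1_mu = tt.as_tensor_variable([0.0, 5.0])
 H1_precision = tt.as_tensor_variable([1.0, 1.0])
 H1_alpha = tt.as_tensor_variable([2.0, 2.0])
 H1_beta = tt.as_tensor_variable([1.0, 1.0])

 with pm.Model():
     w1 = pm.Beta('w1', alpha=1.0, beta=1.0)
     # One mixture-component label per group instead of a Python loop
     Z = pm.Categorical('Z', p=tt.stack([1.0 - w1, w1]), shape=n_groups)
     MU = pm.Normal('mu', mu=H1_mu[Z], tau=H1_precision[Z], shape=n_groups)
     ALPHA = pm.Gamma('alpha', alpha=H1_alpha[Z], beta=H1_beta[Z], shape=n_groups)
     # Index the per-group parameters by group id, as in the radon notebook
     obs = pm.Normal('obs', mu=MU[group_idx], tau=ALPHA[group_idx], observed=y)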
Florf
@omenrust_twitter
Thanks! Not ignoring you, by the way; I was reading the notebook :) it helps a lot. I think I can get it from here.
Also, is there a way to specify starting values? For PyMC2 I think it was value=xxx, but that doesn't seem to work for v3.
Junpeng Lao
@junpenglao
you can provide a starting value by adding testval=.5 etc., for example MU = pm.Normal('mu', H1_mu[Z], H1_precision[Z], shape=n_groups, testval=np.ones(n_groups))
Florf
@omenrust_twitter
thanks!
Nikos Koudounas
@aplamhden
Hello, I would like to know if sample_ppc() works with the new ADVI interface?
Maxim Kochurov
@ferrine
Yes, you should create a variational trace with pm.sample_approx
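In outline, the workflow looks roughly like this (assuming the then-current names pm.fit, approx.sample / pm.sample_approx, and pm.sample_ppc, with a hypothetical basic_model):

 with basic_model:
     approx = pm.fit(n=10000, method='advi')  # new variational interface
 trace = approx.sample(500)                   # or: pm.sample_approx(approx, draws=500)
 ppc = pm.sample_ppc(trace, model=basic_model, samples=500)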
Nikos Koudounas
@aplamhden
I create my trace with trace = approx.sample(500) and then ppc = pm.sample_ppc(trace, model=basic_model, samples=500, progressbar=False), and then I get an error: TypeError: object of type 'TensorVariable' has no len()
Maxim Kochurov
@ferrine
What is full traceback?
Nikos Koudounas
@aplamhden
what do you mean by full traceback?
Maxim Kochurov
@ferrine
just a copy-paste of the full error
with the function calls
so I can see where this error is
Nikos Koudounas
@aplamhden

Yeah, I saw the term "Traceback" and I got what you mean. Here is the error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-51-67616625ebc1> in <module>()
      1 ann_input.set_value(X_test)
      2 ann_output.set_value(Y_test)
----> 3 ppc = pm.sample_ppc(trace, model=basic_model, samples=500, progressbar=False)
      4
      5 # Use probability of > 0.5 to assume prediction of class 1

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\sampling.py in sample_ppc(trace, samples, model, vars, size, random_seed, progressbar)
    526     for var in vars:
    527         ppc[var.name].append(var.distribution.random(point=param,
--> 528                                                      size=size))
    529
    530     return {k: np.asarray(v) for k, v in ppc.items()}

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\continuous.py in random(self, point, size, repeat)
    219     def random(self, point=None, size=None, repeat=None):
    220         mu, tau, _ = draw_values([self.mu, self.tau, self.sd],
--> 221                                  point=point)
    222         return generate_samples(stats.norm.rvs, loc=mu, scale=tau**-0.5,
    223                                 dist_shape=self.shape,

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in draw_values(params, point)
    183     if not isinstance(node, (tt.sharedvar.TensorSharedVariable,
    184                              tt.TensorConstant)):
--> 185         givens[name] = (node, draw_value(node, point=point))
    186     values = [None for _ in params]
    187     for i, param in enumerate(params):

C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in draw_value(param, point, givens)
    251     except:
    252         shape = param.shape
--> 253     if len(shape) == 0 and len(value) == 1:
    254         value = value[0]
    255     return value

TypeError: object of type 'TensorVariable' has no len()

Maxim Kochurov
@ferrine
Seems that it's not an ADVI failure.
Something is wrong in draw_values.
Can you reproduce the error with a non-ADVI trace?
Nikos Koudounas
@aplamhden
I had the same error while I was trying to run the code from the original ipynb https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/bayesian_neural_network_opvi-advi.ipynb Yes, I had the same error with NUTS and Metropolis.
Maxim Kochurov
@ferrine
Try passing include_transformed=True to approx.sample
I remember some changes there
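That would be a one-line change to the trace creation, roughly:

 trace = approx.sample(500, include_transformed=True)
 ppc = pm.sample_ppc(trace, model=basic_model, samples=500, progressbar=False)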
Nikos Koudounas
@aplamhden
The same error again.
Maxim Kochurov
@ferrine
So opening an issue is the only way now.
We'll try to solve the problem before the release.
If it is possible, could you please provide a minimal failing example?
Nikos Koudounas
@aplamhden
Sure, I am doing it atm.