
Python/PyMC3 versions of the programs described in *Doing Bayesian Data Analysis* by John K. Kruschke

This is for cases where you can't sample directly from the joint distribution

How does the software implement sampling from a Gaussian distribution?

that goes through it

In chapter 11

This book gives a simple method:

first generate a random number from a uniform distribution over the interval (0, 1)

and then use an equation to calculate the result, so that the result follows a Gaussian distribution
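For what it's worth, the uniform-to-Gaussian recipe described above sounds like it could be the Box-Muller transform (I'm guessing at which equation the book means, and this is just to illustrate the idea, not how PyMC3 actually samples):

```python
import math
import random

random.seed(42)  # for reproducibility

def box_muller():
    """Draw one standard normal sample from two uniform(0, 1) draws.

    This is the Box-Muller transform, one common 'equation' that
    turns uniform random numbers into Gaussian ones.
    """
    u1 = 1.0 - random.random()  # shift to (0, 1] so log(u1) is defined
    u2 = random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

# Sanity check: many draws should have mean near 0 and variance near 1
samples = [box_muller() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"sample mean: {mean:.3f}")
```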

Does the software implement it in this way?

we don't implement that

that describes it all

and also what we implement and how

Ok thanks

I read another article "Bayesian Deep Learning"

Does Bayesian deep learning just mean using variational inference to approximate the posterior distribution of the weights in the neural network?

But how can I know this method is better than the traditional deep learning training method, like using SGD to find the best weights?
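One way to see the difference the question is getting at: SGD-style training returns a single best weight, while the Bayesian approach returns a distribution over weights that also quantifies uncertainty. A toy sketch with a one-weight linear model, where the posterior happens to be Gaussian in closed form so no variational inference is needed (all names and numbers here are illustrative):

```python
import random

# Toy data: y = 2*x + noise, noise std sigma = 1 (assumed known)
random.seed(0)
sigma = 1.0
xs = [random.uniform(-1, 1) for _ in range(20)]
ys = [2.0 * x + random.gauss(0, sigma) for x in xs]

# "Traditional" training: one best weight (the least-squares solution,
# which is what SGD on squared error would converge to)
w_point = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Bayesian view: a full posterior over the weight.  With a Gaussian
# prior w ~ N(0, tau^2) and Gaussian noise, the posterior is Gaussian
# in closed form.
tau = 10.0
post_prec = 1.0 / tau**2 + sum(x * x for x in xs) / sigma**2
post_mean = (sum(x * y for x, y in zip(xs, ys)) / sigma**2) / post_prec
post_std = post_prec ** -0.5

print(f"point estimate: {w_point:.2f}")
print(f"posterior:      N({post_mean:.2f}, {post_std:.2f}^2)")
```

The point estimate and the posterior mean nearly coincide here; what the Bayesian version adds is the posterior standard deviation, i.e. how confident you should be in that weight. For a real neural network the posterior has no closed form, which is where variational inference comes in.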

Your blog is excellent!

thanks

@twiecki Has anybody tried porting the programs for the second edition?

@aloctavodia ^^^

@sushmit-staples you may want to check https://github.com/JWarmenhoven/DBDA-python. I understand it is not complete, but I think it is a good place to start.

I was thinking of porting the second edition exercises to Python

I have a naive question about models: in every tutorial I've read, they study a simple problem where they can graph the data in 2D or 3D and then guess a model to try

But what if there are too many dimensions and you can't graph anything to visualize the data?

Also, I was wondering if the process of finding a model from data can be automated

@fbparis More than naive, I think your question is central. I guess the general approach is to try to think about how the data was generated; visualizing data is very helpful for this, but it is not the only way. Background knowledge can also provide you with the "intuition" about how parameters could be related, and you should try to express those relationships in the models. Other elements you can add to the mix are "defaults", like choosing weakly informative priors: things that generally work, at least as a first guess. Another point is that model building is iterative, so you do not have to get it right on the first guess; on the contrary, that's generally never the case (unless you have been modeling the same type of data over and over again). You can start with very simple models and add complexity one step at a time, then check whether that complexity pays off or does not add anything useful. Not sure this is helpful (you can try posting your question at https://discourse.pymc.io/).

About your second question, I guess the closest thing to that is non-parametric models; check this intro to Gaussian Processes http://docs.pymc.io/gp.html to get a better feel for what I am talking about.
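To give a feel for what "non-parametric" means here, a minimal numpy sketch of drawing random functions from a GP prior: instead of guessing a parametric form (line, quadratic, ...), you put a prior directly over functions. This is nothing PyMC3-specific, just the idea behind the linked intro, with an illustrative squared-exponential kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Evaluate the GP on a grid of input points
x = np.linspace(0, 10, 100)

def rbf_kernel(a, b, lengthscale=1.0):
    """Squared-exponential covariance between two sets of points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

K = rbf_kernel(x, x) + 1e-6 * np.eye(len(x))  # jitter for stability

# Each draw from this multivariate normal is a whole random function;
# the kernel (not a formula you chose) controls its shape and smoothness
f = rng.multivariate_normal(np.zeros(len(x)), K, size=3)
print(f.shape)  # (3, 100): three function draws over 100 grid points
```

Conditioning those function draws on observed data is what turns this prior into GP regression, which is why it can serve as a semi-automatic model when you can't visualize the data to guess a form.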

One last thing: should pymc3 be chosen over pymc in all cases? And what about pgmpy, which looks like a good library too?