    chen wei
    @auroua
    I am studying the Gibbs sampling method recently; I can't understand how to sample from p(x1 | x2, x3)
    Thomas Wiecki
    @twiecki
    do you have a specific example?
    chen wei
    @auroua
    Is sampling from this conditional distribution easier than directly sampling from the joint distribution?
    Thomas Wiecki
    @twiecki
    yes, exactly
    this is for cases where you can't sample directly from the joint distribution
    chen wei
    @auroua
    if the dimensionality is very high
    Thomas Wiecki
    @twiecki
    there are also many very simple models where you can't do it directly
    chen wei
    @auroua
    I saw an example about a two-dimensional Gaussian distribution
    How does the software implement sampling from a Gaussian distribution?
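    (For context, a minimal sketch, not from the chat, of the kind of two-dimensional Gaussian Gibbs example mentioned above; the correlation value and sample count are made up. Each full conditional of a bivariate Gaussian is itself a one-dimensional Gaussian, which is easy to sample directly.)

```python
# Gibbs sampling a standard bivariate Gaussian with correlation rho:
# alternately draw each coordinate from its conditional given the other.
import numpy as np

rho = 0.8                        # assumed correlation (hypothetical)
n_samples = 5000
samples = np.zeros((n_samples, 2))
x1 = x2 = 0.0
for i in range(n_samples):
    # p(x1 | x2) = N(rho * x2, 1 - rho**2)
    x1 = np.random.normal(rho * x2, np.sqrt(1 - rho**2))
    # p(x2 | x1) = N(rho * x1, 1 - rho**2)
    x2 = np.random.normal(rho * x1, np.sqrt(1 - rho**2))
    samples[i] = x1, x2
```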
    Thomas Wiecki
    @twiecki
    there's a great book called "Doing Bayesian Data Analysis"
    that goes through it
    chen wei
    @auroua
    I am reading Pattern Recognition and Machine Learning
    In chapter 11
    this book gives a simple method:
    first generate a random number from a uniform distribution over the interval (0, 1)
    and then use an equation to calculate the result, and the result follows a Gaussian distribution
    Does the software implement it this way?
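    (A minimal sketch of the uniform-to-Gaussian transform described above, assuming the book's "equation" is the Gaussian inverse CDF; PRML chapter 11 also covers the Box-Muller variant. Libraries typically use faster schemes in practice, so this is an illustration, not how any particular package does it.)

```python
# Inverse-CDF (transform) sampling: pushing Uniform(0, 1) draws through
# the Gaussian inverse CDF yields standard normal samples.
import numpy as np
from scipy.stats import norm

u = np.random.uniform(0, 1, size=10000)  # uniform draws on (0, 1)
x = norm.ppf(u)                          # the "equation": inverse CDF
```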
    Thomas Wiecki
    @twiecki
    that sounds more like importance sampling
    we don't implement that
    chen wei
    @auroua
    It's not importance sampling
    that describes it all
    and also what we implement and how
    chen wei
    @auroua
    importance sampling calculates expectations
    Ok thanks
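    (Since importance sampling comes up here, a minimal sketch, not from the chat, of how it estimates an expectation: draw from an easy proposal q and reweight by p/q. The densities and test function below are made up.)

```python
# Importance sampling estimate of E_p[f(x)] using a wider proposal q.
import numpy as np
from scipy import stats

p = stats.norm(0, 1)            # target density (hypothetical)
q = stats.norm(0, 2)            # proposal we can sample from
f = lambda x: x ** 2            # function whose expectation we want

xs = q.rvs(size=100000)
w = p.pdf(xs) / q.pdf(xs)       # importance weights
estimate = np.mean(w * f(xs))   # should be close to E_p[x^2] = 1
```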
    chen wei
    @auroua
    @twiecki hello
    I read another article "Bayesian Deep Learning"
    Does Bayesian deep learning just mean using variational inference methods to approximate the posterior distribution of the weights in the neural network?
    But how could I know this method is better than the traditional deep learning training method, like using SGD to find the best weights?
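    (A minimal sketch, not from the chat, of one thing "Bayesian deep learning" can mean in PyMC3: priors on the network weights plus a variational (ADVI) approximation of their posterior, instead of SGD point estimates. The toy data, layer size, and iteration counts are assumptions.)

```python
# A tiny Bayesian neural network: weight priors + ADVI posterior approximation.
import numpy as np
import pymc3 as pm
import theano.tensor as tt

X = np.random.randn(100, 2)                 # toy inputs (hypothetical)
y = (X[:, 0] * X[:, 1] > 0).astype(int)     # toy binary labels

n_hidden = 5
with pm.Model():
    # priors over the weights replace SGD's single point estimate
    w_in = pm.Normal('w_in', 0, sd=1, shape=(2, n_hidden))
    w_out = pm.Normal('w_out', 0, sd=1, shape=n_hidden)
    act = tt.tanh(tt.dot(X, w_in))
    p = pm.math.sigmoid(tt.dot(act, w_out))
    pm.Bernoulli('obs', p=p, observed=y)
    approx = pm.fit(n=30000, method='advi')  # variational inference
    trace = approx.sample(1000)              # draws from the approx. posterior
```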
    Thomas Wiecki
    @twiecki
    we don't know that yet
    chen wei
    @auroua
    @twiecki thanks
    your blog is excellent!
    thanks
    sushmit-staples
    @sushmit-staples
    @twiecki Has anybody tried porting the programs from the second edition?
    @aloctavodia ^^^
    Osvaldo Martin
    @aloctavodia
    @sushmit-staples you may want to check https://github.com/JWarmenhoven/DBDA-python. I understand it is not complete, but I think it is a good place to start.
    sushmit-staples
    @sushmit-staples
    @aloctavodia Thanks so much
    I was thinking of porting the second-edition exercises to Python
    Thomas Wiecki
    @twiecki
    @sushmit-staples I'm sure @Jwarmenhoven would appreciate the help
    Osvaldo Martin
    @aloctavodia
    Yeah! It seems he is the only one contributing code :-(
    sushmit-staples
    @sushmit-staples
    I will try to add material to it; if I get stuck with anything, I will use this forum for help :)
    fbparis
    @fbparis
    Hello all :)
    I have a naive question about models: in every tutorial I’ve read, they study a simple problem where they can graph the data in 2D or 3D and then guess a model to try
    But what if there are too many dimensions and you can’t graph anything to visualize the data?
    also I was wondering if the process of finding a model from data can be automated
    Osvaldo Martin
    @aloctavodia

    @fbparis More than naive, I think your question is central. I guess the general approach is trying to think about how the data was generated; visualizing data is very helpful for this, but it is not the only way to do it. Background knowledge can also provide you with "intuition" about how parameters could be related, and you should try to express those relationships in the model. Another element you can add to the mix is "defaults", like choosing weakly informative priors: things that generally work, at least as a first guess. Another point is that model building is iterative, so you do not have to guess right on the first try; on the contrary, that's generally never the case (unless you have been modeling the same type of data over and over again). You can start with very simple models, add complexity one step at a time, and then check whether that complexity pays off or does not add anything useful. Not sure this is helpful (you can try posting your question at https://discourse.pymc.io/).

    About your second question, I guess the closest thing to that is non-parametric models; check this intro to Gaussian processes http://docs.pymc.io/gp.html to get a better feeling for what I am talking about.
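    (To make the non-parametric suggestion concrete, a minimal sketch of Gaussian-process regression in PyMC3 along the lines of the linked intro; the toy data and priors are assumptions, not a recipe.)

```python
# GP regression: let a covariance function, not a fixed formula, shape the fit.
import numpy as np
import pymc3 as pm

X = np.linspace(0, 10, 50)[:, None]          # inputs must be 2-D
y = np.sin(X).ravel() + 0.3 * np.random.randn(50)

with pm.Model():
    ls = pm.Gamma('ls', alpha=2, beta=1)     # lengthscale prior
    eta = pm.HalfNormal('eta', sd=2)         # amplitude prior
    cov = eta ** 2 * pm.gp.cov.ExpQuad(1, ls)
    gp = pm.gp.Marginal(cov_func=cov)
    sigma = pm.HalfNormal('sigma', sd=1)     # observation noise
    gp.marginal_likelihood('y_obs', X=X, y=y, noise=sigma)
    trace = pm.sample(1000)
```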

    fbparis
    @fbparis
    thanks very much @aloctavodia
    one last thing: should PyMC3 be chosen over PyMC in every case? and what about pgmpy, which looks like a good library too?
    Osvaldo Martin
    @aloctavodia
    you are welcome! Yes, you should use PyMC3. It has been a long time since I last checked pgmpy, so I cannot provide a very informed answer, sorry!