

@r_mohan_twitter take a look at the monad-bayes library

@r_mohan_twitter, we wrote a little intro about it here in case you're interested:

https://www.tweag.io/posts/2019-09-20-monad-bayes-1.html

https://www.tweag.io/posts/2019-11-08-monad-bayes-2.html

otherwise these papers give a good overview of what it's doing:

https://dl.acm.org/citation.cfm?id=3236778

http://mlg.eng.cam.ac.uk/pub/pdf/SciGhaGor15.pdf

On probabilistic programming: I'm curious what people with more of a PL / FP background think of "Functional Tensors for Probabilistic Programming" https://arxiv.org/pdf/1910.10775.pdf

(wouldn't recommend this for day-to-day practical modeling yet, this is still pretty research-y for now)

@austinvhuang I actually read it some days ago. I very much agree with treating quantities as terms of some abstract syntax rather than constants. This is similar to how "push arrays" are implemented, and in fact deferring computation as long as possible to perform symbolic simplifications is a very good idea
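The "defer, then simplify" idea can be sketched in a few lines of plain Haskell (a toy expression AST of my own for illustration, not push arrays or any particular library): computations stay symbolic, algebraic rewrites run first, and numeric evaluation happens last.

```haskell
-- Toy sketch (not from any library) of deferring computation:
-- expressions stay symbolic so algebraic rewrites can run before
-- any numeric evaluation.
data Expr
  = Lit Double
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show, Eq)

-- Symbolic simplification: zero/one identity rules, applied bottom-up.
simplify :: Expr -> Expr
simplify (Add a b) = case (simplify a, simplify b) of
  (Lit 0, e) -> e
  (e, Lit 0) -> e
  (a', b')   -> Add a' b'
simplify (Mul a b) = case (simplify a, simplify b) of
  (Lit 0, _) -> Lit 0
  (_, Lit 0) -> Lit 0
  (Lit 1, e) -> e
  (e, Lit 1) -> e
  (a', b')   -> Mul a' b'
simplify e = e

-- Evaluation only happens at the end, given an environment for variables.
eval :: (String -> Double) -> Expr -> Double
eval _   (Lit x)   = x
eval env (Var v)   = env v
eval env (Add a b) = eval env a + eval env b
eval env (Mul a b) = eval env a * eval env b
```

e.g. `simplify (Mul (Lit 1) (Add (Var "x") (Lit 0)))` reduces to `Var "x"` before any number is touched.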

btw are you going to PROBPROG in April?

feedback would be nice

Hi, it's been a while (studies and stuff). I'm still motivated though! I'm currently looking at http://www.datahaskell.org/, so I have a few questions:

- who maintains it?
- is it still updated?

And a more general question: is there a roadmap, a "place" where the people here would like to take DataHaskell?

Sorry if these topics have been discussed many times already, but I believe the answers can become clearer and change as time goes by.

as for the roadmap... everyone has different ideas about what it should look like.

what advantages does monad-bayes have over a more traditional probabilistic programming library such as, say, PyMC3?

I am just an occasional user, but I find monad-bayes quite comfortable to use. Here are three things that come to mind, compared to PyMC3:

- it integrates with standard Haskell syntax: you can sample from standard datatypes and functions, and use do notation to combine those operations. With PyMC3 you have to deal with Theano tensors etc.
- Haskell syntax makes the code really concise. It looks almost like the standard math notation you would use in an article.
- monad-bayes provides an abstraction over different inference representations, and you can build new ones out of these basic building blocks. For example, MCMC requires a different representation of a probability distribution (in terms of the accumulated log-likelihood of the samples) than sequential Monte Carlo, inverse sampling (the cumulative distribution function), or a particle filter. Check out Table 1 in https://pdfs.semanticscholar.org/76ad/0090bf4a076391fe2cc6d6029f79ebc66308.pdf . AFAIK in PyMC3 you basically choose a configurable out-of-the-box sampler and then run it.
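To make the do-notation point concrete without pulling in monad-bayes itself, here is a self-contained toy sketch (all names here are made up for illustration, this is not the monad-bayes API): a pure sampler monad threads a PRNG seed, and an exponential distribution is built by inverse-transform sampling, i.e. by pushing a uniform draw through the inverse CDF.

```haskell
-- Toy sampler monad: a state transformer over a pure PRNG seed.
-- Not monad-bayes; just illustrates "distributions as monadic actions".
newtype Sampler a = Sampler { runSampler :: Int -> (a, Int) }

instance Functor Sampler where
  fmap f (Sampler g) = Sampler $ \s -> let (a, s') = g s in (f a, s')

instance Applicative Sampler where
  pure a = Sampler $ \s -> (a, s)
  Sampler f <*> Sampler g =
    Sampler $ \s -> let (h, s')  = f s
                        (a, s'') = g s'
                    in (h a, s'')

instance Monad Sampler where
  Sampler g >>= k =
    Sampler $ \s -> let (a, s') = g s in runSampler (k a) s'

-- Uniform in [0,1) from a tiny linear congruential step (toy quality).
uniform :: Sampler Double
uniform = Sampler $ \s ->
  let s' = (1103515245 * s + 12345) `mod` 2147483648
  in (fromIntegral s' / 2147483648, s')

-- Inverse-transform sampling for Exp(rate): apply the inverse CDF
-- to a uniform draw. This is the "CDF representation" mentioned above.
exponential :: Double -> Sampler Double
exponential rate = do
  u <- uniform
  pure (negate (log (1 - u)) / rate)

-- A "model" is then just ordinary monadic code, composed with do notation.
model :: Sampler Double
model = do
  x <- exponential 1.0
  y <- exponential 2.0
  pure (x + y)
```

The point is structural: `model` reads like a generative story, and swapping in a different interpretation of `Sampler` (accumulated log-likelihood, particles, ...) is what a library like monad-bayes systematizes.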

@gregnwosu yes exactly - afterwards you might have to adapt your sampler a bit. E.g. with Hamiltonian Monte Carlo, you cannot easily sample from discrete distributions (atm).

This seems like a crowd that could potentially benefit from information in this blog post: https://www.reddit.com/r/haskell/comments/edr9n4/random_benchmarks/

is anyone else experiencing long compilation times with Frames?

Hi!

I'm proposing a new lens-based API for statistics: bos/statistics#162 What do you think about it?

TL;DR example of use: `meanOf (each . filtered (>0) . to log)` will compute the mean of the logarithm of every positive number.
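For intuition, here is a rough lens-free sketch of what that example computes (`meanOfLog` is my own name, not part of the proposal; the real API composes `meanOf` with lens optics instead of a list pipeline):

```haskell
import Data.List (foldl')

-- Hypothetical spelled-out equivalent of
--   meanOf (each . filtered (>0) . to log)
-- over a plain list: keep positives, take logs, average in one fold.
meanOfLog :: [Double] -> Double
meanOfLog xs =
  let step (acc, n) v  = (acc + v, n + 1)
      (total, count)   = foldl' step (0, 0 :: Int) [log x | x <- xs, x > 0]
  in total / fromIntegral count
```

e.g. `meanOfLog [1, exp 1, -5]` drops `-5`, averages `log 1` and `log (exp 1)`, and gives 0.5.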