this is hard in a downstream library - but it should never happen there in the first place.

and you can do a type-based approach with rewrite rules as well ... they only fire if the types match up, so you can encode things in e.g. symbol types and use those in rewrite rules.
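
For instance, a toy version of that type-directed trick (hypothetical names; rules only fire with optimization enabled, e.g. `ghc -O`):

```haskell
-- A sketch of a type-directed rewrite rule (hypothetical names):
-- the rule only fires when `scale` is used at type Double, because
-- the right-hand side pins the type via `scaleDouble`.
scale :: Num a => a -> a
scale = (* 2)
{-# NOINLINE scale #-}   -- keep `scale` visible to the rule matcher

scaleDouble :: Double -> Double
scaleDouble = (* 2)      -- the "fast path" we rewrite to

{-# RULES "scale/Double" scale = scaleDouble #-}

main :: IO ()
main = print (scale (21 :: Double))  -- 42.0 whether or not the rule fired
```

The result is the same either way; the rule only changes which implementation gets called, which is what makes this kind of rewriting safe.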

but i think that this will just complicate things without a real gain.

everything else goes into some kind of EDSL territory with other problems (e.g. FFI or similar at the borders)

that would yield something like numpy, where you move things explicitly into numpy land and can then use BLAS/LAPACK/... for calculations - but for printing or using non-numpy functions you have to move things back again..

Anyone working in the Haskell+numbers intersection, consider submitting to FHPNC 2020! https://icfp20.sigplan.org/home/FHPNC-2020#Call-for-Papers

Note that FHPNC also accepts extended abstracts, so if you just want feedback/conversation, you can do that and still submit a full paper somewhere else later.

@mkrzywda Data science is a pretty broad field and there isn't one place to go for it in Haskell. I'm sure there are many resources folks here can recommend. I, as the author of `massiv`, can suggest learning how to manipulate arrays. If you'd like to start with `massiv`, check out the documentation and some video tutorials.
Here is also a list of libraries that will let you tackle other specific tasks: http://www.datahaskell.org/docs//community/current-environment.html
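
For a first taste, array manipulation with `massiv` looks roughly like this (a minimal sketch with made-up values):

```haskell
import Data.Massiv.Array as A

main :: IO ()
main = do
  -- a 1-D unboxed array holding 0..9, computed sequentially
  let arr = makeArrayR U Seq (Sz1 10) fromIntegral :: Array U Ix1 Double
  -- A.map yields a delayed array, so the doubling fuses with the sum
  print (A.sum (A.map (* 2) arr))  -- 90.0
```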

@/all new release of `xeno` with a bunch of speed improvements and new benchmarks, thanks to @mgajda and Dmitry Krylov: https://hackage.haskell.org/package/xeno
Hi guys, I just released http://hackage.haskell.org/package/cas-hashable-1.0.1 and http://hackage.haskell.org/package/cas-store-1.0.1 on Hackage

they were previously part of funflow

I split them into their own packages so it's easier for a package to define cached functions and hashable types to support them

it's not using the regular Hashable class since some data may need IO to be hashed (for instance, an ExternallyAssuredFile is just a wrapper around a file path, which gets hashed based on its path, modification date etc., not its content)
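
The idea can be sketched like this (hypothetical class and instance - this is not the actual cas-hashable API):

```haskell
import Data.Hashable (hash)                   -- from the hashable package
import System.Directory (getModificationTime)

-- Hypothetical: a hashing class that lives in IO, unlike Hashable,
-- so instances are free to consult the outside world.
class ContentHashable a where
  contentHash :: a -> IO Int

-- A wrapper hashed by path and modification time, not file contents.
newtype ExternallyAssuredFile = ExternallyAssuredFile FilePath

instance ContentHashable ExternallyAssuredFile where
  contentHash (ExternallyAssuredFile fp) = do
    mtime <- getModificationTime fp
    pure (hash (fp, show mtime))
```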

Or even streaming in general - the mmap trick didn't stop it from allocating the whole file

In principle, adding streaming and parallel parsing of large files together is possible, but most of our customers just have databases of many smaller files AFAIK.

Hi, I'm running NixOS and am currently struggling to get `grenade` to run. I have added `liblapack` and `blas` to my shell.nix. Building grenade fails with

```
Preprocessing test suite 'test' for grenade-0.1.0..
Building test suite 'test' for grenade-0.1.0..
[ 1 of 15] Compiling Test.Grenade.Layers.Internal.Reference ( test/Test/Grenade/Layers/Internal/Reference.hs, /home/rsoeldner/work/grenade/dist-newstyle/build/x86_64-linux/ghc-8.6.5/grenade-0.1.0/t/test/build/test/test-tmp/Test/Grenade/Layers/Internal/Reference.o ) [Numeric.LinearAlgebra changed]
[ 2 of 15] Compiling Test.Grenade.Recurrent.Layers.LSTM.Reference ( test/Test/Grenade/Recurrent/Layers/LSTM/Reference.hs, /home/rsoeldner/work/grenade/dist-newstyle/build/x86_64-linux/ghc-8.6.5/grenade-0.1.0/t/test/build/test/test-tmp/Test/Grenade/Recurrent/Layers/LSTM/Reference.o ) [Numeric.LinearAlgebra changed]
[ 4 of 15] Compiling Test.Grenade.Layers.Pooling ( test/Test/Grenade/Layers/Pooling.hs, /home/rsoeldner/work/grenade/dist-newstyle/build/x86_64-linux/ghc-8.6.5/grenade-0.1.0/t/test/build/test/test-tmp/Test/Grenade/Layers/Pooling.o )
<command line>: can't load .so/.DLL for: /home/rsoeldner/work/grenade/dist-newstyle/build/x86_64-linux/ghc-8.6.5/hmatrix-0.20.0.0/build/libHShmatrix-0.20.0.0-inplace-ghc8.6.5.so (/home/rsoeldner/work/grenade/dist-newstyle/build/x86_64-linux/ghc-8.6.5/hmatrix-0.20.0.0/build/libHShmatrix-0.20.0.0-inplace-ghc8.6.5.so: undefined symbol: zherk_)
```

does anyone have an idea how to overcome this issue?
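
`zherk_` is a LAPACK symbol, so this usually means hmatrix got linked without LAPACK actually available at link time. A hedged guess at a fix (attribute names vary between nixpkgs versions) is to make sure both libraries end up in the shell's buildInputs:

```nix
# shell.nix sketch - attribute names may differ in your nixpkgs
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = with pkgs; [
    blas
    lapack      # provides zherk_ and the other LAPACK symbols
    # alternatively: openblas, which bundles BLAS and LAPACK in one library
  ];
}
```

After changing shell.nix it may also be necessary to rebuild hmatrix from scratch (e.g. delete dist-newstyle) so it picks up the new link flags.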

I just basically want to group by the first item in the tuple list
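
One simple way to do that (a sketch using `Data.Map` from containers):

```haskell
import qualified Data.Map.Strict as M

-- Group a list of pairs by the first component, keeping values in
-- their original order under each key.
groupByFst :: Ord k => [(k, v)] -> M.Map k [v]
groupByFst = M.fromListWith (flip (++)) . map (\(k, v) -> (k, [v]))

main :: IO ()
main = print (groupByFst [(1, 'a'), (2, 'b'), (1, 'c')])
-- fromList [(1,"ac"),(2,"b")]
```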

some time ago I started tackling this via Generics: https://github.com/ocramz/heidi/blob/master/src/Data/Generics/Encode/Internal.hs#L139 . Tuples and records are product types with the same generic representation

however, for relational operations such as JOIN we need some sort of indexing across both rows and columns

I haven't published `heidi` on Hackage because I'm not happy with its ergonomics yet
@JonathanReeve what problems do you find with https://hackage.haskell.org/package/statistics-0.15.2.0/docs/Statistics-Sample-Histogram.html#v:histogram ?

I'm working on getting some operations to fuse in `DLA`, but I'm running into some rewrite-rule issues.

More exactly, I have this:

```
multiplyFusedCol m1 m2 = U.generate r1 (\i -> U.generate (r1 * c2) go `U.unsafeIndex` (0 + i * c2))
  where
    r1 = M.rows m1
    c2 = M.cols m2
    go t = U.sum $ U.zipWith (*) (M.row m1 i) (M.column m2 j)
      where (i, j) = t `quotRem` c2

multiplyFusedCol2 m1 m2 = column (M.Matrix r1 c2 $ U.generate (r1 * c2) go) 0
  where
    r1 = M.rows m1
    c2 = M.cols m2
    go t = U.sum $ U.zipWith (*) (row m1 i) (M.column m2 j)
      where (i, j) = t `quotRem` c2

column :: M.Matrix -> Int -> Vector Double
column m j = U.generate r (\i -> v `U.unsafeIndex` (j + i * c))
  where
    r = M.rows m
    c = M.cols m
    v = M._vector m
{-# INLINE [1] column #-}

{-# RULES
"col/fuse" forall c v r j.
    column (M.Matrix r c v) j = U.generate r (\i -> v `U.unsafeIndex` (j + i * c))
  #-}
```

(the idea is that both functions compute the 0th column of M3 = M1 * M2). `multiplyFusedCol` performs great, but `multiplyFusedCol2` doesn't, despite the rule firing. I followed the same procedure for the `row` operation and it worked great; for some reason it only fails for the `column` operation :/

Hello! I am a student planning to do a course project related to DataHaskell (in case I find something interesting and reasonable for my skill set). One problem statement I found in the paper "Functional programming for modular Bayesian inference" (https://dl.acm.org/doi/10.1145/3236778) is to introduce "gradient-based techniques" to the library `monad-bayes`. However, I'm a bit clueless in this area; I've only done a basic stats course and stochastic processes, so I'm not very familiar with Monte Carlo methods. I'm wondering if anyone has hints on what I would need to read up on to investigate this further? I found a paper which seems to have done this, but in Scala: https://arxiv.org/pdf/1908.02062.pdf . I'm also open to other ideas that might be useful for the DataHaskell project.
@funrep I haven't used `monad-bayes` so I'm not sure how feasible it is, but you might consider adding a `hasktorch` backend (i.e., instances for `hasktorch` `Tensor`s) to the `monad-bayes` API. You can look at what the pytorch ecosystem has done in terms of implementations (pyro, botorch, gpytorch).
(The benefit would be that you would then get highly-scalable computations with automatic differentiation.)

@o1lo01ol1o thanks for the idea! I emailed the author of the paper and he also recommended looking into hasktorch. Since it's a school project, I think my first step is to investigate autodiff, and perhaps do a basic implementation myself; depending on how it goes, I'll look into making a contribution to monad-bayes from it :-)

Engineering research tends to favor performance, but the ideas, the algorithms are what "lasts"

speaking of autodiff: showing in a simple fashion how to implement reverse-mode AD in terms of ContT could be a nice warm-up problem
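
One possible toy formulation of that warm-up (my own sketch, not an established design): each value carries a backpropagator closure, which is morally the continuation that ContT would thread; a fuller version would use ContT proper and handle shared subexpressions:

```haskell
-- A value paired with its backpropagator: given the adjoint of the
-- output, return the gradient contribution w.r.t. the single input.
data D = D { val :: Double, bp :: Double -> Double }

var :: Double -> D
var x = D x id            -- the input: adjoint passes through unchanged

konst :: Double -> D
konst c = D c (const 0)   -- constants absorb no gradient

addD, mulD :: D -> D -> D
addD (D a f) (D b g) = D (a + b) (\d -> f d + g d)
mulD (D a f) (D b g) = D (a * b) (\d -> f (d * b) + g (d * a))

-- Derivative of a single-input function at a point: seed adjoint 1.
diff :: (D -> D) -> Double -> Double
diff f x = bp (f (var x)) 1

main :: IO ()
main = print (diff (\x -> (x `mulD` x) `addD` x) 3)  -- d/dx (x^2 + x) at 3 = 7.0
```

Sharing is the catch: a variable used twice gets its adjoint contributions summed via the closures here, but a full tape or ContT version is needed to avoid recomputing shared intermediate results.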