Nicholas Wilson
@Marenz you should make alignment an enum, it doesn't change.
Nicholas Wilson
First set of docs for dcompute: libmir/dcompute#37
Mathias L. Baumann
I am trying to port a python example for a neural net with a hidden layer to D2/mir
And.. everything seems to be working, only that it.. doesn't
As far as I can tell, my code is identical
It's a lot to ask to compare those and get feedback on the code, so I understand if you'd rather not ;)
I just thought I try asking here because I am a bit out of ideas currently
Ilya Yaroshenko
Hi Mathias,
I would suggest doing step-by-step verification. It is common practice for scientific code.
After a quick review I found one small bug (but there may be others): the random generator is not initialized.
You may want to use https://github.com/libmir/mir-random as well as https://github.com/libmir/numir to generate random slices. Mir Random has the default constructor disabled for RNGs, so it is safer than Phobos.
Mathias L. Baumann
it actually is, here: https://github.com/Marenz/neural_net_examples/blob/master/lstm/source/app.d#L64 if you mean seed when you say initialized?
I tried to construct it in a way that it matches the random numbers generated in Python
I got surprisingly close
I wanted to see where it starts diverging
but there was no clear point
Ilya Yaroshenko
Ah, ok. BTW keep in mind that gen is a TLS (thread-local) variable
Mathias L. Baumann
I don't plan on more threads yet :)
Francis Nixon
Hello. I'm having trouble compiling dcompute on Debian 9. My dub.json looks like this:
    {
        "name": "compute-messing-around",
        "authors": [
            "Francis Nixon"
        ],
        "description": "A minimal D application.",
        "license": "proprietary",
        "dependencies": {
            "dcompute": "~>0.1.0"
        },
        "dflags": ["-mdcompute-targets=ocl-210,cuda-350", "-oq"]
    }
The error I'm currently getting is:
../../.dub/packages/dcompute-0.1.0/dcompute/source/dcompute/driver/error.d(143,13): Error: undefined identifier `fprintf`
../../.dub/packages/dcompute-0.1.0/dcompute/source/dcompute/driver/ocl/context.d(144,19): Error: undefined identifier `clCreateProgramWithIL`
Nicholas Wilson
That's embarrassing. libmir/dcompute@cee8eb3
I'm not sure why your clCreateProgramWithIL can't be found. Are you using an up-to-date DerelictCL?
Francis Nixon
The clCreateProgramWithIL error went away after manually selecting the most recent version of DerelictCL. I then got an error in the same place as the fprintf error, except for toStringz. Adding an import fixed that, but now I'm getting the following:
../../.dub/packages/dcompute-0.1.0/dcompute/source/dcompute/driver/error.d(139,32): Error: cannot implicitly convert expression `__lambda1` of type `void delegate(Status _status) @system` to `immutable(void delegate(Status) nothrow @nogc)`
Nicholas Wilson
Hmm, that does seem odd, but that's my fault for not testing it properly.
Francis Nixon
If relevant my ldc version is:
LDC - the LLVM D compiler (1.7.0git-958e58c):
  based on DMD v2.077.1 and LLVM 3.8.1
  built with DMD64 D Compiler v2.077.1
  Default target: x86_64-pc-linux-gnu
  Host CPU: broadwell
Nicholas Wilson
Unless you need to use the -betterC version, try without it. That should "work" (note the @BUG@ just above). You will need to set onDriverError yourself because of it, see e.g.
That's not a compiler problem, that's me not testing properly. I really need to set up CI, but given the hardware required I haven't got around to it yet.
Thanks for pointing out the issues.
Francis Nixon
Without -betterC I get:
Invalid bitcast
  %3 = bitcast float addrspace(1)* %res_arg to float*
Invalid bitcast
  %5 = bitcast float addrspace(1)* %x_arg to float*
Invalid bitcast
  %9 = bitcast float addrspace(1)* %y_arg to float*
LLVM ERROR: Broken function found, compilation aborted!
Nicholas Wilson
Hmm, can you try with an LLVM that is 3.9 or greater (e.g. from the LDC release page)?
Also what registered targets does the LDC you are using have (just below the output of ldc2 --version you posted)?
Sebastian Wilzbach
I have two good news:
1) libmir/mir-algorithm#122 - examples on the mir docs will be runnable soon (see http://files.wilzbach.me/dlang/mir-algorithm/mir_ndslice_algorithm.html)
2) https://tour.dlang.org/tour/en/dub/mir - the tour will finally be moving to integrate mir (though writing a good one-page summary might turn out to be challenging)
Ilya Yaroshenko
Some news:
PR "Tarjan graph algorithm" libmir/mir-algorithm#121
Issue "ndslice based API for dopt" henrygouk/dopt#6
Issue "Dcompute based backend for dopt" henrygouk/dopt#6
Ilya Yaroshenko
EDIT: Issue "ndslice based API for dopt" henrygouk/dopt#7
Ilya Yaroshenko
Hi everybody, I am trying to use multivariateNormalVar in mir.random.ndvariable for creating two separated data clusters (as done in Python here: https://beckernick.github.io/logistic-regression-from-scratch/)
It doesn't work unless I make the resulting vector the same size as the sigma matrix
double[100] x1;   // only works if the size is 2
Random* gen = threadLocalPtr!Random;
auto mu = [0.0, 0.0].sliced;
auto sigma = [1.0, 0.75, 0.75, 1].sliced(2,2);
auto rv = multivariateNormalVar(mu, sigma);
rv(gen, x1[]);
How can I use multivariateNormalVar to create data sets bigger than size 2?
Shigeki Karita

How about this? (3-dim random normal x 10)

dependency "lubeck" version="~>0.0.4"
dependency "numir" version="~>0.1.0"
libs "blas"

import mir.ndslice : map, sliced, slicedField, ndarray;
import mir.random : threadLocalPtr, Random;
import mir.random.variable : NormalVariable;
import mir.random.algorithm : field;
import lubeck : mtimes;

import numir : alongDim;
import std.stdio;

void main() {
    Random* gen = threadLocalPtr!Random;
    auto mu = [0.0, 0.0, 0.0].sliced;
    auto sigma = [1.0, 0.75, 0.0,
                  0.75, 1.0, 0.75,
                  0.0, 0.75, 1].sliced(3,3);
    auto xs = field(gen, NormalVariable!double(0, 1)).slicedField(10, 3);
    auto x1 = xs.mtimes(sigma).alongDim!1.map!(x => x + mu).ndarray;
    x1.writeln; // 10 x 3 dim
}
The multivariate normal random value is just an affine transformation of standard normal random values. https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Affine_transformation
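That affine-transformation property can be checked numerically. Below is a small NumPy sketch (my illustration, not code from the thread) using the 2x2 sigma from the question above. One detail worth noting: to get covariance exactly sigma, the standard normal draws are multiplied by a Cholesky factor L of sigma (with L @ L.T == sigma), not by sigma itself.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.75],
                  [0.75, 1.0]])

L = np.linalg.cholesky(sigma)          # lower-triangular factor, L @ L.T == sigma
z = rng.standard_normal((200_000, 2))  # z ~ N(0, I)
x = mu + z @ L.T                       # affine transform: x ~ N(mu, sigma)

# Sample covariance should be close to sigma
print(np.allclose(np.cov(x, rowvar=False), sigma, atol=0.02))
```

The same construction is what a multivariate normal sampler does internally, which is why the covariance matrix must be positive definite (otherwise the Cholesky factorization fails).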

for creating two separated data clusters

Oh you wanna create two clusters! you can take my example in https://github.com/ShigekiKarita/d-tree/blob/master/example/plot_boundary/app.d

it looks like this

Thanks, I will check it out. For now I came up with this solution:
double[num_dimensions] x;

double[num_observations] x1;
double[num_observations] x2;
Random* gen = threadLocalPtr!Random;

auto mu = [0.0, 0.0].sliced;
auto sigma = [1.0, 0.75, 0.75, 1].sliced(num_dimensions,num_dimensions);
auto rv = multivariateNormalVar(mu, sigma);

void GenerateAndAssign(R)(R range, int index)
{
    rv(gen, x[]);
    range[index .. index + 2] = x;
}
iota(0, num_observations, 2).each!(  a=> GenerateAndAssign(x1[], a) );

mu = [1.0, 4.0].sliced;
rv = multivariateNormalVar(mu, sigma);
iota(0, num_observations, 2).each!(  a=> GenerateAndAssign(x2[], a) );
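For comparison, the two-cluster construction above can be sketched in Python/NumPy (illustrative only; the means [0, 0] and [1, 4] and the covariance come from the messages above, while the sample count of 500 is my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = np.array([[1.0, 0.75],
                  [0.75, 1.0]])

# Two clusters sharing a covariance but with different means,
# mirroring the linked logistic-regression-from-scratch example.
x1 = rng.multivariate_normal([0.0, 0.0], sigma, size=500)
x2 = rng.multivariate_normal([1.0, 4.0], sigma, size=500)

print(x1.shape, x2.shape)  # (500, 2) (500, 2)
```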
Hi, I am trying really hard to use decisionTree just as in https://github.com/ShigekiKarita/d-tree/blob/master/example/plot_boundary/app.d
Unfortunately I couldn't make this work:
enum numberOfFeatures = 7; 
enum numberOfOutputs = 2;
auto dataMatrix = dataRaw.sliced( dataRaw.length/numberOfFeatures , numberOfFeatures ).slice;  
auto labelVector = labelRaw.sliced( ).slice;

auto gtree = ClassificationTree!gini(numberOfOutputs);
gtree.fit(dataMatrix, labelVector);
I spent too much time on it but couldn't get it to compile
But meanwhile the code in the example compiles:
auto nsamples = 200;
auto ndim = 2;
auto xs = normal(nsamples, ndim).slice;
// TODO: add to numir.random
auto gen = Random(unpredictableSeed);
auto rv = BernoulliVariable!double(0.5);
auto ys = iota(nsamples).map!(i => cast(long) rv(gen)).slice;
foreach (i; 0 .. nsamples) {
    if (ys[i] == 1.0) { xs[i][] += 2.0; }
}
auto gtree = ClassificationTree!gini(2, 10);
gtree.fit(xs, ys);
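The gini in ClassificationTree!gini refers to Gini impurity, the split criterion the tree minimizes. As a side note, here is a minimal NumPy sketch of the impurity measure itself (my illustration; it does not reproduce d-tree's internals):

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum_k p_k^2 over class frequencies."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

ys = np.array([0, 0, 1, 1])
print(gini(ys))            # 0.5 (maximally impure for two balanced classes)
print(gini(np.zeros(4)))   # 0.0 (pure node)
```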