Arnav Das
@arnavdas88
https://github.com/SciSharp/Keras.NET will be more suitable as tf.keras still needs a huge code base to be updated....
Arnav Das
@arnavdas88
The current situation of the https://github.com/SciSharp/Keras.NET project can only be clarified by its author @deepakkumar1984, but it looks pretty complete to me...
Ghost
@ghost~558a241715522ed4b3e29780
thanks @arnavdas88, indeed Keras.NET is really awesome and is quite solid.
Sergio Pedri
@Sergio0694
Hey everyone, I've been working on a new project for the last week or so, it's a .NET Standard 2.0 library called ComputeSharp, which basically lets you write GPU kernels directly in C#, and then compiles them at runtime into HLSL compute shaders (thank you so much Microsoft for DX12) and runs them. There are also APIs to allocate GPU memory, read it back etc.
I did a quick test with a benchmark project in the repository, I basically wrote a fully connected layer on both CPU (with Parallel.For and using Unsafe.Add for extra speed) and GPU (using this library), and tested it out on an input tensor of size 128*512*512 on my notebook (i7 8750H 6C/12T @4GHz, GTX1050), and this is what I got:
[image: CPU vs GPU benchmark results]
It's a 270x performance improvement in this scenario
If you don't want to just work on GPU memory, but need to allocate the input arrays on the GPU and copy the results back every time, the GPU time is around 0.3s in this test, which is still over 11x faster than the CPU implementation.
I think this project might have some useful applications, as there weren't other ways to easily run code on the GPU from C#, especially for developers with no knowledge of GPU-related technologies. I might even try to use this in my NeuralNetwork.NET library at some point
@Oceania2018 @henon I know this isn't strictly DL-related, but I felt like sharing this since it might very well be used in this area as well 😊
If anyone wants to take a look at the project, feel free to let me know!
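For context, a minimal sketch of the kind of CPU baseline described above: a fully connected forward pass parallelized over the batch with Parallel.For. The shapes, names and memory layout here are illustrative and not taken from the actual benchmark code, which also used Unsafe.Add for extra speed; plain indexing is shown for clarity.
using System.Threading.Tasks;

// Illustrative CPU baseline: y = x * w for a batch of samples,
// parallelized over the batch dimension.
static void FullyConnectedCpu(float[] x, float[] w, float[] y,
                              int samples, int inputs, int outputs)
{
    Parallel.For(0, samples, s =>
    {
        for (int j = 0; j < outputs; j++)
        {
            float sum = 0f;
            for (int i = 0; i < inputs; i++)
            {
                sum += x[s * inputs + i] * w[i * outputs + j];
            }
            y[s * outputs + j] = sum;
        }
    });
}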
Meinrad Recheis
@henon
@Sergio0694 wow, definitely very interesting for us.
Meinrad Recheis
@henon
However, I don't get the example from the quickstart:
// Allocate a writeable buffer on the GPU, with the contents of the array
using ReadWriteBuffer<float> buffer = Gpu.Default.AllocateReadWriteBuffer<float>(1000);

// Run the shader
Gpu.Default.For(1000, id => buffer[id.X] = id.X);

// Get the data back
float[] array = buffer.GetData();
what is id and why does it have an X property? is it a point in an ND space?
Haiping Chen
@Oceania2018
id is range of 0 - 999
Meinrad Recheis
@henon

id is range of 0 - 999

I thought so too but why id.X

Haiping Chen
@Oceania2018
@Sergio0694 I need to sit down and research your project. Apparently, it's an awesome piece of work. @Nucs this is related to NumSharp performance optimization.
Meinrad Recheis
@henon
oh, I see now. you can compute with up to three-dimensional values
Sergio Pedri
@Sergio0694
@henon That's how GPU kernels work in both DX12 and CUDA.
Basically, you can dispatch a kernel on a given range, up to 3D.
The kernel takes as input a ThreadIds value that contains the current index along the 3 axes.
It's basically as if you had an i, j and k tuple of indices for 3 nested loops
Of course, if you only iterate on one axis, you can ignore the last two, etc.
This implements a fully connected forward layer
You can see I'm dispatching the kernel on 3 axes, and the kernel itself only has a single for loop, which just computes the result value for a single target element
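In other words, the ThreadIds value stands in for the loop indices you would otherwise write by hand. A rough CPU-side picture of what a 3D dispatch replaces (purely illustrative; the multi-axis dispatch signature itself isn't shown in this log):
// What a 3D kernel dispatch corresponds to on the CPU:
// three nested loops, with (i, j, k) playing the role of id.X, id.Y and id.Z.
int sizeX = 4, sizeY = 4, sizeZ = 4; // illustrative sizes
for (int i = 0; i < sizeX; i++)          // id.X
{
    for (int j = 0; j < sizeY; j++)      // id.Y
    {
        for (int k = 0; k < sizeZ; k++)  // id.Z
        {
            // The kernel body runs once per (i, j, k) triple,
            // e.g. computing one output element of a fully connected layer.
        }
    }
}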
Eli Belash
@Nucs
Overall impressive work, making it truly quite easy to support fairly complex functions.
Any support for ops similar to what System.Math provides? (e.g. GPU sin)
Sergio Pedri
@Sergio0694
And there the 3 indices would be: id.X is the sample index, id.Y the i index within each matrix for a given input sample, and id.Z the j index
@Oceania2018 That's great to hear, thanks! Let me know what you think when you take a look at it! 😄
@Nucs Thanks! And yes, you can use many of the methods in both Math and MathF (eg. Math.Pow), and they get automatically mapped to HLSL functions
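For example, reusing the quickstart-style calls quoted earlier in this log (the buffer size and formula are illustrative only), a kernel using Math.Pow would, per the description above, have that call translated to the corresponding HLSL intrinsic:
// Fill a GPU buffer with squared indices; Math.Pow inside the kernel is
// mapped to an HLSL function, per the discussion above.
using ReadWriteBuffer<float> squares = Gpu.Default.AllocateReadWriteBuffer<float>(1000);
Gpu.Default.For(1000, id => squares[id.X] = (float)Math.Pow(id.X, 2));
float[] results = squares.GetData();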
Meinrad Recheis
@henon
ok, it makes perfect sense now
Eli Belash
@Nucs
Great
Sergio Pedri
@Sergio0694
I'm also trying to publish this as a NuGet package, but I've been trying for 4 days now to get that to work, with no success.
Eli Belash
@Nucs
how do you handle graphics cards' compute capabilities? Or is it handled by the DirectX library?
Sergio Pedri
@Sergio0694
Eli Belash
@Nucs
Yeah, I'll dm you
Sergio Pedri
@Sergio0694
@Nucs Thanks!
As for the GPU capabilities, I query the available devices that support DX12.1, and use the first one by default.
The Gpu class also exposes all the available GPUs, so you can specifically target one in your system (or more)
For instance, if you wanted to use your secondary GPU
Meinrad Recheis
@henon
@Sergio0694: I only skimmed over it, but how about embedding the resources you need in the assembly? is that not an option?
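For reference, reading an embedded text resource back at runtime is a standard .NET pattern; a minimal sketch (the resource name below is hypothetical, not taken from the ComputeSharp sources):
using System.IO;
using System.Reflection;

// Read an HLSL template that was embedded in the assembly as a resource.
// "ComputeSharp.Shaders.Template.hlsl" is a hypothetical resource name.
Assembly assembly = Assembly.GetExecutingAssembly();
using Stream stream = assembly.GetManifestResourceStream("ComputeSharp.Shaders.Template.hlsl");
using StreamReader reader = new StreamReader(stream);
string template = reader.ReadToEnd();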
Haiping Chen
@Oceania2018
@Sergio0694 Are you going to use ComputeSharp to improve your neural network project?
Sergio Pedri
@Sergio0694
@henon Yeah, that's my backup plan. If I can't get this to work soon I might just move the template into a class, with the text as a constant string, and use that for now, and eventually move back to the file in the future if I can figure it out
@Oceania2018 Not right now as I'm pretty busy with university, but yeah I've thought about it.
That should give it a pretty nice performance boost, especially if I make it GPU-only so that I can keep the memory just on the GPU most of the time, and avoid having to copy it back and forth to the CPU
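A minimal sketch of that "keep the memory on the GPU" idea, using only the quickstart-style calls quoted earlier in this log (the sizes and the second kernel are illustrative): allocate once, run several kernels against the same buffer, and copy back only at the end.
// Keep intermediate data on the GPU across several dispatches,
// paying the GPU -> CPU copy cost only once at the end.
using ReadWriteBuffer<float> activations = Gpu.Default.AllocateReadWriteBuffer<float>(1024);

// First pass: initialize the buffer on the GPU.
Gpu.Default.For(1024, id => activations[id.X] = id.X);

// Second pass: transform the same buffer in place, still on the GPU.
Gpu.Default.For(1024, id => activations[id.X] = activations[id.X] * 0.5f + 1f);

// Only now copy the results back to the CPU.
float[] final = activations.GetData();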
Sergio Pedri
@Sergio0694
Alright, so thanks to @henon the template issue has now been fixed. Now I just need to wait for the DotNetDxc package to be updated with the new .dll files, and I'll be able to publish ComputeSharp to NuGet 👍
Brendan Mulcahy
@BrendanMulcahy
@Oceania2018 @moloneymb Thanks! I agree with Matthew btw, someone could learn from my code, but it's something I'm just hacking together while trying to follow this Python book. There are some cases where the book is not perfect; maybe the second edition has fixes? Would love feedback btw if you have free time to submit a PR :)! I'm going to keep working on the code.
I made you guys a video of a game/machine learning project that I have been working on for about a year in my free time: https://youtu.be/5kCkAW5gHEc I'm hoping to port it to TensorFlow.NET from TensorFlowSharp. Let me know what you think!
Matthew Moloney
@moloneymb
@BrendanMulcahy Looks good. I've thought about building similar games around Deep Learning. I was considering doing it in C++/OpenGL/OpenCV/Webasm.
Brendan Mulcahy
@BrendanMulcahy
I'm not much of a C++ programmer. I have tinkered around with a lot of languages, but mainly I can only "do work" in C# and Python
If there is any way I can help you guys out more directly, let me know. I tend to work/learn best when I'm on a mission of sorts, so I can probably give the most helpful feedback if I'm working on this game and using TensorFlow.NET
But I can try to lend a hand in other ways when I get stuck/bored and need something to switch onto for a bit
BTW, I've never really worked on an open-source project, so I don't 100% know the etiquette, so I might need some help with that as well (I'm open to critical feedback btw, so don't feel like you need to hold anything back)
Meinrad Recheis
@henon
@BrendanMulcahy you are welcome to join our team if you like. don't worry, we are a friendly group of developers who are passionate about ML and you'll quickly get the hang of it. even if you only take on a small task, it would help and be very much appreciated
Brendan Mulcahy
@BrendanMulcahy
@henon I'm free for most of today, anything I can tackle in about a day of work?
Meinrad Recheis
@henon
I guess the best way to build up the TF.NET knowledge you need for most tasks is to port your game to it