Matthew Moloney
@moloneymb
It's a good start for Brendan but it's not something that should be shown to a beginner who is not Brendan.
Haiping Chen
@Oceania2018
Many .NET beginners know nothing about TensorFlow. Maybe it's too easy for you. The valuable thing is that Brendan uses tf.net to explore TensorFlow. :-) @moloneymb You're an expert though.
Matthew Moloney
@moloneymb
Feedback from Brendan would be valuable to us, but we can't make this code an example for others. From the look of it, it is just exploration code and includes a number of mistakes a beginner would make when exploring. Another beginner following in these footsteps would get lost. Brendan is free to explore, and seeing where he makes mistakes is valuable for us, but we can't promote it as an example to others.
Haiping Chen
@Oceania2018
Right, after all it's exploration and work in progress.
Matthew Moloney
@moloneymb
I may be confused here; by "example for beginners" you may mean "an example of a beginner".
Meinrad Recheis
@henon
stop bashing noobs @moloneymb :D
Johnathan Ingle
@jingle1000
Quick TensorFlow.Net question. I saw on the github repo's project tab that there is no project listed for the Keras API. After forking the repo, I found a Keras.Core project. If I want to add support for the Tensorflow.Keras namespace, should I make changes in that project?
Eli Belash
@Nucs
@Oceania2018
Haiping Chen
@Oceania2018
@jingle1000 You should put TensorFlow.Keras into TensorFlowNET.Core project: https://github.com/SciSharp/TensorFlow.NET/tree/master/src/TensorFlowNET.Core/Keras
Arnav Das
@arnavdas88
@jingle1000, if you are trying to port the old Keras, yes... go on and update the code base in Keras.Core. But in the case of tf.keras, add your code base to TensorFlowNET.Core.
Ghost
@ghost~558a241715522ed4b3e29780
How complete is TensorFlow.Keras compared to Keras.NET? I'm translating a model written in Keras (in Python), and it uses some TensorFlow (softmax_cross_entropy_with_logits) to compute the loss. I'm thinking I might just leave that part in Python and load it as a module instead of trying to translate it, since TensorFlow.NET and Keras.NET have different types for tensors.
Meinrad Recheis
@henon
@Oceania2018 or @arnavdas88, can you answer?
Ghost
@ghost~558a241715522ed4b3e29780
I'm thinking now that I'll just import the Python module with Python.Runtime and pass the PyObject to Keras.NET. Thanks!
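A minimal sketch of that approach with pythonnet (Python.Runtime), assuming TensorFlow 2.x with eager execution is installed in the local Python environment; the array values and names here are purely illustrative:

using System;
using Python.Runtime;

class Program
{
    static void Main()
    {
        // Start the embedded Python engine (pythonnet)
        PythonEngine.Initialize();
        using (Py.GIL())
        {
            dynamic np = Py.Import("numpy");
            dynamic tf = Py.Import("tensorflow");

            // Illustrative logits/labels; the .NET arrays are handed to numpy as sequences
            dynamic logits = np.array(new[] { 2.0, 1.0, 0.1 });
            dynamic labels = np.array(new[] { 1.0, 0.0, 0.0 });

            // Call the Python-side loss directly and keep the result as a PyObject
            dynamic loss = tf.nn.softmax_cross_entropy_with_logits(labels: labels, logits: logits);
            Console.WriteLine(loss.ToString());
        }
        PythonEngine.Shutdown();
    }
}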
Arnav Das
@arnavdas88
https://github.com/SciSharp/Keras.NET will be more suitable, as tf.keras still needs a huge part of the code base to be updated...
Arnav Das
@arnavdas88
The current status of the https://github.com/SciSharp/Keras.NET project can only be clarified by its author @deepakkumar1984, but it looks pretty complete to me...
Ghost
@ghost~558a241715522ed4b3e29780
Thanks @arnavdas88, indeed Keras.NET is really awesome and quite solid.
Sergio Pedri
@Sergio0694
Hey everyone, I've been working on a new project for the last week or so. It's a .NET Standard 2.0 library called ComputeSharp, which basically lets you write GPU kernels directly in C#, then compiles them at runtime into HLSL compute shaders (thank you so much Microsoft for DX12) and runs them. There are also APIs to allocate GPU memory, read it back, etc.
I did a quick test with a benchmark project in the repository: I wrote a fully connected layer on both CPU (with Parallel.For and using Unsafe.Add for extra speed) and GPU (using this library), and tested it on an input tensor of size 128*512*512 on my notebook (i7 8750H 6C/12T @4GHz, GTX 1050), and this is what I got:
(image: benchmark results)
It's a 270x performance improvement in this scenario.
If you don't want to work only on GPU memory, but need to copy the input arrays to the GPU and copy the results back every time, the GPU time is around 0.3s in this test, which is still over 11x faster than the CPU implementation.
I think this project might have some useful applications, as there weren't other easy ways to run code on the GPU from C#, especially for developers with no knowledge of GPU-related technologies. I might even try to use this in my NeuralNetwork.NET library at some point.
@Oceania2018 @henon I know this isn't strictly DL-related, but I felt like sharing this since it might very well be used in this area as well 😊
If anyone wants to take a look at the project, feel free to let me know!
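For reference, a minimal sketch of that kind of CPU baseline (a fully connected forward pass using Parallel.For and Unsafe.Add); the names and shapes are illustrative, not the actual benchmark code from the ComputeSharp repository:

using System.Runtime.CompilerServices;
using System.Threading.Tasks;

static class FullyConnectedCpu
{
    // y = x * W + b, with x: [samples, inputs], W: [inputs, outputs], b: [outputs], all flattened row-major
    public static void Forward(float[] x, float[] w, float[] b, float[] y, int samples, int inputs, int outputs)
    {
        Parallel.For(0, samples, s =>
        {
            ref float xRow = ref x[s * inputs];
            ref float yRow = ref y[s * outputs];
            ref float w0 = ref w[0];

            for (int j = 0; j < outputs; j++)
            {
                float sum = 0f;
                for (int i = 0; i < inputs; i++)
                {
                    // Unsafe.Add skips the bounds checks on the flattened arrays
                    sum += Unsafe.Add(ref xRow, i) * Unsafe.Add(ref w0, i * outputs + j);
                }
                Unsafe.Add(ref yRow, j) = sum + b[j];
            }
        });
    }
}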
Meinrad Recheis
@henon
@Sergio0694 wow, definitely very interesting for us.
Meinrad Recheis
@henon
However, I don't get the example from the quickstart:
// Allocate a writeable buffer on the GPU, with the contents of the array
using ReadWriteBuffer<float> buffer = Gpu.Default.AllocateReadWriteBuffer<float>(1000);

// Run the shader
Gpu.Default.For(1000, id => buffer[id.X] = id.X);

// Get the data back
float[] array = buffer.GetData();
What is id and why does it have an X property? Is it a point in an N-dimensional space?
Haiping Chen
@Oceania2018
id is in the range 0-999
Meinrad Recheis
@henon
"id is in the range 0-999"
I thought so too, but why id.X?
Haiping Chen
@Oceania2018
@Sergio0694 I need to sit down and study your project. Apparently, it's an awesome job. @Nucs This is relevant to NumSharp performance optimization.
Meinrad Recheis
@henon
oh, I see now. you can compute with up to three-dimensional values
Sergio Pedri
@Sergio0694
@henon That's how GPU kernels work in both DX12 and CUDA.
Basically, you can dispatch a kernel on a given range, up to 3D.
The kernel takes as input that ThreadIds value, which contains the current index along the 3 axes.
It's basically as if you had an i, j and k tuple of indices for 3 nested loops
Of course, if you only iterate on one axis, you can ignore the last two, etc.
This implements a fully connected forward layer
You can see I'm dispatching the kernel on 3 axes, and the kernel itself only has a single for loop, which just computes the result value for a single target element
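As a purely conceptual illustration of that i/j/k analogy (this is plain CPU code, not the ComputeSharp API; the method and parameter names are hypothetical):

using System;

static class DispatchAnalogy
{
    // A 3D kernel dispatch is conceptually three nested loops, except that on the
    // GPU every (x, y, z) invocation of the kernel body runs in parallel.
    static void DispatchLike3DKernel(int sizeX, int sizeY, int sizeZ, Action<int, int, int> kernel)
    {
        for (int x = 0; x < sizeX; x++)
            for (int y = 0; y < sizeY; y++)
                for (int z = 0; z < sizeZ; z++)
                    kernel(x, y, z); // inside a real kernel: id.X == x, id.Y == y, id.Z == z
    }
}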
Eli Belash
@Nucs
Overall impressive work, making it truly quite easy to support fairly complex functions.
Any support for ops similar to what System.Math provides? (e.g. a GPU sin)
Sergio Pedri
@Sergio0694
And there the 3 indices would be: id.X the sample index, id.Y the i index within each matrix for an input sample, and id.Z the j index.
@Oceania2018 That's great to hear, thanks! Let me know what you think when you take a look at it! 😄
@Nucs Thanks! And yes, you can use many of the methods in both Math and MathF (e.g. Math.Pow), and they get automatically mapped to HLSL functions.
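For instance, reusing the 1D pattern from the quickstart snippet above, a kernel body could call one of those functions directly (assuming MathF.Sin is among the mapped intrinsics; buffer is the ReadWriteBuffer<float> from that snippet):

// Fill the buffer with sin(x) for x = 0..999; MathF.Sin would be translated to the HLSL sin intrinsic
Gpu.Default.For(1000, id => buffer[id.X] = MathF.Sin(id.X));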
Meinrad Recheis
@henon
ok, it makes perfect sense now
Eli Belash
@Nucs
Great
Sergio Pedri
@Sergio0694
I'm also trying to publish this as a NuGet package, but after 4 days I still haven't been able to get that to work.
Eli Belash
@Nucs
How do you handle graphics cards' compute capabilities? Or is that handled by the DirectX library?
Sergio Pedri
@Sergio0694
Eli Belash
@Nucs
Yeah, I'll dm you
Sergio Pedri
@Sergio0694
@Nucs Thanks!
As for the GPU capabilities, I query the available devices that support DX12.1, and use the first one by default.
The Gpu class also exposes all the available GPUs, so you can specifically target one (or more) in your system.
For instance, if you wanted to use your secondary GPU.
Meinrad Recheis
@henon
@Sergio0694: I only skimmed it, but how about embedding the resources you need in the assembly? Is that not an option?
Haiping Chen
@Oceania2018
@Sergio0694 Are you going to use ComputeSharp to improve your neural network project?