Haiping
@Oceania2018
@estellise-yukihime I recommend taking a look at https://github.com/SciSharp/SciSharp-Stack-Examples and then making your own decision.
1 reply
Estellise Yukihime
@estellise-yukihime
[image attachment]
Hello, can I ask how to use predict?
CodeRabbit957
@CodeRabbit957
Hi. My code is giving a NotImplementedException on this line:
NDArray X_train = np.array(xTrain); // xTrain = int[][]
Haiping
@Oceania2018
@CodeRabbit957 Please use a multi-dimensional array int[,] instead of a jagged int[][].
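For illustration, a minimal sketch of converting a jagged int[][] into the rectangular int[,] that np.array accepts, assuming all inner arrays have the same length (variable names are placeholders from the conversation above):

    using Tensorflow.NumPy;

    // xTrain is the original jagged array; every row must have the same length.
    var rect = new int[xTrain.Length, xTrain[0].Length];
    for (int i = 0; i < xTrain.Length; i++)
        for (int j = 0; j < xTrain[i].Length; j++)
            rect[i, j] = xTrain[i][j];

    NDArray X_train = np.array(rect); // multi-dimensional arrays are supported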
CodeRabbit957
@CodeRabbit957
The lack of useful error messages is making it hard to work out what's going wrong.
CodeRabbit957
@CodeRabbit957
[image attachment]
Can someone please tell me what I'm doing wrong?
CodeRabbit957
@CodeRabbit957
Message=Value cannot be null. (Parameter 'source')
Source=System.Linq
StackTrace:
at System.Linq.ThrowHelper.ThrowArgumentNullException(ExceptionArgument argument)
at System.Linq.Enumerable.TryGetLast[TSource](IEnumerable`1 source, Boolean& found)
at System.Linq.Enumerable.Last[TSource](IEnumerable`1 source)
at Tensorflow.Keras.Layers.Dense.build(Tensors inputs)
CodeRabbit957
@CodeRabbit957
Do I need to define an input shape? If so, how? (i.e. what does input 'shape' mean?)
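For context, the input shape is the size of one sample, excluding the batch dimension: samples that are vectors of 10 features have shape (10). A minimal sketch of declaring it explicitly via the functional API, assuming the TF.NET Keras bindings mirror Python's:

    using static Tensorflow.KerasApi;

    // Declaring the per-sample shape up front lets Dense.build know its input size.
    var inputs = keras.Input(shape: 10);                           // each sample: 10 values
    var hidden = keras.layers.Dense(32, activation: "relu").Apply(inputs);
    var outputs = keras.layers.Dense(1).Apply(hidden);
    var model = keras.Model(inputs, outputs);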
CodeRabbit957
@CodeRabbit957
@Oceania2018
System.ArgumentNullException
HResult=0x80004003
Message=Value cannot be null. (Parameter 'first')
Source=System.Linq
StackTrace:
at System.Linq.ThrowHelper.ThrowArgumentNullException(ExceptionArgument argument) in /_/src/libraries/System.Linq/src/System/Linq/ThrowHelper.cs:line 12
at System.Linq.Enumerable.Zip[TFirst,TSecond,TResult](IEnumerable`1 first, IEnumerable`1 second, Func`3 resultSelector) in /_/src/libraries/System.Linq/src/System/Linq/Zip.cs:line 14
at Tensorflow.Keras.Engine.Functional.run_internal_graph(Tensors inputs, Boolean training, Tensors mask)
at Tensorflow.Keras.Engine.Functional.Call(Tensors inputs, Tensor state, Nullable`1 training)
at Tensorflow.Keras.Engine.Layer.<>c__DisplayClass1_0.<Apply>b__0(NameScope scope)
at Tensorflow.Binding.tf_with[T](T py, Action`1 action)
at Tensorflow.Keras.Engine.Layer.Apply(Tensors inputs, Tensor state, Boolean training)
at Tensorflow.Keras.Engine.Model.train_step(Tensor x, Tensor y)
at Tensorflow.Keras.Engine.Model.train_step_function(OwnedIterator iterator)
at Tensorflow.Keras.Engine.Model.FitInternal(Int32 epochs, Int32 verbose)
at Tensorflow.Keras.Engine.Model.fit(NDArray x, NDArray y, Int32 batch_size, Int32 epochs, Int32 verbose, Single validation_split, Boolean shuffle, Int32 initial_epoch, Int32 max_queue_size, Int32 workers, Boolean use_multiprocessing)
at MyraTest3.Chatbot.Model.TrainModel(Int32[,] xTrain, Int32[] yTrain)
Haiping
@Oceania2018
@CodeRabbit957 Can you provide a unit test to reproduce the issue?
CodeRabbit957
@CodeRabbit957

@Oceania2018 I'm new to neural networks and TensorFlow. I'm not sure what I would unit test.
Would this help?

https://1drv.ms/f/s!AHAJVZtIv3lxfg
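What's usually being asked for is a minimal repro: the smallest fit() call that still triggers the exception, with made-up shapes standing in for the real data. A hypothetical sketch, not the actual chatbot model:

    using Tensorflow.NumPy;
    using static Tensorflow.KerasApi;

    var model = keras.Sequential();
    model.add(keras.layers.Dense(8, activation: "relu"));
    model.add(keras.layers.Dense(2));
    model.compile(optimizer: keras.optimizers.Adam(),
                  loss: keras.losses.SparseCategoricalCrossentropy(from_logits: true));

    // Two samples of three features each, with integer class labels.
    var x = np.array(new int[,] { { 1, 2, 3 }, { 4, 5, 6 } });
    var y = np.array(new[] { 0, 1 });
    model.fit(x, y, batch_size: 2, epochs: 1); // the failing call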

Stephan Vedder
@feliwir
Hey, I'm currently stuck on this: SciSharp/TensorFlow.NET#848
Is there any way I can create a Graph from a GraphDef instance?
The Import functions only work on "real" files.
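As a stopgap, one hedged workaround, assuming only the file-path Import overload works as described: round-trip the GraphDef through a temp file, since GraphDef is a Google.Protobuf message and can be serialized to bytes:

    using Google.Protobuf;
    using System.IO;
    using Tensorflow;

    // graphDef is the in-memory GraphDef instance.
    var tmp = Path.GetTempFileName();
    File.WriteAllBytes(tmp, graphDef.ToByteArray()); // protobuf wire format
    var graph = new Graph();
    graph.Import(tmp);                               // the file-based overload
    File.Delete(tmp);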
Stephan Vedder
@feliwir
And how can I fetch metadata like this in TensorFlow.NET: https://github.com/mozilla/DeepSpeech/blob/master/native_client/tfmodelstate.cc#L101
hjh1649
@hjh1649
Can TensorFlow.NET be used in Unity3D and on mobile devices (Android & iOS)?
Ghost
@ghost~6144c3e26da037398485ee7b
@BrendanMulcahy I saw your conversation from 2019 while I was searching for ML-Agents example usages outside Unity (I'm trying to make it work with MG). I was wondering what you were able to do up until now. (also any recommendations/advice would be nice :))
stwolos
@stwolos
Hi Mike @mikesneider, have you managed to resolve your problem with the HDF5CSharp FileLoadError? I'm facing the same problem right now.
Has anybody had problems with saving a model? When I save through model.save(path), no new files appear even though I get no exceptions.
lubenjie
@lubenjie
When I call tf.nn.moments I get the exception: "Attempting to capture an EagerTensor without building a function." How should I handle this?
tcwicks
@tcwicks
Does anyone have even a basic example of TensorFlow.NET (v2) without Keras?
The unit tests are way too basic; they just test individual functions. I'm trying to convert a lot of code I wrote ages ago that uses placeholders etc...
P.S. By "without Keras" I mean TensorFlow native.
tcwicks
@tcwicks
Sorry just discovered the answer to my question: https://github.com/SciSharp/SciSharp-Stack-Examples
tcwicks
@tcwicks
Hoping someone can help me with this.
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.NotImplementedException
at Tensorflow.Gradients.math_grad._SumGrad(Operation op, Tensor[] grads)
at Tensorflow.Gradients.math_grad._MeanGrad(Operation op, Tensor[] grads)
--- End of inner exception stack trace ---
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.RuntimeType.InvokeMember(String name, BindingFlags bindingFlags, Binder binder, Object target, Object[] providedArgs, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParams)
at Tensorflow.ops.<>c__DisplayClass1_1.<RegisterFromAssembly>b__3(Operation oper, Tensor[] out_grads)
at Tensorflow.gradients_util.<>c__DisplayClass0_2.<_GradientsHelper>b__6(NameScope scope1)
at Tensorflow.Binding.tf_with[T](T py, Action`1 action)
at Tensorflow.gradients_util.<>c__DisplayClass0_0.<_GradientsHelper>b__0(NameScope scope)
at Tensorflow.Binding.tf_with[T](T py, Action`1 action)
at Tensorflow.gradients_util._GradientsHelper(Tensor[] ys, Tensor[] xs, Tensor[] grad_ys, String name, Boolean colocate_gradients_with_ops, Boolean gate_gradients, Int32 aggregation_method, Tensor[] stop_gradients, Graph src_graph)
at Tensorflow.Optimizer.compute_gradients(Tensor loss, List`1 var_list, Nullable`1 aggregation_method, GateGradientType gate_gradients, Boolean colocate_gradients_with_ops, Tensor grad_loss)
at Tensorflow.Optimizer.minimize(Tensor loss, IVariableV1 global_step, List`1 var_list, GateGradientType gate_gradients, Nullable`1 aggregation_method, Boolean colocate_gradients_with_ops, String name, Tensor grad_loss)
I get this when running in graph mode and creating a minimize operation for AdamOptimizer.
The code itself is:
new Tensorflow.Train.AdamOptimizer(Config.LearningRate, beta1: Config.Beta1, beta2: Config.Beta2, epsilon: 1e-08f).minimize(CortexInstance.ValueHead_Generator_Loss, var_list: GenVars)
The cost function is:
CortexInstance.ValueHead_Generator_Loss = tf.reduce_mean(tf.pow(tf.exp(tf.abs(LabelsOne - CortexInstance.FCNN_Output_Fake)), tf.constant(2f, TF_DataType.TF_FLOAT, 1)));
tcwicks
@tcwicks
Is it because I'm still using variable scope?
tf_with(tf.variable_scope("GAN/Generator", reuse: false), delegate
{
    // ... model construction ...
});
tcwicks
@tcwicks
p.s. Note: in https://github.com/SciSharp/SciSharp-Stack-Examples there isn't even a single example of saving a custom model in eager mode.
tcwicks
@tcwicks
Gradients for rsqrt are missing, and sqrt throws a NotImplementedException. Is there any way to implement batch normalization with this at all?
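One possible workaround, sketched under the assumption that the Pow gradient is registered in this build: express 1/sqrt(x) as x^(-0.5) so the backward pass avoids the missing Rsqrt/Sqrt gradients entirely. All tensors here are placeholders for the real ones:

    using Tensorflow;
    using static Tensorflow.Binding;

    // Hypothetical hand-rolled batch norm over precomputed statistics.
    Tensor BatchNorm(Tensor x, Tensor mean, Tensor variance, Tensor gamma, Tensor beta)
    {
        var eps = tf.constant(1e-5f);
        var invStd = tf.pow(variance + eps, tf.constant(-0.5f)); // 1/sqrt(var + eps)
        return gamma * (x - mean) * invStd + beta;
    }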
tcwicks
@tcwicks
Tensorflow.NumPy.ShapeHelper: line 103 is missing a case statement for Dims of type int[]. Something like the following needs to be added:
            case int[] shape2:
                if (shape.ndim != shape2.Length)
                    return false;
                var newDims = new int[shape.dims.Length];
                for (int i = 0; i < shape.dims.Length; i++)
                    newDims[i] = (int)shape.dims[i];
                return Enumerable.SequenceEqual(newDims, shape2);
tcwicks
@tcwicks
Tensorflow.Eager.EagerRunner: line 81 has a bug: if (ops.gradientFunctions[op_name] == null) will always throw an exception when the gradient function isn't registered at all. Conv2dTranspose is an example culprit; its backprop gradient is not in the list.
Instead, perhaps it should be: if (!ops.gradientFunctions.ContainsKey(op_name) || ops.gradientFunctions[op_name] == null)
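For what it's worth, assuming gradientFunctions is a standard Dictionary, the same guard could use TryGetValue to avoid the double lookup (a sketch; the fallback behaviour is a placeholder):

    if (!ops.gradientFunctions.TryGetValue(op_name, out var grad_fn) || grad_fn == null)
        return null; // placeholder: surface a descriptive "no gradient registered" error here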
Andreas Hogstrand
@AndreasHogstrandUltromics_gitlab
Hi, is there a way of using Tensorflow Serving protobufs with TF.NET?
Craig
@Craigjw
I installed TF.NET in VS 2019 and it seems to work fine. However, I can't seem to get it working at all in the interactive C# window. Is it possible to get it working there?
SuperDaveOsbourne
@SuperDaveOsbourne
I just watched the GTC presentation from NVIDIA, and the GPU acceleration for Python math and science libraries was interesting. Is there any development of the same on the SciSharp front?
tcwicks
@tcwicks

@Craigjw @SuperDaveOsbourne @AndreasHogstrandUltromics_gitlab Sadly, after over a year of banging my head against the wall, I've finally given up and moved across to TorchSharp. Unfortunately, for non-trivial models there is some kind of bug in the C++ build of TensorFlow itself which is not present in the Python version.

What I've found is that regardless of which version you use (and I have tried TF 1.3, TF 1.15.5, TF 2.3, and TF 2.6), the C++ build of TensorFlow has some kind of weird bug in it. I've tried both the NuGet-packaged versions provided here and compiling from source with Bazel; it makes no difference. The bug is silent corruption of data: when building non-trivial models (more complex than the samples or the unit tests), in other words deep networks of more than 2 or 3 layers, your model will train up to a certain point, after which it will not train any further. You can try Adam, AdaGrad, RMSProp or whatever you want; your model will not train beyond a certain level. From what I can see it is related to the floating-point precision of the weight updates from gradients.

@Craigjw May I suggest just building it all in a console app: much easier and a lot less pain. Also, don't bother with eager mode if you're planning on using the GPU; actually, don't bother either way, which unfortunately means that all of Keras is out the window. Here is the reason: when running in eager mode you have to use GradientTape. While the Python version of GradientTape might be computationally efficient, the .NET translation is anything but. Since most of this framework is tested on CPU rather than GPU, this issue is hidden and not that obvious. However, if you're planning on doing any real training you will need the GPU, and let me put it this way: I'm running a Threadripper with 4x 3090 cards, and because the GradientTape implementation is single-threaded I can barely get one GPU to just over 2% CUDA utilization.
Your alternative is to use graph mode, which does work quite well. That, however, means native TensorFlow, which is really not that big of a deal. Most of the operations are fully implemented, except in the area of convolutions: Conv2D is implemented and does work, but Conv2dTranspose is not implemented to any functional level. Having said that, it's also not that big of a deal, because you can get close to the same result using a dilated Conv2D followed by a standard dense layer to expand the shape. I've tested this approach and it works decently.
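For anyone following along, a minimal sketch of the graph-mode pattern being described, assuming the TF1-style surface (placeholders, Session.run) that TF.NET exposes; shapes and the learning rate are placeholders:

    using static Tensorflow.Binding;

    tf.compat.v1.disable_eager_execution();

    var x = tf.placeholder(tf.float32, shape: (-1, 10));
    var yTrue = tf.placeholder(tf.float32, shape: (-1, 1));
    var w = tf.Variable(tf.random.normal((10, 1)));
    var b = tf.Variable(tf.zeros(1));
    var yPred = tf.matmul(x, w.AsTensor()) + b.AsTensor();
    var loss = tf.reduce_mean(tf.square(yPred - yTrue));
    var trainOp = tf.train.AdamOptimizer(0.001f).minimize(loss);

    using var sess = tf.Session();
    sess.run(tf.global_variables_initializer());
    // sess.run(trainOp, (x, xBatch), (yTrue, yBatch)); // feed NDArray batches per step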

@AndreasHogstrandUltromics_gitlab May I suggest using NetMQ for your transport, along with MessagePack and MessagePack.Annotations for serialization. I can get near wire-speed (1 Gbps) serialization of floats from multiple agents to the central neural network (a multi-agent reinforcement learning scenario). Note: the NumPy implementation in TensorFlow.NET is extremely performant and blazing fast at converting float shape structures, much faster than the methods used in TorchSharp, so I'm continuing to use it even though TorchSharp is now my neural network backend.
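A minimal sketch of that transport pattern, assuming stock NetMQ push/pull sockets and the default MessagePackSerializer (addresses and payloads are placeholders):

    using MessagePack;
    using NetMQ;
    using NetMQ.Sockets;

    // Agent side: serialize a float observation vector and push it out.
    using var push = new PushSocket(">tcp://localhost:5556");
    float[] observation = { 0.1f, 0.2f, 0.3f };
    push.SendFrame(MessagePackSerializer.Serialize(observation));

    // Trainer side: pull the frame and deserialize back to float[].
    using var pull = new PullSocket("@tcp://*:5556");
    var payload = pull.ReceiveFrameBytes();
    var received = MessagePackSerializer.Deserialize<float[]>(payload);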

Craig
@Craigjw
@tcwicks Thank you ever so much for your reply. Am I right in guessing that you favour TorchSharp combined with the TF.NET implementation of NumPy?