When using TorchSharp, we encountered a problem that looks like memory corruption (a TorchTensor appears to change its values to junk). At least one instance seems to have been caused by initializing a tensor from a C# array that later went out of scope and was freed by the GC.
Is this a bug, or is the user responsible for keeping the source array in scope?
Making sure that the array is never freed by the garbage collector appeared to solve the problem at first. But later we encountered similar behavior in a case where none of the data could have been freed unintentionally; at some point our tensor data is still overwritten. Is there anything else we should take care of when using TorchSharp to avoid memory corruption? Has anybody encountered similar problems?
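For reference, a minimal sketch of the kind of workaround we tried: pinning the source array with `GCHandle` so the GC can neither collect nor move it while native code holds a pointer into it. The tensor-creation step is left as a comment because the exact TorchSharp factory method varies by version; only the pinning pattern itself is shown.

```csharp
using System;
using System.Runtime.InteropServices;

class PinnedBufferExample
{
    static void Main()
    {
        // Pin the managed array so the GC cannot move or collect it while
        // the native side (the Torch tensor) reads from its memory.
        float[] data = { 1f, 2f, 3f, 4f };
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            IntPtr ptr = handle.AddrOfPinnedObject();
            // ... create the tensor from `data` / `ptr` and use it here ...
        }
        finally
        {
            // Only safe once the native side no longer uses the buffer.
            handle.Free();
        }
    }
}
```

Note that this only guarantees the buffer stays valid while the handle is allocated; if the tensor outlives the `try` block, the handle must be kept alive just as long.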
Thank you very much for your help!
Hi all. Sorry for the trivial question, but I am trying to run Torch.NET from F# Interactive (Visual Studio 2019) and have failed. I get the following exception: Unable to load DLL 'python37': The specified module could not be found. I have added the folder containing the compiler to the environment variables and installed Conda (possibly unnecessarily). What else am I missing? I have, of course, added references to Microsoft.CSharp.dll, Numpy.Bare.dll, Python.Runtime.dll, and Torch.Net.dll.
Is there a step-by-step guide on how to start with Torch.NET?
Thanks a lot for any help!
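In case it helps anyone hitting the same error: 'python37' refers to the native python37.dll, which the OS loader resolves via PATH, not via .NET assembly references. A quick check/fix on Windows (the install path below is an assumption; adjust it to wherever 64-bit Python 3.7 actually lives):

```shell
rem python37.dll must be findable by the Windows loader, so its folder
rem has to be on PATH for the process hosting F# Interactive.
set PATH=%PATH%;C:\Python37
rem Verify the loader can now see it:
where python37.dll
```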
@interesaaat Thanks for your response. Sorry, I confused the two projects. I would prefer a managed library, so I am looking into TorchSharp. It seems functional so far from a console application; I am having trouble, though, running it from F# Interactive. Specifically, I get this exception:
System.DllNotFoundException: Unable to load DLL 'LibTorchSharp': The specified module could not be found. (Exception from HRESULT: 0x8007007E)
at TorchSharp.Torch.THSTorch_seed(Int64 seed)
at <StartupCode$FSI_0018>.$FSI_0018.main@() in
Has anyone had any luck with TorchSharp through F# Interactive? Thanks!
I am interested in passing data that is already on the GPU into a DL network, with the output also remaining on the GPU so it can be passed to the next element in a pipeline. If possible, I'd like to do this without performing CPU/GPU copies. Is this possible at the moment? It is a bit tricky, since it would require a conversion from a CUDA data type to a Torch tensor.
I think this concept can work in Python using PyCuda and PyTorch: https://discuss.pytorch.org/t/interop-between-pycuda-and-pytorch/588
However, I am not sure if this would currently work in C# using TorchSharp.
Does TorchSharp have access to the following C++ function? https://pytorch.org/cppdocs/api/function_namespacetorch_1ad7fb2a7759ef8c9443b489ddde494787.html
I think this is ultimately what I am looking for: taking data already on the GPU and using it as a tensor input for a DL model.
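For context, the linked function is `torch::from_blob`, which wraps an existing buffer (including a device buffer) as a tensor without copying. A sketch of what it does in C++ follows; whether TorchSharp exposes a binding for it is exactly the open question here. Note that `from_blob` does not take ownership: the caller must keep the underlying buffer alive for the lifetime of the wrapping tensor.

```cpp
#include <torch/torch.h>

int main() {
  // Stand-in for a device buffer already filled by an earlier pipeline
  // stage; here we simply allocate it on the GPU via libtorch itself.
  auto src = torch::ones({2, 3}, torch::TensorOptions()
                                     .dtype(torch::kFloat32)
                                     .device(torch::kCUDA));

  // Wrap the raw device pointer as a tensor: no copy, no ownership transfer.
  auto wrapped = torch::from_blob(
      src.data_ptr<float>(), {2, 3},
      torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA));

  // `wrapped` can now feed a model while the data stays on the GPU,
  // as long as `src` (the real owner) remains alive.
  return 0;
}
```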
When using the latest repo, I can't call the Forward function after loading a model trained in PyTorch. What should I do?
Hmmm this should work. Loading should be more accurate now. Please do add an issue.
I wonder why torch.Jit was commented out. Does this mean torch.Jit is not implemented yet, or has it been replaced by another function? Thanks a lot!
There was a lot of change in the C++ API for torch.JIT.
We are looking at code generating more of the API.
Please do add an issue in the repo about this.