Hello, I'm trying to write a very simple C++ application that uses the ONNX Runtime C++ API to read an ONNX model and perform batch inference. I'm using a ResNet model from the model zoo to test my code. At this point I don't care about the input data or output; I'm just generating random values for the input. The examples in the onnxruntime repo are quite lacking: the ImageNet example is the only one that shows batch processing, and it is so convoluted with pointer dereferences and operator overloads that I can barely follow what's going on. I've been working from the sample code in issue microsoft/onnxruntime#2757 and have gotten stuck on making the first (batch) dimension symbolic.
My question: does the ONNX Runtime C++ API provide a way to check for and add a symbolic dimension, or is that task better done with the ONNX library in C++ first?
Any help is appreciated. I'm planning to contribute a few sample applications to the repo as I work through these problems, too.
Hello to the ONNX community!
Not sure this is the best place to do so, but I'd still love to showcase a project I started two days ago. It's called Sardonyx: https://github.com/s1ddok/Sardonyx
Currently this is a pure-Swift converter that generates Swift for TensorFlow models (a single data blob plus code for both data parsing and inference) from ONNX files. It already works on models like VGG19 and MobileNetV2 and supports a few layers.
I'm going to add Metal Performance Shaders support in the near future. I think this tool can also eventually evolve into an ONNX-to-PyTorch, ONNX-to-TensorFlow, ONNX-to-BNNS, etc., tool as well.
If you have any feedback on the project, I would love to hear it! Thanks!