Reminder: Join Slack to be part of the ONNX SIG & Working Group, Steering Committee, Release, and general discussion channels - many have moved already, thank you! Sign up here: https://slack.lfai.foundation - then browse the channels and add the ones you want.
For inferencing MobileNetV3, I am using TIDL by Texas Instruments. h-swish is an activation function that is not supported by the TIDL import tool. Could anyone suggest an alternative to h-swish? Or is there any possibility of adding an h-swish operator to the ONNX operator set?
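Since h-swish(x) = x * ReLU6(x + 3) / 6, one workaround is to build the activation out of basic ops the importer already supports (Add, Clip, Mul, Div). Here is a minimal sketch using the onnx helper API - the graph name, shapes, and tensor names are placeholders; note that a dedicated HardSwish op was later added to ONNX in opset 14:

```python
import onnx
from onnx import helper, TensorProto

# Scalar constants for h-swish(x) = x * Clip(x + 3, 0, 6) / 6
three = helper.make_tensor("three", TensorProto.FLOAT, [], [3.0])
zero = helper.make_tensor("zero", TensorProto.FLOAT, [], [0.0])
six = helper.make_tensor("six", TensorProto.FLOAT, [], [6.0])

nodes = [
    helper.make_node("Add", ["x", "three"], ["x_plus_3"]),
    # Clip with min/max as inputs requires opset >= 11
    helper.make_node("Clip", ["x_plus_3", "zero", "six"], ["relu6"]),
    helper.make_node("Mul", ["x", "relu6"], ["num"]),
    helper.make_node("Div", ["num", "six"], ["y"]),
]

graph = helper.make_graph(
    nodes, "hswish",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 8])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 8])],
    initializer=[three, zero, six],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```

If the exporter (e.g. PyTorch or TF) emits HardSwish/HardSigmoid directly, the same decomposition can be applied as a graph rewrite before feeding the model to TIDL.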
The ONNX Community Meeting Workshop event website and registration are open - see the onnx-general Slack channel for more details. Join ONNX Slack via https://slack.lfai.foundation - then browse the channels and add the ones you want.
Hi, I am interested in ONNX Runtime quantized inference, but I can't find detailed documentation about the available quantization options, or about how to know which kernel will actually run after converting a model with the ONNX Runtime quantization API.
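The tooling lives in onnxruntime.quantization. A minimal sketch of dynamic quantization follows (file names are placeholders); one way to see which kernels the converted graph will use is to inspect the op types of the quantized model:

```python
import onnx
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamic quantization: weights stored as int8, activations quantized at runtime.
quantize_dynamic("model.onnx", "model.quant.onnx", weight_type=QuantType.QInt8)

# Inspect which quantized ops (and hence kernels) the graph now contains,
# e.g. DynamicQuantizeLinear, MatMulInteger, QLinearConv.
quantized = onnx.load("model.quant.onnx")
print(sorted({node.op_type for node in quantized.graph.node}))
```

Enabling verbose session logging (SessionOptions with log_severity_level = 0) should also print how nodes are assigned to execution providers when the session initializes.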
I'm trying to create a simple model and train it using ONNX Runtime, but I can't find the training equivalent of sess = onnxruntime.InferenceSession(path_save).
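There isn't a training counterpart of InferenceSession in the base package; training goes through the separate onnxruntime-training build. A hedged sketch using its ORTModule wrapper, assuming that package is installed - the model and data here are toy placeholders:

```python
import torch
from onnxruntime.training import ORTModule  # requires the onnxruntime-training package

# Wrap an ordinary PyTorch model so forward/backward run through ONNX Runtime.
model = ORTModule(torch.nn.Linear(10, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(4, 10)
y = torch.randint(0, 2, (4,))

loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```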
Hi ONNX community, TestPyPI packages for ONNX 1.8.0 are available now: https://test.pypi.org/project/onnx/1.8.0rc0/#files (all versions). Please let us know if there is any problem with your usage, and finish verification by the end of this week. Thank you!
I wrote a lightweight, portable, pure C99 ONNX inference engine for embedded devices, with hardware acceleration support.
Hello all! Just found ONNX yesterday. Love the effort.
Hi everybody, when converting my Keras model to ONNX I get the error "'tuple' object has no attribute 'layer'". Do you know why?
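That particular error is commonly reported when the standalone keras package and tf.keras get mixed in the same conversion. Converting a pure tf.keras model with tf2onnx usually avoids it; a sketch, where the model and input shape are placeholders:

```python
import tensorflow as tf
import tf2onnx

# Build/load the model exclusively through tf.keras, not the standalone keras package.
model = tf.keras.applications.MobileNetV2(weights=None)

spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, output_path="model.onnx"
)
```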
Is anybody here?
I need help please... I am really desperate.
Most discussion has moved to the Slack channel, or you can use GitHub Issues or Discussions on the relevant projects.
Hello, I'm a newbie to ONNX. Is it possible to export a fine-tuned OpenAIGPTDoubleHeadsModel? I believe it is not supported. Am I wrong? Thanks.
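It isn't among the architectures with a ready-made export script, but a plain torch.onnx.export often works. A hedged sketch - the opset, dummy shapes, and output names are assumptions; torchscript=True makes the model return plain tuples, which the tracer needs:

```python
import torch
from transformers import OpenAIGPTDoubleHeadsModel

# torchscript=True forces tuple outputs instead of ModelOutput objects.
model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt", torchscript=True)
model.eval()

# (batch, num_choices, seq_len) dummy input; values and shapes are placeholders.
input_ids = torch.randint(0, model.config.vocab_size, (1, 2, 16))

torch.onnx.export(
    model, (input_ids,), "gpt_double_heads.onnx",
    input_names=["input_ids"],
    output_names=["lm_logits", "mc_logits"],
    dynamic_axes={"input_ids": {0: "batch", 2: "sequence"}},
    opset_version=12,
)
```

After export, it's worth running the ONNX model once through onnxruntime and comparing its outputs against the PyTorch model on the same inputs.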
Hi, can anyone help me resolve this error?
2021-01-28 18:18:38.4236867 [E:onnxruntime:, inference_session.cc:1268 onnxruntime::InferenceSession::Initialize::<lambda_c608601079ff6a1804107b16babd2631>::operator ()] Exception during initialization: D:\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:117 onnxruntime::CudaCall CUDA failure 2: out of memory ; GPU=0 ; hostname=LAPTOP-6EUODJJ7 ; expr=cudaMalloc((void**)&p, size);
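"CUDA failure 2: out of memory" means the CUDA execution provider could not allocate its arena on your GPU. Besides reducing model or batch size, recent onnxruntime builds let you cap the arena through per-provider options; a sketch, where the 2 GiB limit and model path are placeholder example values:

```python
import onnxruntime as ort

providers = [
    # Limit the CUDA memory arena; value is in bytes.
    ("CUDAExecutionProvider", {"gpu_mem_limit": 2 * 1024 * 1024 * 1024}),
    # Fallback provider for anything CUDA cannot take.
    "CPUExecutionProvider",
]
sess = ort.InferenceSession("model.onnx", providers=providers)
```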
Hi, I am new here and want to add ONNX support to mlpack, but I am completely unaware of what I might need to do. Can someone help/guide me through this?
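A good first step is to see what an importer has to consume: an ONNX model is a protobuf ModelProto whose graph lists nodes (ops), initializers (weights), and value infos. A short Python sketch for exploring one (the path is a placeholder); the mlpack side of the work is essentially mapping each op_type onto an mlpack layer:

```python
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)

# Every node is one operator the importer must translate.
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))

# Initializers carry the trained weights the importer must copy over.
for init in model.graph.initializer:
    print("weight:", init.name, tuple(init.dims))
```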