This room is deprecated - please use the #onnx-general channel on https://slack.lfai.foundation/
@prasanthpul Do you have any solution for this?
Reminder: The next ONNX Community Meeting/Workshop is online Oct 14, 7-10am PT. Got an ONNX use case to present? Please let the ONNX Steering Committee know and get on the agenda. Join LF AI Slack and the onnx-general channel for updates and more information - https://slack.lfai.foundation
Reminder: Join Slack to take part in ONNX SIG, Working Group, Steering Committee, Release, and general discussion - many have moved already, thank you! Sign up here: https://slack.lfai.foundation - then go to Channels > Browse and add the channel.
For inferencing MobileNetV3 I am using TIDL by Texas Instruments. h-swish is an activation function that is not supported by the TIDL import tool. Could anyone suggest an alternative to h-swish? Or is there any possibility of adding an h-swish operator to the ONNX operator set?
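For reference, h-swish(x) = x * ReLU6(x + 3) / 6, so it can be decomposed into ops ONNX already defines: a HardSigmoid with alpha=1/6 and beta=0.5, followed by a Mul. A minimal sketch building that subgraph with onnx.helper (the graph name and shapes are placeholders, and whether TIDL accepts HardSigmoid is an assumption to verify):

```python
import onnx
from onnx import helper, TensorProto

# h-swish(x) = x * ReLU6(x + 3) / 6
#            = x * HardSigmoid(x) with alpha = 1/6, beta = 0.5,
# since HardSigmoid(x) = max(0, min(1, alpha * x + beta)).
hard_sigmoid = helper.make_node(
    "HardSigmoid", inputs=["x"], outputs=["hs"], alpha=1.0 / 6.0, beta=0.5
)
mul = helper.make_node("Mul", inputs=["x", "hs"], outputs=["y"])

graph = helper.make_graph(
    [hard_sigmoid, mul],
    "hswish",  # placeholder graph name
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 16])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 16])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)
```

If HardSigmoid is also unsupported, the same math can be expressed with Add, Clip, Mul, and Div alone. (Later ONNX opsets, from opset 14 onward, also define a dedicated HardSwish operator.)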
The ONNX Community Meeting/Workshop event website and registration are open - see the onnx-general Slack channel for more details. Join ONNX Slack via https://slack.lfai.foundation - then go to Channels > Browse and add the channel.
Hi, I am interested in ONNX Runtime quantized inference, but I can't find detailed documentation about the available quantization options, or about how to tell which kernels will actually run after converting a model with the ONNX Runtime quantization API.
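A minimal sketch of the dynamic-quantization path (the file names are placeholders): onnxruntime.quantization.quantize_dynamic rewrites the graph, and inspecting the op types of the converted model shows which quantized kernels (e.g. MatMulInteger, DynamicQuantizeLinear) will be dispatched at run time.

```python
import onnx
from onnxruntime.quantization import quantize_dynamic, QuantType

# Rewrite the FP32 graph with dynamically quantized weights.
# "model.onnx" / "model.int8.onnx" are placeholder file names.
quantize_dynamic("model.onnx", "model.int8.onnx", weight_type=QuantType.QInt8)

# The op types in the converted graph reveal which quantized
# kernels ONNX Runtime will actually execute.
quantized = onnx.load("model.int8.onnx")
print(sorted({node.op_type for node in quantized.graph.node}))
```

Raising the session log verbosity (SessionOptions.log_severity_level = 0) can also surface the kernel and execution-provider assignments at session creation.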
I'm trying to create a simple model and train it using ONNX Runtime, but I can't find a training equivalent of sess = onnxruntime.InferenceSession(path_save).
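Training support lives in the separate onnxruntime-training package rather than in InferenceSession. A minimal sketch using its ORTModule wrapper, which runs a PyTorch module's forward and backward passes through ONNX Runtime (the toy model, sizes, and learning rate are arbitrary):

```python
import torch
from onnxruntime.training import ORTModule  # from the onnxruntime-training package

# Wrap an ordinary PyTorch module; ORT executes forward/backward.
model = ORTModule(torch.nn.Linear(10, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# One toy training step on random data.
x, y = torch.randn(4, 10), torch.randn(4, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```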
Hi ONNX community, TestPyPI packages for ONNX 1.8.0 are available now: https://test.pypi.org/project/onnx/1.8.0rc0/#files (all versions). Please verify them against your use cases by the end of this week and let us know if you hit any problems. Thank you!
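The release candidate can typically be installed with pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple onnx==1.8.0rc0 (the extra index lets dependencies resolve from regular PyPI), then sanity-checked:

```python
import onnx

# Confirm the release candidate is the version actually imported.
print(onnx.__version__)  # expected: 1.8.0rc0
```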
I wrote a lightweight, portable, pure C99 ONNX inference engine for embedded devices, with hardware acceleration support.