    Prasanth Pulavarthi
@padmasreenagarajan I don't. Most of us use ONNX Runtime (https://onnxruntime.ai) for inferencing.
    Jim Spohrer
Reminder: join Slack to be part of ONNX SIG, Working Group, Steering Committee, Release, and general discussion - many have moved already, thank you! Sign up here: https://slack.lfai.foundation, then go to Channels > Browse and add the channels you want.
    Hi all!
For inferencing MobileNetV3, I am using TIDL by Texas Instruments.
h-swish is an activation function that is not supported by the TIDL import tool.
So, could anyone suggest an alternative to "h-swish"?
Or is there any possibility of adding an "h-swish" operator to the ONNX operator set?
    11 replies
    Jim Spohrer
    @ShuangLiu may have pointers - Tencent/ncnn#1402
    1 reply
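One common workaround for the h-swish question above (a sketch under my own assumptions, not something the TIDL docs confirm): h-swish is defined as h-swish(x) = x * ReLU6(x + 3) / 6, so it can be decomposed into primitives most import tools already support (Add, Clip, Mul, Div). In pure Python the math looks like:

```python
def relu6(x):
    # ReLU6 is just Clip(x, 0, 6), an op most toolchains support.
    return min(max(x, 0.0), 6.0)

def h_swish(x):
    # MobileNetV3's h-swish: x * ReLU6(x + 3) / 6.
    return x * relu6(x + 3.0) / 6.0
```

In graph terms, that means replacing the single h-swish activation with Add(3), Clip(0, 6), Mul, and Div(6) nodes before importing; whether TIDL accepts that subgraph is something to verify against its documentation.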
The ONNX Community Meeting Workshop event website and registration are open - see the onnx-general Slack channel for more details. Join ONNX Slack via https://slack.lfai.foundation, then go to Channels > Browse and add the channels you want.
    Ofir Zafrir
Hi, I am interested in ONNX Runtime quantized inference, and I can't find detailed documentation about the existing quantization options or how to know which kernel will actually run after converting the model with the ONNX Runtime quantization API.
    1 reply
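Not an authoritative answer, but some background on what dynamic quantization does under the hood (the entry point should be onnxruntime.quantization.quantize_dynamic, and after converting you can load the model with onnx and inspect the node op_types, e.g. MatMulInteger, to see which integer kernels will run). The per-tensor affine mapping itself is roughly:

```python
def quantize_params(xmin, xmax, qmin=0, qmax=255):
    # Per-tensor affine (asymmetric) quantization parameters.
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include 0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Map a float value to its uint8 representation.
    return int(min(max(round(x / scale) + zero_point, qmin), qmax))

def dequantize(q, scale, zero_point):
    # Recover the approximate float value.
    return (q - zero_point) * scale
```

Dynamic quantization computes activation scales at runtime, which is part of why the exact kernel chosen can depend on the ops in your graph and the ONNX Runtime version.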
    Hello everyone!
    I'm trying to create a simple model and train it using onnx runtime, but I can't find an equivalent of sess = onnxruntime.InferenceSession(path_save) for training.
    2 replies
I guess https://github.com/microsoft/onnxruntime/blob/d9ecc0cebf8752893d4cd4e547341e390dc29b01/orttraining/orttraining/python/ort_trainer.py#L542 is what I'm looking for? I thought there would be something similar to onnxruntime.InferenceSession.
    1 reply
    Dimitra Karatza
Hello everyone, I am using the TensorFlow backend for ONNX to convert an ONNX model to TensorFlow. However, after the conversion, the graph of the converted model is broken.
    Is there any way to fix this?
    1 reply
    Alex Garustovich
Hi chat, I've made a PR to onnx/onnx, but it hasn't been reviewed. How can I draw attention to it? Thanks!
    1 reply
    This one: onnx/onnx#3036
    Jim Spohrer
The ONNX Steering Committee is excited about the program for next week's community meeting: https://events.linuxfoundation.org/lf-ai-day-onnx-community-virtual-meetup-fall/program/schedule/
ONNX Community Meeting Fall 2020 - Wednesday, Oct 14, 10am-1pm ET - register here: https://events.linuxfoundation.org/lf-ai-day-onnx-community-virtual-meetup-fall/register/
    Jim Spohrer
@harryskim As Jim mentioned, we are migrating from Gitter to Slack. Please sign up for LF AI Slack using https://slack.lfai.foundation/ and join the "onnx-general" channel.
    Ke Zhang
@Yukigaru Both Gitter (here) and Slack are good places to draw attention to it :).
    Brian Chen
    Quick question: what's the best place to ask spec/implementation-related questions? I tried onnx-general on Slack, but it seems to be primarily announcements/development talk...
    2 replies
    While I'm at it, what's the difference between onnx and onnx-general?
PyTorch => ONNX: this is how I've converted it. Now I want to convert it to TFLite - is there any way to do that?
I think I need to convert "channel_first" to "channel_last", but I don't know how.
Does "onnx-tensorflow" internally convert "NCHW" to "NHWC"? GitHub: https://github.com/onnx/onnx-tensorflow/blob/master/example/onnx_to_tf.py
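On the layout question: channel-first to channel-last is a transpose of the (N, C, H, W) axes to (N, H, W, C). A minimal numpy sketch (whether onnx-tensorflow inserts these transposes for you depends on the converter version, so check your converted graph):

```python
import numpy as np

# Dummy activation in channel-first (NCHW) layout: batch 1, 3 channels, 4x4.
nchw = np.arange(1 * 3 * 4 * 4, dtype=np.float32).reshape(1, 3, 4, 4)

# Channel-last (NHWC): move axis 1 (C) to the end.
nhwc = np.transpose(nchw, (0, 2, 3, 1))

print(nhwc.shape)  # (1, 4, 4, 3)
```

Note that for a full model the convolution weights also need their own permutation (e.g. OIHW to HWIO), not just the activations - which is part of why converter-inserted Transpose nodes are the usual route to TFLite.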
    Chun-Wei Chen
    Hi ONNX community,
TestPyPI packages for ONNX 1.8.0 are available now: https://test.pypi.org/project/onnx/1.8.0rc0/#files (all versions). Please let us know if there is any problem with your usage, and finish the verification by the end of this week. Thank you!
    jiang jianjun
I wrote a lightweight, portable, pure-C99 ONNX inference engine for embedded devices with hardware-acceleration support.
It runs ONNX models in pure C99
and provides a hardware-acceleration interface.
If you're interested, please star it! Thanks.
The license is MIT.
Hi @SelComputas, I was wondering if you've found a solution for creating heatmaps/PCA/explainability for ONNX models?
    Chun-Wei Chen
    Hi ONNX community,
    I am happy to announce that ONNX 1.8.0 has been released. https://github.com/onnx/onnx/releases/tag/v1.8.0
    PyPI packages are available here: https://pypi.org/project/onnx/ Conda packages will also be available soon. Thank you everyone.
How does ONNX connect to data sources to feed a model?
    Prasanth Pulavarthi
    If you use ONNX Runtime or have tried it out before, you can provide feedback to the team via this brief survey: https://aka.ms/ort-survey
    Deepak Chauhan
    Hi, need help with this one microsoft/onnxruntime#5834
    Tom Roderick
    Hello all! Just found onnx yesterday. Love the effort.
    1 reply
Hi everybody, when converting
my code from Keras to ONNX I get this error: "'tuple' object has no attribute 'layer'". Do you know why?
Is anybody here?
I need help, please... I'm really desperate.
    Adam Pocock
    Most discussion has moved to the slack channel, or you can use GitHub issues or discussions on the relevant projects.
    Raimondo Marino
Hello guys, I'm a newbie to ONNX. Is it possible to export a fine-tuned OpenAIGPTDoubleHeadsModel? I believe it is not supported. Am I wrong? Thanks!
Hi guys, please help me resolve this error:
2021-01-28 18:18:38.4236867 [E:onnxruntime:, inference_session.cc:1268 onnxruntime::InferenceSession::Initialize::<lambda_c608601079ff6a1804107b16babd2631>::operator ()] Exception during initialization:
D:\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall D:\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:117 onnxruntime::CudaCall CUDA failure 2: out of memory ; GPU=0 ; hostname=LAPTOP-6EUODJJ7 ; expr=cudaMalloc((void**)&p, size);
    Aakash kaushik
Hi, I am new here and want to add ONNX support to mlpack. I am completely unaware of what I might need to do - can someone help/guide me with this?