    G. Ramalingam
    @gramalingam
    @dodler : A SparseTensorProto has a “TensorProto values” which has a name. Won’t it be enough to use the name in the “values” field? Do we need a separate name?
    Ke Zhang
    @linkerzhang
    @gramalingam Fair enough. A separate name is not needed here, though a line of comments could be added in place to clarify this a bit.
    kohei0418
    @kohei0418
    Hi guys! I'm trying to convert some TensorFlow models into ONNX. Is there any operation in ONNX equivalent to TensorFlow's SegmentSum (or SegmentMean)?
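    (As far as I can tell there is no SegmentSum/SegmentMean operator in the standard ONNX op set, but the same computation can be expressed with operators ONNX does have, such as OneHot and MatMul. A minimal numpy sketch of the equivalence, with made-up example data:)

        import numpy as np

        data = np.array([[1., 2.], [3., 4.], [5., 6.]])
        segment_ids = np.array([0, 0, 1])
        num_segments = 2

        one_hot = np.eye(num_segments)[segment_ids]   # (3, 2) selector matrix
        segment_sum = one_hot.T @ data                # rows 0 and 1 summed, row 2 alone
        segment_mean = segment_sum / one_hot.sum(axis=0)[:, None]  # divide by segment sizes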
    Mike Smith
    @gomlfx
    hi
    Nikolay Renziglov
    @Mabanza_gitlab
    Hi there. Could you guys direct me on the following: I want to create and train a microbiological model for the slipper animalcule. I have a set of images kept locally. What should I do first, and what comes after that? Thanks.
    Svetlana Levitan
    @sveta-levitan
    Hi Nikolay,
    To train a model you first need to use a deep learning framework such as PyTorch or TensorFlow. Then you can export the model to ONNX and deploy it into ONNX Runtime or another framework if you want. ONNX does not yet provide a full mechanism for training models.
    KaranSwatch
    @KaranSwatch
    I'm trying to convert a .pth file into ONNX, can anyone help me? I'm new to machine learning
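    (A minimal sketch of the usual .pth-to-ONNX route via torch.onnx.export. MyNet and the input shape are hypothetical placeholders: a .pth file often holds only a state dict, so you still need the Python class that defines the architecture:)

        import torch

        model = MyNet()                                  # hypothetical: your own architecture class
        model.load_state_dict(torch.load("model.pth"))   # assumes the .pth holds a state dict
        model.eval()

        dummy_input = torch.randn(1, 3, 224, 224)        # example input matching the model's shape
        torch.onnx.export(model, dummy_input, "model.onnx")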
    Artyom
    @dodler
    @gramalingam Well, I need sparse tensors to compress a model with lots of zeros (I actually got about a 10x reduction using PyTorch sparse tensors). I think a name for the sparse tensor is needed if one wants to store neural network weights in sparse format to save space (in a mobile application, for example). However, I don't know the exact mechanics of storing neural networks in ONNX.
    Also, are there any tools to compress an ONNX model? At least when it is stored in the filesystem.
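    (On filesystem compression: a minimal sketch using only the Python standard library's gzip. How much it helps depends on the weights, though zero-heavy models tend to compress well:)

        import gzip
        import shutil

        with open("model.onnx", "rb") as src, gzip.open("model.onnx.gz", "wb") as dst:
            shutil.copyfileobj(src, dst)   # restore later by reading with gzip.open(..., "rb")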
    Sindre Eik de Lange
    @SelComputas
    Hi, I have a .onnx model and would like to create some explainability functions for it, such as heatmaps, anchoring, etc., and was wondering if anybody here had any experience with this? One option is to "port" the model to PyTorch or Keras, but I don't know the name of the architecture, so it's hard to replicate it in a different framework - is there any way to port it without having this information?
    Svetlana Levitan
    @sveta-levitan
    @SelComputas Maybe you can use an ONNX to TensorFlow converter? Or write some code that would generate various perturbed inputs and use results of ONNX inference to build the output you need.
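    (A minimal sketch of that perturbation idea: slide an occluding patch over the input and record how the score drops, using only numpy and onnxruntime. The input name, the (1, 3, 224, 224) layout, and using the max logit as the score are assumptions; check them with session.get_inputs() and session.get_outputs():)

        import numpy as np
        import onnxruntime as ort

        session = ort.InferenceSession("model.onnx")
        input_name = session.get_inputs()[0].name

        image = np.random.rand(1, 3, 224, 224).astype(np.float32)    # stand-in input
        base_score = session.run(None, {input_name: image})[0].max()

        patch = 32
        heatmap = np.zeros((224 // patch, 224 // patch))
        for i in range(0, 224, patch):
            for j in range(0, 224, patch):
                occluded = image.copy()
                occluded[:, :, i:i + patch, j:j + patch] = 0.0        # hide one region
                score = session.run(None, {input_name: occluded})[0].max()
                heatmap[i // patch, j // patch] = base_score - score  # bigger drop = more important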
    Dave Brown
    @ZoroDerVonCodier
    We just released a new drop of the MyCaffe AI Framework with support for ONNX to both import and export *.onnx files (see https://github.com/MyCaffe/MyCaffe/releases and https://www.signalpop.com/blog/ for details). Any suggestions on how we can have our framework listed on the page http://onnx.ai/supported-tools.html#buildModel? Our framework icon is located at https://github.com/MyCaffe/MyCaffe/blob/master/MyCaffe/MainIcon.ico.
    Artyom
    @dodler

    Hi everyone,

    is quantization-aware training supported in ONNX?

    Thanks.

    Jim Spohrer
    @jimspohrer
    @dodler - not that I know of, but it is a good direction for sure. Just re-reading this paper, so I know quantization is of big interest to the ONNX community building tools - https://arxiv.org/pdf/1908.05858.pdf
    Svetlana Levitan
    @sveta-levitan
    Hi Artyom @dodler,
    Thank you for the good question. ONNX training is normally discussed in the "training" gitter. You are welcome to join our next ONNX Training WG meeting on Tuesday at 10:30 am PDT, using Zoom https://zoom.us/j/7376656864
    William Luke
    @williamluke4
    Is there anywhere that I can find docs on the contents of a .onnx file?
    Dave Brown
    @ZoroDerVonCodier
    @williamluke4 check out https://github.com/onnx/onnx/blob/master/onnx/onnx.proto which defines the contents.
    William Luke
    @williamluke4
    @ZoroDerVonCodier Thanks
    Josh Bradley
    @jgbradley1

    Hello, I'm trying to write a very simple C++ application that uses the C++ onnxruntime API to read in an ONNX model and perform batch inference. I'm using a ResNet model from the model zoo to test my code. At this point, I don't care about the input data or output - I'm generating random values for the input. The examples in the onnxruntime repo are quite lacking. The imagenet example is the only example that shows batch processing, and it is so convoluted with pointer dereferences and operator overloads everywhere that I can barely follow what's going on. I've been using sample code from the microsoft/onnxruntime#2757 issue and have gotten stuck on making the first dimension symbolic.

    My question: does the onnxruntime C++ API provide a way to check and add a symbolic dimension - or is that task best done using the onnx library in C++ first?

    Any help is appreciated. I'm planning to contribute a few sample applications to the repo as I work through these problems too.
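    (I don't know whether the onnxruntime C++ API exposes this, but rewriting the model offline with the onnx Python library is one option. A minimal sketch; "batch" is just an arbitrary symbol name:)

        import onnx

        model = onnx.load("resnet.onnx")
        weights = {init.name for init in model.graph.initializer}
        for tensor in list(model.graph.input) + list(model.graph.output):
            if tensor.name in weights:
                continue                       # skip initializers that are also listed as inputs
            dim0 = tensor.type.tensor_type.shape.dim[0]
            dim0.dim_param = "batch"           # replaces the fixed dim_value with a symbol
        onnx.checker.check_model(model)
        onnx.save(model, "resnet_dynamic.onnx")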

    Josh Bradley
    @jgbradley1
    I was surprised to discover that the resnet models in the model zoo don't already have a symbolic first dimension.
    alexanderwatanabe
    @alexanderwatanabe
    sorry if this is a simple question. I am trying to get a PyTorch model into AWS Lambda. Due to size restrictions on AWS Lambda's layers, it seems easier to export my model to ONNX and use onnxruntime to run inference on this model.
    I've successfully exported my model to onnx, passed the model checker and am able to run inference on it using onnxruntime.InferenceSession as described in this documentation (https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html)
    However the example given relies on torchvision to transform a PIL image into a tensor, and the whole point of this exercise was to avoid PyTorch/torchvision in my inference deployment... is there a better way to shape/prepare an image and pass it to an onnxruntime.InferenceSession?
    Of course, I think I found what I need right after asking here, in the 3rd code cell of this notebook: https://github.com/onnx/tutorials/blob/master/tutorials/OnnxRuntimeServerSSDModel.ipynb
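    (For reference, a minimal torchvision-free preprocessing sketch using only PIL and numpy. The size, the ImageNet mean/std, and the NCHW layout are assumptions; match whatever your exported model expects:)

        import numpy as np
        from PIL import Image

        img = Image.open("input.jpg").convert("RGB").resize((224, 224))
        x = np.asarray(img, dtype=np.float32) / 255.0               # HWC, scaled to [0, 1]
        x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]     # ImageNet normalization
        x = x.transpose(2, 0, 1)[np.newaxis].astype(np.float32)     # HWC -> NCHW plus batch dim
        # then: session.run(None, {input_name: x})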
    Jaka
    @katrasnikj
    Hi, does someone know how to convert a float32 ONNX model to float16 format in order to speed up inference? Is there a converter somewhere in the onnx or onnxruntime repo? Thank you
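    (One option, assuming the onnxconverter-common package is the right tool here (pip install onnxconverter-common); it ships a float32-to-float16 pass. A minimal sketch:)

        import onnx
        from onnxconverter_common import float16

        model = onnx.load("model_fp32.onnx")
        model_fp16 = float16.convert_float_to_float16(model)   # casts weights and tensor types
        onnx.save(model_fp16, "model_fp16.onnx")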
    Jaka
    @katrasnikj
    Doesn't work
    Hardik Ajmani
    @hardik.ajmani_gitlab
    Hey, we have a TensorFlow model with some CNN and some LSTM layers.
    The ONNX conversion works fine, but when the parsed model is executed with TensorRT OSS 7.0.0 it throws the following error:
    In function parseGraph:
    [8] No importer registered for op: If
    [06/23/2020-11:16:39] [E] Failed to parse onnx file
    [06/23/2020-11:16:39] [E] Parsing model failed
    [06/23/2020-11:16:39] [E] Engine creation failed
    [06/23/2020-11:16:39] [E] Engine set up failed
    Are there any parameters to be changed while ONNX parsing, or any other recommendations for solving this issue?
    Any help would be appreciated.
    Charles Daniels
    @charlesdaniels
    Hi all, is there any documentation available for the ONNX Python API? I don't see it linked anywhere on the website or the GitHub repository.
    Svetlana Levitan
    @sveta-levitan
    @charlesdaniels There are many tutorials at https://github.com/onnx/tutorials; they should help with understanding the API. Also see https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md
    Charles Daniels
    @charlesdaniels
    @sveta-levitan I did see those, they weren't quite what I was after. What I'm specifically after is docs for whatever object load_model() returns, and the fields in it. I've been poking around with dir(), but I have to guess what everything means.
    My use case is that I want to access the data stored in the model directly so I can import ONNX files into a program I am writing.
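    (load_model()/onnx.load() returns a ModelProto, so its fields are the ones defined in onnx.proto rather than in separate API docs. A minimal sketch of walking those fields:)

        import onnx
        from onnx import numpy_helper

        model = onnx.load("model.onnx")            # returns a ModelProto
        print(model.ir_version, model.opset_import)
        for node in model.graph.node:              # the computation graph
            print(node.op_type, list(node.input), list(node.output))
        for init in model.graph.initializer:       # the stored weights
            print(init.name, init.dims)
            array = numpy_helper.to_array(init)    # weight data as a numpy array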
    Andrey Volodin
    @s1ddok

    Hello to ONNX community!

    Not sure this is the best place to do so, but I still would love to showcase a project I started two days ago, it's called Sardonyx. https://github.com/s1ddok/Sardonyx

    Currently this is a pure-Swift converter that generates Swift for TensorFlow models (a single data blob + code for both data parsing and inference) out of ONNX files. It already works on models like VGG19 or MobileNetV2 and supports a few layers.

    I'm going to add Metal Performance Shaders support in the near future. I think this tool can also eventually evolve into an ONNX-to-PyTorch, ONNX-to-TensorFlow, ONNX-to-BNNS, etc. tool as well.

    If you have any feedback for the project, I would love to hear that! Thanks!

    Ke Zhang
    @linkerzhang
    Interesting... ONNX should have been natively supported by PyTorch. Well, your tool may be a good thing for having TF be able to run ONNX models (along with ONNX being updated version by version).
    btw, there's also an onnx-to-tf effort made by IBM friends before, and you may take a look at that too.
    btw, is there any reason why you generate Swift code instead of generating any IR layers in TF?
    kzhou003
    @kzhou003
    Hello everyone
    adizhol
    @adizhol
    Hi,
    has anyone tried exporting the DeformConvFunction layer (DCN) of maskrcnn-benchmark to ONNX?
    Svetlana Levitan
    @sveta-levitan
    Who is in charge of the onnx.ai web page? It still says "IBM will be hosting the next ONNX Community meeting on April 9th." We are in July now! Thank you!
    Daniel
    @dtch1997
    hello! I'm interested in running ONNX-compliant models on an ARM Cortex-M7 chip. What would be the best way to do this?
    Prasanth Pulavarthi
    @prasanthpul
    @hardik.ajmani_gitlab make sure you are using the right opset when exporting the TF model for TensorRT. You can also try using ONNX Runtime, which integrates with TRT and provides full ONNX support.
    @sveta-levitan the webpage is managed from the https://github.com/onnx/onnx.github.io repo. I think you are referring to the news section, which shows the archive history of news stories. The latest news story shown is the new steering committee. Back in April, the story about the workshop was highlighted. We generally don't go back and update news stories.
    Prasanth Pulavarthi
    @prasanthpul
    @dtch1997 you could try https://github.com/microsoft/ell to compile for Cortex M
    @adizhol did you run into a problem when you used PyTorch's exporter for that model?
    ma-hei
    @ma-hei
    A general question: why is it that ONNX and ONNX Runtime are two separate projects on GitHub? I'm surprised to see onnx here https://github.com/onnx and onnxruntime here https://github.com/microsoft/onnxruntime. Are two completely separate groups working on them?
    Prasanth Pulavarthi
    @prasanthpul
    @ma-hei ONNX Runtime supports the full ONNX spec, but currently it is a separate project since the ONNX project is focused on the spec.
    ma-hei
    @ma-hei
    got it, thanks
    ma-hei
    @ma-hei
    @prasanthpul I have another question regarding basic ONNX understanding. Looking at the examples here https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md, I find one example that builds a simple ONNX model consisting of just a padding node. What I haven't figured out yet is whether I can run inference on this model. I assumed I could do "sess = onnxruntime.InferenceSession(model_def)" after the model is defined, but it throws an error (I expected I could give the model some input and would get the padded output as a result). Maybe my understanding of ONNX is wrong. Basically, I see a lot of tutorials where models are built in frameworks such as PyTorch and then exported to ONNX (and inference is run on the resulting ONNX model). But what I haven't understood yet is whether I can technically also build a very simple (possibly parameter-free) model only in ONNX, using the operators given by ONNX, and then run inference on that!?
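    (This should work; one likely cause of the error is that onnxruntime.InferenceSession expects a file path or serialized bytes, not the ModelProto itself. A minimal sketch of a one-node Pad model built purely with onnx.helper and run in onnxruntime:)

        import numpy as np
        import onnx
        import onnxruntime
        from onnx import TensorProto, helper

        pads = helper.make_tensor("pads", TensorProto.INT64, [4], [0, 1, 0, 1])  # pad dim 1 by 1 on each side
        node = helper.make_node("Pad", ["x", "pads"], ["y"], mode="constant")
        graph = helper.make_graph(
            [node], "pad_graph",
            [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 2])],
            [helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 4])],
            initializer=[pads],
        )
        model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
        onnx.checker.check_model(model)

        sess = onnxruntime.InferenceSession(model.SerializeToString())  # bytes, not the proto
        out = sess.run(None, {"x": np.ones((2, 2), np.float32)})[0]
        print(out)  # the 2x2 ones padded with zeros to 2x4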
    Guanghui Chen
    @golden0080
    Hello ONNX team! I'm recently trying to work with a TensorFlow model and convert it to ONNX. It has a CUDA custom op in TF; do I need to implement that custom op in ONNX as well to make the tf2onnx conversion successful?
    I guess my next question is: is there a detailed tutorial on implementing a new custom op in ONNX?
    I've searched a bunch of places without seeing very helpful examples.