    Jim Spohrer
    @jimspohrer
    Have started inviting people to Slack
    7 replies
    Jason
    @jiangjiajun
    Hello everyone, I am wondering how to contribute a new repository at https://github.com/onnx, like tensorflow-onnx or keras-onnx. Is there a guide doc?
    Thomas A. Rieck
    @trieck
    I am not able to use my existing Slack account to log in to https://lfaifoundation.slack.com/. Is anyone else having issues? Thanks.
    3 replies
    I would like to join the ONNX slack channels
    imaquantumcomputer
    @imaquantumcomputer
    I need to implement either sampling without replacement or random shuffling. Does the ONNX opset support either of these?
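    There is no dedicated shuffle or sample-without-replacement operator in the ONNX opset, but one common workaround is the random-keys trick, which composes from ops that do exist: RandomUniformLike to draw a key per element, TopK (or an argsort) over the keys, and Gather by the resulting indices. A NumPy sketch of the idea (the variable names are illustrative, not part of any API):

    ```python
    import numpy as np

    # Random-keys trick: draw a uniform key per element, order by key, gather.
    # In ONNX graph terms this maps onto RandomUniformLike -> TopK -> Gather.
    rng = np.random.default_rng(42)
    x = np.arange(10)

    keys = rng.random(x.shape[0])   # RandomUniformLike
    order = np.argsort(keys)        # TopK over the keys yields these indices

    shuffled = x[order]             # Gather with all indices: a full shuffle
    k = 4
    sample = x[order[:k]]           # first k indices: sampling w/o replacement
    ```

    The same graph serves both use cases: taking all n indices gives a random shuffle, taking the top k gives a uniform sample without replacement.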
    Prasanth Pulavarthi
    @prasanthpul
    Jim Spohrer
    @jimspohrer
    To join the LF AI ONNX Slack on your own - please use https://slack.lfai.foundation/
    Jim Spohrer
    @jimspohrer
    Per ONNX Steering Committee request - please migrate to Slack - sign up here: https://slack.lfai.foundation/ (join the Slack Channels you are interested in for general, SIGs, WGs, Roadmap, etc.) - thanks all for your help.
    We will be using Slack to coordinate for the upcoming ONNX community meeting that is being planned - online event.
    Jim Spohrer
    @jimspohrer
    Hold the dates - ONNX Community Meeting Oct 14th and six ONNX Roadmap meetings
    1 reply
    kryptine
    @kryptine
    I found this file in the ONNX repository which shows plans to relicense to Apache-2.0 but I can't find the corresponding GitHub issue or discussion topic. I just want to say that a dual MIT/Apache-2.0 license is better as it allows the most flexibility (patents and compatibility).
    bradappel
    @bradappel
    I downloaded the resnet50-v2-7.onnx network from the Model Zoo. Having executed it in TensorRT, I see that it does not end with a Softmax operation. Is there a way to add this operation to the .onnx network (before it is parsed in TensorRT) using the ONNX Python API?
    Ke Zhang
    @linkerzhang

    @bradappel ONNX does not provide much of an API to edit a model, I think.

    ONNX Runtime does have such an API to load, edit and save a model.
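    For what it's worth, this particular edit is also doable at the protobuf level with `onnx.helper`: load the model, append a Softmax node that consumes the current graph output, and redeclare the output. A minimal sketch on a tiny stand-in graph - the same steps would apply to a loaded `resnet50-v2-7.onnx`, and the name `probs` is made up here:

    ```python
    import onnx
    from onnx import helper, TensorProto

    # Stand-in model (one Identity op); for the real case you would start from
    # model = onnx.load("resnet50-v2-7.onnx") instead of building a graph.
    inp = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 1000])
    out = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 1000])
    ident = helper.make_node("Identity", ["x"], ["y"])
    graph = helper.make_graph([ident], "demo", [inp], [out])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])

    # Append a Softmax that consumes the old output, then rewire the graph output
    old_name = model.graph.output[0].name
    model.graph.node.append(
        helper.make_node("Softmax", [old_name], ["probs"], axis=1))
    new_out = helper.make_tensor_value_info("probs", TensorProto.FLOAT, [1, 1000])
    del model.graph.output[:]
    model.graph.output.extend([new_out])

    onnx.checker.check_model(model)  # sanity-check before saving
    # onnx.save(model, "resnet50-v2-7-softmax.onnx")
    ```

    TensorRT would then parse the saved model with the Softmax already in place.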

    Иван Сердюк
    @oceanfish81_twitter
    Any interest in supporting Clang as a compiler for the ONNX project?
    Prasanth Pulavarthi
    @prasanthpul
    @oceanfish81_twitter please file this as a discussion on github: https://github.com/onnx/onnx/discussions
    padmasreenagarajan
    @padmasreenagarajan
    When converting the YOLOv3 .pth model to .onnx, I'm facing tracer warnings in models.py.
    Could anyone help me out?
    Null Pointer Exception
    @_boring_af_twitter
    Hey guys. I'm trying to convert a pretrained GPT-2 model to ONNX format. The conversion goes well via the script onnxruntime_tools/transformers/convert_to_onnx.py. If I use -p fp32 for the conversion, inference is fine. But if I change it to fp16, GPT-2 starts to output nonsense, as if I were using the wrong tokenizer. What might be the problem? My GPU is a Tesla V100 16GB.
    Prasanth Pulavarthi
    @prasanthpul
    @_boring_af_twitter since this is about onnxruntime and onnxruntime_tools, please ask this in the ONNX Runtime discussions (https://github.com/microsoft/onnxruntime/discussions) so the right team can help you.
    @padmasreenagarajan since this is about PyTorch's exporter, please raise this in the PyTorch GitHub
    Иван Сердюк
    @oceanfish81_twitter

    Hold the dates - ONNX Community Meeting Oct 14th and six ONNX Roadmap meetings

    @jimspohrer, is there any room for additional agenda items?
    Say, we could discuss Clang support (and other LLVM-related topics)

    3 replies
    @kentonv, I added you here so you could share your vision regarding ONNX's protobuf usage and compiler support
    Jim Spohrer
    @jimspohrer
    ONNX Roadmap (2 of 6) starting later today, Asia-friendly time in 10 hours, 5:30pm Pacific time
    Please join LF AI ONNX Slack for more information - sign up here: https://slack.lfai.foundation/ - then browse to join #onnx-roadmap slack channel
    Bhushan Sonawane
    @bhushan23
    Thread for continuing the discussion on a reference implementation; a few thoughts:
    5 replies
    Jim Spohrer
    @jimspohrer
    The ONNX Steering Committee has requested everyone to sign up for a LF AI slack account - and use the ONNX slack channels there as the primary communications medium. You can sign up for LF AI Slack at this link: https://slack.lfai.foundation/
    Please let me know if you have any questions
    Jim Spohrer
    @jimspohrer
    For technical issues with ONNX - use Github issues for discussions - https://github.com/onnx/onnx/issues
    On LF AI Slack - you can click on "Channels + Browse" to find: #onnx-general, #onnx-archinfra, #onnx-converters, #onnx-modelzoo, #onnx-release, #onnx-roadmap and other slack channels
    If you need any help, just ask.
    The ONNX SC would like to get the community migrated to Slack before the next community meeting/workshop
    Thanks for your help.
    Jim Spohrer
    @jimspohrer
    ONNX Community Meeting Oct 14th - looking for community presentations - let us (ONNX Steering Committee Members) know if you have interest in presenting
    Harry Kim
    @harryskim
    As Jim mentioned, we are migrating Gitter to Slack. Please sign up for LF AI slack using https://slack.lfai.foundation/ and join "onnx-general" channel.
    Prasanth Pulavarthi
    @prasanthpul
    posting in all channels:
    ONNX will be switching to the Apache 2.0 license by the end of this month. Instead of signing a CLA, contributors need to provide a DCO (Developer Certificate of Origin). This is done by including a Signed-off-by line in commit messages. Using the "-s" flag for "git commit" will automatically append this line. For example, running "git commit -s -m 'commit info.'" will produce a commit with the message "commit info. Signed-off-by: First Last <email@company.com>". The DCO bot will ensure commits are signed with an email address that matches the commit author before they are eligible to be merged.
    2 replies
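    The sign-off flow described above can be tried end-to-end in a throwaway repository; this sketch assumes git is available and sets the identity explicitly so the trailer matches:

    ```shell
    # Demonstrate the DCO sign-off in a throwaway repo
    cd "$(mktemp -d)"
    git init -q
    git config user.name "First Last"
    git config user.email "email@company.com"
    echo "change" > file.txt
    git add file.txt
    git commit -q -s -m "commit info."
    # The -s flag appended the Signed-off-by trailer:
    git log -1 --format=%B
    ```

    A commit made without `-s` can still be fixed before pushing with `git commit --amend -s --no-edit`.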
    padmasreenagarajan
    @padmasreenagarajan

    Hi, I'm trying to convert a MobileNetV3 ONNX model to TensorFlow.
    I need tensorflow==1.12.0 for my inferencing.
    I've searched all over for the onnx and onnx-tf versions corresponding to tensorflow==1.12.0.

    Could you please suggest compatible versions of onnx, onnx-tf and torch for my required TensorFlow version?

    Prasanth Pulavarthi
    @prasanthpul
    @padmasreenagarajan For technical questions, please file an issue on GitHub. Since you are converting ONNX to TF instead of inferencing the ONNX model directly, you can file it in https://github.com/onnx/onnx-tensorflow
    padmasreenagarajan
    @padmasreenagarajan
    Okay, I have opened an issue under onnx-tensorflow, which can be found at the link below:
    onnx/onnx-tensorflow#764
    @prasanthpul Do you have any solution for this?
    Jim Spohrer
    @jimspohrer
    Reminder: Next ONNX Community Meeting/Workshop online Oct 14, 7-10am PT. Got an ONNX use case to present? Please let the ONNX Steering Committee know - and get on the agenda. Join LF AI Slack and onnx-general channel for updates and more information - https://slack.lfai.foundation
    Prasanth Pulavarthi
    @prasanthpul
    @padmasreenagarajan I don't. Most of us use ONNX Runtime (https://onnxruntime.ai) for inferencing
    Jim Spohrer
    @jimspohrer
    Reminder: Join Slack to be part of ONNX SIG & Working Group, Steering Committee, Release, and General discussion - many have moved already, thank you! Sign up here: https://slack.lfai.foundation - then go to channel + browse, and add slack channels
    padmasreenagarajan
    @padmasreenagarajan
    Hi all!
    padmasreenagarajan
    @padmasreenagarajan
    For inferencing MobileNetV3, I am using TIDL by Texas Instruments.
    h-swish is an activation function that is not supported by the TIDL import tool.
    So, could anyone suggest an alternative to h-swish?
    Or is there any possibility of adding an h-swish operator to the ONNX operator set?
    11 replies
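    One note on the operator question: h-swish is usually decomposed into ops the ONNX opset already has, since h-swish(x) = x · ReLU6(x + 3) / 6 and ReLU6 is just Clip(·, 0, 6) - i.e. an Add → Clip → Mul → Div subgraph. (A dedicated HardSwish operator was later added to the opset as well.) Whether the TIDL import tool accepts the decomposition is a separate question, but a NumPy sketch of the identity:

    ```python
    import numpy as np

    def h_swish(x):
        # h-swish(x) = x * ReLU6(x + 3) / 6, where ReLU6(t) = clip(t, 0, 6).
        # As an ONNX subgraph: Add(x, 3) -> Clip(0, 6) -> Mul(x, .) -> Div(., 6)
        return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0

    x = np.array([-4.0, -3.0, 0.0, 1.0, 3.0, 6.0])
    y = h_swish(x)
    ```

    Note that h-swish is exactly zero for x ≤ -3 and exactly x for x ≥ 3, which is what makes it a cheap drop-in approximation of swish.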
    Jim Spohrer
    @jimspohrer
    @ShuangLiu may have pointers - Tencent/ncnn#1402
    1 reply
    ONNX Community Meeting Workshop event website and registration are open - see onnx-general slack channel for more details. Join ONNX Slack via https://slack.lfai.foundation - then go to channel + browse, and add slack channel
    Ofir Zafrir
    @ofirzaf
    Hi, I am interested in ONNX Runtime quantized inference, and I can't find detailed documentation about the existing quantization options or how to know which kernel will actually run after converting a model using the ONNX Runtime quantization API.
    1 reply
    Tiberio
    @tiberiusferreira
    Hello everyone!
    [attachment: Screen Shot 2020-09-27 at 22.32.24.png]
    I'm trying to create a simple model and train it using ONNX Runtime, but I can't find an equivalent of sess = onnxruntime.InferenceSession(path_save) for training.
    2 replies