    Prasanth Pulavarthi
    @prasanthpul
    @dtch1997 you could try https://github.com/microsoft/ell to compile for Cortex M
    @adizhol did you run into a problem when you used PyTorch's exporter for that model?
    ma-hei
    @ma-hei
    A general question: why is it that ONNX and ONNX Runtime are two separate projects on GitHub? I'm surprised to see onnx here https://github.com/onnx and onnxruntime here https://github.com/microsoft/onnxruntime. Are two completely separate groups working on them?
    Prasanth Pulavarthi
    @prasanthpul
    @ma-hei ONNX Runtime supports the full ONNX spec, but currently it is a separate project since the ONNX project is focused on the spec.
    ma-hei
    @ma-hei
    got it, thanks
    ma-hei
    @ma-hei
    @prasanthpul I have another question regarding basic ONNX understanding: looking at the examples here https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md I found one example that builds a simple ONNX model consisting of just a padding node. What I haven't figured out yet is whether I can run inference on this model. I assumed I could do "sess = onnxruntime.InferenceSession(model_def)" after the model is defined, but it throws an error (I expected I could give the model some input and would get the padded output as a result). Maybe my understanding of ONNX is wrong. Basically, I see a lot of tutorials where models are built in frameworks such as PyTorch, exported to ONNX, and then inference is run on the resulting ONNX model. But what I haven't understood yet is whether I can technically also build a very simple (possibly parameter-free) model only in ONNX, using the operators given by ONNX, and then run inference on it.
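    For reference, a minimal sketch of such a parameter-free Pad-only model, assuming a recent onnx/onnxruntime (opset >= 11, where pads is an input); note that InferenceSession expects a file path or serialized bytes rather than a ModelProto, which may be the source of the error above:

        import numpy as np
        import onnx
        import onnxruntime
        from onnx import TensorProto, helper

        # Single-node graph: Pad, with the pads supplied as an initializer
        pads = helper.make_tensor("pads", TensorProto.INT64, [4], [0, 1, 0, 1])
        node = helper.make_node("Pad", ["x", "pads"], ["y"], mode="constant")
        graph = helper.make_graph(
            [node], "pad-demo",
            [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 2])],
            [helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 4])],
            initializer=[pads],
        )
        model = helper.make_model(graph)
        onnx.checker.check_model(model)

        # InferenceSession takes a path or serialized bytes, not the ModelProto itself
        sess = onnxruntime.InferenceSession(model.SerializeToString(),
                                            providers=["CPUExecutionProvider"])
        print(sess.run(None, {"x": np.ones((2, 2), dtype=np.float32)})[0])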
    Guanghui Chen
    @golden0080
    Hello ONNX team! I'm currently trying to take a TensorFlow model and convert it to ONNX. It will have a CUDA custom op in TF; do I need to implement that custom op in ONNX as well to make the tf2onnx conversion succeed?
    I guess my next question is: is there a detailed tutorial on implementing a new custom op in ONNX?
    I've searched in a bunch of places but haven't found very helpful examples.
    Prasanth Pulavarthi
    @prasanthpul
    @ma-hei yes, it should be possible if the model is constructed properly. what error did you get?
    @golden0080 take a look at https://github.com/onnx/tensorflow-onnx#creating-custom-op-mappings-from-python for how to convert a TF model with custom ops.
    once you have generated the ONNX file with the custom op, you'll need to implement the custom op kernel in your runtime (ONNX Runtime, for example); see the sketch below
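    As a rough illustration of that second step (not authoritative; the library and model names below are placeholders): tf2onnx can also map unsupported ops 1:1 into a custom domain via its --custom-ops option, and ONNX Runtime can then load the matching kernel from a shared library you build:

        import onnxruntime as ort

        so = ort.SessionOptions()
        # Placeholder path: your custom op kernel, built as a shared library
        so.register_custom_ops_library("./libmy_custom_op.so")
        sess = ort.InferenceSession("model_with_custom_op.onnx", so,
                                    providers=["CUDAExecutionProvider",
                                               "CPUExecutionProvider"])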
    Daniel
    @dtch1997
    does onnx.shape_inference.infer_shapes work with quantized models?
    I'm encountering an issue where only the first layer has an inferred shape
    Ashwini Khade
    @askhade
    @dtch1997: It depends on whether the quantized ops you are using have a shape inference function defined in ONNX. Can you open an issue in the ONNX GitHub repo with more details?
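    In the meantime, a quick way to see which values actually get shapes back (the model path is a placeholder):

        import onnx
        from onnx import shape_inference

        model = onnx.load("quantized_model.onnx")  # placeholder path
        inferred = shape_inference.infer_shapes(model)
        # Values whose producing ops lack a shape inference function simply won't appear
        for vi in inferred.graph.value_info:
            print(vi.name, [d.dim_value for d in vi.type.tensor_type.shape.dim])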
    George Nash
    @georgen117
    I am relatively new to ONNX. I am working on adding new training operators for the DNNL provider, and I have a few questions. Are there any documents on how to run the BERT and GPT-2 training samples? I got lucky with the MNIST training sample because I found a post with a link to the model and training data, with instructions on how to run it. So far I have not been able to find similar instructions for the BERT and GPT-2 training samples. Does anyone have a link to a readme, or know who I could contact to find out how to run those samples?
    George Nash
    @georgen117
    Or is there a separate location for posting questions regarding ONNX Runtime?
    Prasanth Pulavarthi
    @prasanthpul
    @georgen117 Sounds like you are using ONNX Runtime's training capabilities. You can file issues and questions for ONNX Runtime at https://github.com/microsoft/onnxruntime
    Folks - we are considering moving discussions from Gitter to Slack. Any comments/feedback?
    Svetlana Levitan
    @sveta-levitan
    I like Gitter, and we have so many people here already; we'll probably lose some if we move to Slack.
    Daniel
    @dtch1997
    @askhade thanks for the response. I have made a GitHub issue here: onnx/onnx#2903.
    Gideon Grinberg
    @Gideon357
    How can I convert an XGBClassifier (xgboost) model to ONNX? I can't find docs for that specifically. Can anyone help me out? Thanks!
    I believe there was an issue in onnx/sklearn (or some other repo) and the xgboost repo (xgboost/xgboost ????)
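    A rough sketch of the usual onnxmltools route, assuming a trained XGBClassifier and a single float input (the feature count and names here are illustrative):

        import numpy as np
        import onnxmltools
        from onnxmltools.convert.common.data_types import FloatTensorType
        from xgboost import XGBClassifier

        X = np.random.rand(100, 4).astype(np.float32)
        y = np.random.randint(0, 2, 100)
        clf = XGBClassifier().fit(X, y)

        # One float tensor input covering all features
        initial_types = [("input", FloatTensorType([None, 4]))]
        onnx_model = onnxmltools.convert_xgboost(clf, initial_types=initial_types)
        onnxmltools.utils.save_model(onnx_model, "xgb_classifier.onnx")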
    Gideon Grinberg
    @Gideon357
    @onnx
    Prasanth Pulavarthi
    @prasanthpul
    all - please take a look at onnx/onnx#2925 which helps relax some of the opset versioning requirements
    Omar A. Elgendy
    @oelgendy
    Hi,
    I am doing inference with ONNX Runtime in C++. I converted the ONNX file to FP16 in Python using onnxmltools convert_float_to_float16. I obtain the fp16 tensor from a libtorch tensor and wrap it in an ONNX fp16 tensor using
    g_ort->CreateTensorWithDataAsOrtValue(memory_info, libtorchTensor.data_ptr(), input_tensor_size * 2, input_node_dims.data(), input_node_dims.size(), ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16, &onnxTensor)
    My GPU (NVIDIA GeForce GTX 1660 Ti with Max-Q Design) supports FP16 inference.
    What am I missing? Is the problem in the ONNX FP32->FP16 conversion?
    Thanks,
    -Omar
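    (For context, the Python-side conversion presumably looked something like the sketch below; file names are placeholders. It may be worth verifying that the FP16 model produces correct results in Python before debugging the C++ side.)

        import onnxmltools
        from onnxmltools.utils.float16_converter import convert_float_to_float16

        model_fp32 = onnxmltools.utils.load_model("model_fp32.onnx")  # placeholder
        model_fp16 = convert_float_to_float16(model_fp32)
        onnxmltools.utils.save_model(model_fp16, "model_fp16.onnx")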
    pvr23
    @pvr23

    I'm trying to "import onnx" from a Jupyter instance on my Mac and my kernel keeps dying. I have Python 3.7.x installed, with tf version 2.2.0 and torch version 1.5.1 (not sure if I need a specific version of either of these... I'm new to this). Has anyone else had an issue with their kernel dying when trying to import onnx?
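    One way to surface the real error is to try the import in a plain Python process instead of Jupyter, where a crash prints a traceback rather than silently killing the kernel:

        import sys
        print(sys.version)

        # If this also crashes or raises outside Jupyter, the problem is the
        # onnx install itself (often a binary/protobuf mismatch), not the kernel.
        import onnx
        print(onnx.__version__)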

    Ke Zhang
    @linkerzhang
    Hi ONNX partners, the Infra SIG is going to have a meeting on 8/6/2020 7 AM (Beijing time) / 8/5/2020 4 PM (Silicon Valley time :)). The meeting will mainly cover the ONNX optimizer moving plan. A meeting request will be sent out; please feel free to join. @/all
    Jim Spohrer
    @jimspohrer
    ONNX SC (Steering Committee) agenda items are here: https://github.com/onnx/steering-committee/blob/master/meeting-notes/20200729.md for the next meeting, Wed July 29 5pm PT (Asia-friendly); after that, Thu August 6 9:30am PT (Europe-friendly)
    Daniel
    @dtch1997
    hi there, does anyone know how to use onnx-tf with TF 2.2.0? The onnx-tf CLI exports a GraphProto (frozen graph), but TF 2.2.0 can only load models in the SavedModel format.
    This works if you use "tf.compat.v1" in place of "tf" everywhere
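    Concretely, loading the frozen GraphDef that onnx-tf emits under TF 2.x might look like this (the .pb path and tensor names are placeholders for whatever your graph actually uses):

        import numpy as np
        import tensorflow as tf

        # Read the frozen graph exported by onnx-tf
        graph_def = tf.compat.v1.GraphDef()
        with open("exported_model.pb", "rb") as f:  # placeholder path
            graph_def.ParseFromString(f.read())

        my_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder data
        with tf.compat.v1.Session() as sess:
            tf.compat.v1.import_graph_def(graph_def, name="")
            out = sess.run("output:0", feed_dict={"input:0": my_input})  # placeholder names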
    Almo Daved
    @CsharpIslife

    Hey everyone,
    Does anyone have experience with converting XGBoost models to ONNX?
    https://stackoverflow.com/questions/63172475/onnx-model-conversion-with-mutiple-input-types

    Our model needs to support multiple input types... and we could use some help / tips.

    Thanks!

    bcnv10354
    @bcnv10354
    hello
    anyone there?
    Ke Zhang
    @linkerzhang

    @CsharpIslife the error message is actually confusing. "For operator XGBRegressor (type: XGBRegressor), at most 1 input(s) is(are) supported but we got 2 output(s) which are ['input', 'another_input']"

    I guess the operator "XGBRegressor" only needs one input, but it's being fed two.

    btw, ONNX does not have an operator named "XGBRegressor"; is that a custom one you made?

    Ke Zhang
    @linkerzhang


    Hi ONNX partners,

    The ONNX Infra Sig Meeting on 8/6/2020 details are as below,

    ONNX LF AI Zoom 2 is inviting you to a scheduled Zoom meeting.

    Topic: ONNX Infra Sig Meeting - ONNX Optimizer Moving Plan
    Time: Aug 6, 2020 07:00 AM Beijing, Shanghai

    Join Zoom Meeting
    https://zoom.us/j/92530566708

    Meeting ID: 925 3056 6708
    One tap mobile
    +12532158782,,92530566708# US (Tacoma)
    +13017158592,,92530566708# US (Germantown)

    Dial by your location
    +1 253 215 8782 US (Tacoma)
    +1 301 715 8592 US (Germantown)
    +1 312 626 6799 US (Chicago)
    +1 346 248 7799 US (Houston)
    +1 646 558 8656 US (New York)
    +1 669 900 6833 US (San Jose)
    855 880 1246 US Toll-free
    877 369 0926 US Toll-free
    +1 204 272 7920 Canada
    +1 438 809 7799 Canada
    +1 587 328 1099 Canada
    +1 647 374 4685 Canada
    +1 647 558 0588 Canada
    +1 778 907 2071 Canada
    855 703 8985 Canada Toll-free
    Meeting ID: 925 3056 6708
    Find your local number: https://zoom.us/u/abnctx8jRF

    Almo Daved
    @CsharpIslife

    @linkerzhang
    Q: btw, ONNX does not have an operator named "XGBRegressor"; is that a custom one you made?
    A: We didn't make a custom operator; this is the exception thrown when converting with onnxmltools.

    I have updated the question on Stack Overflow.
    Do you have an example (Python statement) where an xgboost (or other) model is converted with onnxmltools with multiple TensorTypes?

    Ke Zhang
    @linkerzhang
    @CsharpIslife I don't have such an example at hand. You may want to move the question from Stack Overflow to https://github.com/onnx/onnxmltools to get help from the experts there.
    Almo Daved
    @CsharpIslife
    @linkerzhang Thank you! Just submitted the issue on GitHub with a script to reproduce the error: onnx/onnxmltools#410
    We do understand the concept of how to handle multiple input types, but it's just not clear how it should be formatted or what the exact issue is.
    That's why I was hoping someone could provide an example, to make sure we are not doing anything unexpected.
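    Not an authoritative answer, but one workaround sketch while the issue is open: cast everything to float32 and expose a single tensor input, since the converter appears to expect exactly one input (feature counts and names below are illustrative):

        import numpy as np
        import onnxmltools
        from onnxmltools.convert.common.data_types import FloatTensorType
        from xgboost import XGBRegressor

        # Cast int/bool/encoded-categorical columns to float32 and stack them,
        # so the exported ONNX graph needs only one input tensor.
        X = np.hstack([np.random.rand(100, 3),
                       np.random.randint(0, 5, (100, 2))]).astype(np.float32)
        y = np.random.rand(100)
        reg = XGBRegressor().fit(X, y)

        initial_types = [("input", FloatTensorType([None, X.shape[1]]))]
        onnx_model = onnxmltools.convert_xgboost(reg, initial_types=initial_types)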
    jojivk
    @jojivk
    Hi All,
    I am facing an issue with onnx-tf when porting an RN50 model from ONNX to TF. Instead of generating BatchNormalization, the tool generates a mix of ops as a substitute. The generated pb file classifies correctly, but the performance is terrible. The same issue is discussed here:
    onnx/onnx-tensorflow#356
    Any help appreciated.
    Abdullah Deliogullari
    @AbdullahDeliogullariCS
    Hello everyone, can you give some information about this error? --> In node -1 (importConv): UNSUPPORTED_NODE: Assertion failed: nbSpatialDims == kernelWeights.shape.nbDims - 2
    Dave Brown
    @ZoroDerVonCodier
    Hi all, I asked this a while ago but did not see an answer: is there any way we can get the MyCaffe AI Framework (https://www.nuget.org/packages?q=MyCaffe) listed as an ONNX-supporting framework? We support import and export using the ONNXControl (https://www.nuget.org/packages?q=ONNXControl). Thanks!
    Narasimha Prasanna HN
    @Narasimha1997
    I'm trying to convert the Universal Sentence Encoder to ONNX. The original model is from TensorFlow Hub. I used tf2onnx to convert the SavedModel to ONNX. There is an op known as "Sparse2Dense" which is not supported by ONNX (I tried up to opset 12). How can I proceed here? I want the model to be properly converted to ONNX. Please let me know how to solve this issue.
    Faith Xu
    @faxu
    @AbdullahDeliogullariCS where are you seeing this error? Could you provide some more context?
    @ZoroDerVonCodier Please follow the instructions here for listing logos: https://github.com/onnx/onnx/blob/master/community/logo_request.md
    Abdullah Deliogullari
    @AbdullahDeliogullariCS
    @faxu I am seeing this error while pushing weights and biases into ONNX tensors. The main problem is that I cannot push a multi-dimensional numpy array holding the weights into an ONNX tensor.
    Abdullah Deliogullari
    @AbdullahDeliogullariCS
    @faxu when I try to create a tensor with multi-dimensional weights using tensor = helper.make_tensor("example", TensorProto.FLOAT, [2,2], np.array([[3,3],[6,5]])), I get this error: array([3, 3]) has type <class 'numpy.ndarray'>, but expected one of: numbers.Real
    Ashwini Khade
    @askhade
    @AbdullahDeliogullariCS: This is because make_tensor expects a flat list of scalar values matching the declared shape, not a nested numpy array... something like helper.make_tensor("example", onnx.TensorProto.FLOAT, [2,2], np.array([[3,3],[6,5]]).reshape(4).tolist()) should fix the issue; see the sketch below
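    A quick sketch of both spellings that should work, assuming a reasonably recent onnx:

        import numpy as np
        from onnx import TensorProto, helper, numpy_helper

        arr = np.array([[3, 3], [6, 5]], dtype=np.float32)

        # Option 1: make_tensor wants a flat list of scalars, not a nested array
        t1 = helper.make_tensor("example", TensorProto.FLOAT, [2, 2],
                                arr.flatten().tolist())

        # Option 2: numpy_helper derives dtype and shape from the array itself
        t2 = numpy_helper.from_array(arr, name="example")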
    Abdullah Deliogullari
    @AbdullahDeliogullariCS
    np.array([[3,3],[6,5]]).shape gives me (2,2), so I thought the shape and the values matched each other. Passing [2,2] with [[3,3],[6,5]] also gives me an error. What am I missing?