    Sam Hodge
    @samhodge
    I can just reshape the appropriate symbol
    Anirudh Subramanian
    @anirudh2290
    yes, but you need something equivalent to mx.sym.Variable in gluon, correct?
    Sam Hodge
    @samhodge
    I think so
    it feels like I am trying to rough out an idea at this stage
    Sam Hodge
    @samhodge
    Actually, don't worry about that. I think I need to continue with a fixed-resolution model first, simply try that out with the feedforward in C++, and then build it up from there. I am a little confused about whether it is possible to build the gram matrix and the inspiration network in C++, but as a start it might be worth just loading up the model and params that I have for a decently trained model.
    I just worry that I am delaying a decision
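A minimal sketch of the serialisation step being discussed, assuming a Gluon HybridBlock (the layers and the style_model name below are placeholders, not the actual style network): hybridizing and exporting writes the -symbol.json and -params files that a C++ feedforward could later load.

```python
import mxnet as mx
from mxnet.gluon import nn

# Hypothetical stand-in for the style network; any HybridBlock exports the same way.
style_model = nn.HybridSequential()
style_model.add(nn.Conv2D(32, kernel_size=3, padding=1, activation='relu'),
                nn.Conv2D(3, kernel_size=3, padding=1))
style_model.initialize()

# Hybridize, run one forward pass to trace the graph, then write
# model-symbol.json and model-0000.params for deployment.
style_model.hybridize()
style_model(mx.nd.zeros((1, 3, 256, 256)))
style_model.export('model')
```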
    Anirudh Subramanian
    @anirudh2290
    so the link that you pointed to https://github.com/samhodge/incubator-mxnet/blob/master/cpp-package/include/mxnet-cpp/symbol.hpp is just the cpp package; it's just another frontend like Python.
    Sam Hodge
    @samhodge
    Yup, but C++ is easier to deploy than Python
    Anirudh Subramanian
    @anirudh2290
    I am just wondering why you can't keep width and height as symbol variables and bind them at runtime.
    Sam Hodge
    @samhodge
    Sounds great
    but I am not sure if I understand how to do that
    how can you load the params if the width and height are not known?
    so when you do x = mx.sym.var('data')
    y = style_model(x)
    Sam Hodge
    @samhodge
    can you also do some sort of magic like a = mx.sym.var('width'), b = mx.sym.var('height'), y = style_model(x, height=a, width=b)
    is that what you mean?
    Anirudh Subramanian
    @anirudh2290
    yes
    Sam Hodge
    @samhodge
    I am honestly a bit of a noob
    Anirudh Subramanian
    @anirudh2290
    i am also a noob with gluon
    Sam Hodge
    @samhodge
    it's all good, you learn by making mistakes
    let me try this out.
    Anirudh Subramanian
    @anirudh2290
    but with the symbolic API you just use placeholders for data and then bind them to the data at the end; I think this should be possible with gluon too
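A rough sketch of that placeholder-then-bind idea with the plain symbolic API (the small conv layer and names are illustrative, not the actual style network): the symbol is built once with no fixed width or height, and the concrete shape is only supplied when it is bound.

```python
import mxnet as mx

# Placeholder with no width/height baked into the graph.
data = mx.sym.var('data')
net = mx.sym.Convolution(data, num_filter=8, kernel=(3, 3), pad=(1, 1), name='conv0')
net = mx.sym.Activation(net, act_type='relu')

# Bind the same symbol to different resolutions at runtime.
for h, w in [(256, 256), (480, 640)]:
    exe = net.simple_bind(mx.cpu(), data=(1, 3, h, w))
    out = exe.forward(data=mx.nd.random.uniform(shape=(1, 3, h, w)))
    print(out[0].shape)  # spatial size follows the bound input shape
```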
    Anirudh Subramanian
    @anirudh2290
    apache/incubator-mxnet#6087 you probably need something like this
    Sam Hodge
    @samhodge
    I am hoping I can work out how to get the width and height as symbols to the network
    Sam Hodge
    @samhodge
    I think this relates to my issue
    Sam Hodge
    @samhodge
    @
    Anirudh Subramanian
    @anirudh2290
    @samhodge this should help you
    apache/incubator-mxnet#9893
    Sam Hodge
    @samhodge
    thanks, I need to go a few steps back at the moment.
    it seems that training the model that I am trying to serialise as a symbolic network doesn't work without any modifications
    I will put in a ticket now about this issue
    Sam Hodge
    @samhodge
    apache/incubator-mxnet#9989
    So I would be happy with a fixed resolution for now, but I cannot even get that working.
    Sam Hodge
    @samhodge
    @zhanghang1989 do you have an opinion?
    Lutz Roeder
    @lutzroeder

    [screenshot: Screen Shot 2018-03-07 at 8.23.49 PM.png]

    Netron now supports MXNet -symbol.json models. Feedback is welcome.

    Anirudh Subramanian
    @anirudh2290
    @lutzroeder awesome, thanks a lot! I am not sure there are enough people from the community here. I will post this in the Slack channel. I know that some people have asked for this in the community.
    ThomasDelteil
    @ThomasDelteil
    Thanks @lutzroeder, really cool, I'll use Netron in my next talk to show the model architecture! Just tried it out with the Crepe model. I am just wondering what makes a convolution show as dilates=(1,) rather than just hiding the dilation factor? (I am assuming dilates=(1,) is the same as no dilation?)
    Lutz Roeder
    @lutzroeder
    @ThomasDelteil The default for dilate is (1,1). Any insights into whether (1,) is some special encoding are welcome. Currently the app shows the values present in the file. It filters defaults for other formats, but they have to be added to the operator file since the app doesn't depend on the MXNet runtime directly. Feel free to open an issue and I will have a look.
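For what it's worth, one quick way to inspect what MXNet itself serialises when dilate is left unspecified (a hypothetical one-layer symbol, not the Crepe model):

```python
import mxnet as mx

# A Convolution created without an explicit dilate argument; printing the
# serialised symbol shows exactly what ends up in the -symbol.json file.
data = mx.sym.var('data')
conv = mx.sym.Convolution(data, num_filter=4, kernel=(3, 3), name='conv0')
print(conv.tojson())
```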
    ThomasDelteil
    @ThomasDelteil
    I see, thanks @lutzroeder
    Lutz Roeder
    @lutzroeder
    @ThomasDelteil Added a few heuristics for some common defaults and pushed an update.
    ThomasDelteil
    @ThomasDelteil
    awesome :) I can see the dilate is hidden now :+1:
    nebw
    @nebw
    Hi, I'm trying to compile an MXNet model using NNVM as described here: http://nnvm.tvmlang.org/tutorials/from_mxnet.html#sphx-glr-tutorials-from-mxnet-py Everything works fine, except when I'm trying to compile with cuda as the target with batch size > 1. All NNVM tutorials and examples I could find also only use batch size 1. The error is: RuntimeError: Batch size: 32 is too large for this schedule (topi/cuda/conv2d_nchw.py", line 527, in schedule_conv2d_nchw)
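For context, a sketch of the compile flow from that tutorial at batch size 32 (the ResNet model and shapes are illustrative, assuming the NNVM-era API from the linked page); the cuda target is what triggers the schedule error:

```python
import mxnet as mx
import nnvm
import nnvm.compiler

batch_size = 32
# Any Gluon model-zoo network stands in here for the model actually being compiled.
block = mx.gluon.model_zoo.vision.resnet18_v1(pretrained=True)
sym, params = nnvm.frontend.from_mxnet(block)

shape_dict = {'data': (batch_size, 3, 224, 224)}
# Non-cuda targets reportedly compile fine; target='cuda' with batch_size > 1 hits
# "Batch size: 32 is too large for this schedule" in topi/cuda/conv2d_nchw.py.
graph, lib, params = nnvm.compiler.build(sym, target='cuda',
                                         shape=shape_dict, params=params)
```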
    Aaron Markham
    @aaronmarkham
    hey, anyone know what happened to data.mxnet.io?
    Arunkumar Venkataramanan
    @ArunkumarRamanan
    TensorFlow vs MXNet: who is the winner?
    Alexander Konovalov
    @alexknvl
    depends on the judge
    my roommate had some pretty negative feedback about MXNet recently, which in my experience applies to the majority of such frameworks: non-existent or incomprehensible error reporting