    Alex Wiltschko
    @alexbw
    Okee dokee, maybe this is useful, maybe it's not
    Ke Tran
    @ketranm
    Does anyone manage to run train-penn-lstm.lua, I got the following error
    /torch/install/bin/luajit: [string "return function(locals, rlocals, vlocals, Log..."]:528: attempt to call upvalue 'torch_neg' (a nil value)
    stack traceback:
        [string "return function(locals, rlocals, vlocals, Log..."]:528: in function 'df'
        train-penn-lstm.lua:183: in main chunk
        [C]: in function 'dofile'
        ...tran/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x004064f0
    Yong Fei
    @yongfei25
    Hi, is torch.cat for CUDA tensor supported?
    Yong Fei
    @yongfei25
    alright.. for arrays, only double tensors are supported.
    Reed
    @read-mind
    @ketranm I have the same issue, did you figure it out?
    Clement Farabet
    @clementfarabet
    @ketranm @read-mind :neg() was introduced in torch at commit 2546dc6374b1f086907368000d4c9049771f790a
    you prob have an earlier version installed
    try bumping it: luarocks install torch
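    (A quick, hedged sanity check, not from the thread: after upgrading, the operator should exist when checked from the th REPL.)
        -- prints true once the installed torch provides the neg operator
        print(torch.neg ~= nil)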
    @yongfei25 torch.cat doesn't support auto typing yet, but you can use: torch.FloatTensor.cat to get the float version
    and torch.CudaTensor.cat
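    (A minimal sketch of the typed cat workaround above; it assumes a cutorch install whose CudaTensor implements cat, and the shapes are just for illustration.)
        require 'torch'
        require 'cutorch'
        -- torch.cat doesn't auto-dispatch on the tensor type yet,
        -- so call the typed version directly:
        local a = torch.CudaTensor(2, 3):fill(1)
        local b = torch.CudaTensor(4, 3):fill(2)
        local c = torch.CudaTensor.cat(a, b, 1)   -- 6x3 CudaTensor
        -- same idea for the float version:
        local x = torch.FloatTensor(2, 3):fill(1)
        local y = torch.FloatTensor(4, 3):fill(2)
        local z = torch.FloatTensor.cat(x, y, 1)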
    Andreas Köpf
    @andreaskoepf
    Hi guys, I played a little bit with autograd tonight and it is really a fantastic tool. But while some fairly complex nn-related code actually works, I quite easily ran into a wall with rather simple things, e.g. twitter/torch-autograd#77 ... Maybe the state of the project is not yet mature enough to treat it as an opaque black box?
    luketwitter
    @luketwitter
    @andreaskoepf there are many cases that don't work yet, and the errors provided are quite bad unfortunately, so it's hard to know what you're doing wrong
    in your case, it's the x1 = assignment that isn't supported - we don't allow mutating existing tensors
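    (A hedged illustration of the point above; the function and parameter names are made up, not taken from issue #77.)
        local grad = require 'autograd'
        -- not supported: writing into a tensor that already exists inside the traced function
        local badFn = function(params, x)
           local x1 = x.new(x:size()):zero()
           x1[1] = 1                                 -- in-place mutation; autograd can't trace this
           return torch.sum(torch.cmul(params.W, x1))
        end
        -- supported style: build new tensors out of pure, out-of-place ops
        local goodFn = function(params, x)
           local x1 = torch.cmul(params.W, x)
           return torch.sum(x1)
        end
        local dGood = grad(goodFn)
        local g = dGood({W = torch.randn(4)}, torch.randn(4))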
    Andreas Köpf
    @andreaskoepf
    @luketwitter Ok, but I guess you guys are working on it? There is a test called NarrowCopy which seems to do what I am looking for.
    luketwitter
    @luketwitter
    yeah, we're working on improving errors
    if you can write your example the same way the NarrowCopy test is written, it should work
    Andreas Köpf
    @andreaskoepf
    @luketwitter nice, then it seems the index metamethod of tensor only needs to trigger exactly what torch.select() currently does .. I will experiment with it a bit.
    luketwitter
    @luketwitter
    right, it's just another corner case we haven't gotten to. the main issue is we need to make it clear when you've gone beyond the supported set of operations
    instead of letting it fail later with cryptic errors
    Andreas Köpf
    @andreaskoepf
    @luketwitter yes, for example: res = torch.zero(inputs.x.new(torch.size(inputs.x))) works, but res = torch.zeros(torch.size(inputs.x)) gives support.lua:30: attempt to index local 'A' (a nil value) ...
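    (For context, a minimal sketch of the working form inside a differentiated function; the function itself is hypothetical.)
        local grad = require 'autograd'
        local f = function(inputs)
           -- works: allocate through inputs.x.new(...) and zero it
           local res = torch.zero(inputs.x.new(torch.size(inputs.x)))
           res = res + inputs.x
           return torch.sum(res)
           -- torch.zeros(torch.size(inputs.x)) here currently fails as described above
        end
        local df = grad(f)
        local grads = df({x = torch.randn(3, 4)})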
    Alex Wiltschko
    @alexbw
    @andreaskoepf Thanks for hammering on the project, this is exactly what we need to mature it quickly. The best thing (for me at least) is for you to open issues when you see odd edge-case behavior, and we'll try to fix them as we can
    alex-weaver
    @alex-weaver
    hi, hopefully a simple question: is reshape supported?
    I'm guessing not, it doesn't seem to have an entry in gradfuns.lua (unless this is the wrong place to be looking?)
    vislab2013
    @vislab2013
    anyone else having trouble running the example code? for example train-mnist-cnn.lua
    vislab2013
    @vislab2013
    nvm, I had to update my torch installation and got it to work.
    Alex Wiltschko
    @alexbw
    @alex-weaver looks like we don't have a grad for reshape, but not hard to add it. Open an issue?
    Alex Wiltschko
    @alexbw
    actually @alex-weaver hold off, I implemented it, but in a separate branch. Got some new features coming soon, and I'll merge it once they're done.
    Yong Fei
    @yongfei25
    Hi, it seems that wrapping nn modules inside a gradient function is no longer supported after recent commits:
    neuralNet = function(params, x, y, act)
       local relu = grad.nn.ReLU()
    end
    dneuralNet = grad(neuralNet)
    Err: Input is not a package name or nn object
    Yong Fei
    @yongfei25
    I'm not sure if this is an issue, or if it is expected behavior in recent commits.
    Alex Wiltschko
    @alexbw
    @yongfei25 I'll check that out
    Hey, it works for me. Pull the freshest version, force-remove your autograd install, and reinstall.
    It's really easy to get Luarocks into a "dirty" state, where things don't update properly.
    Should probably write our own package manager at some point...
    Alex Wiltschko
    @alexbw
    Hey, for future reference, I'll be checking our slack channel much more than this gitter channel.
    You can join here: https://autograd.herokuapp.com/
    alex-weaver
    @alex-weaver
    @alexbw great, that's good to hear
    I've just noticed that torch.View is implemented; that might be a better choice for this use case
    basically I'm trying to use optim.lbfgs along with the optim.lswolfe line search. The way that autograd.optim wraps the original methods is incompatible with optim.lswolfe, since the optim.lswolfe implementation assumes that the parameters are all flattened in a 1-dimensional tensor.
    alex-weaver
    @alex-weaver
    I'm currently just flattening the gradients after autograd has done its work, but I was looking for a technique that didn't involve a full copy of the gradients on every evaluation. The only thing I could come up with is to work with flattened parameters and have the forward function use narrow and reshape/view to recover the original parameter tensors. That said, some brief performance tests suggest that flattening the gradients isn't even remotely a bottleneck, so it's probably nothing to worry about!
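    (A rough sketch of the full-copy flattening described above; names are hypothetical and it assumes a flat table of gradient tensors. It just concatenates them into the 1-D layout optim.lswolfe expects.)
        local function flattenGrads(gradTable)
           -- count the total number of gradient elements
           local n = 0
           for _, g in pairs(gradTable) do n = n + g:nElement() end
           -- copy every gradient tensor into one flat vector
           local flat = torch.Tensor(n)
           local offset = 1
           for _, g in pairs(gradTable) do
              flat:narrow(1, offset, g:nElement()):copy(g)
              offset = offset + g:nElement()
           end
           return flat
        end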
    alex-weaver
    @alex-weaver
    I don't know if the cost of copying the gradients will start to dominate in models with significantly more parameters; if it does, using view might be worth it
    vislab2013
    @vislab2013
    Question: should I be posting questions/issues in this chat regarding this: https://blog.twitter.com/2016/distributed-learning-in-torch ?
    M. Farrajota
    @farrajota
    found a reddit post with a link to deepmind's go nature paper for those interested: https://www.reddit.com/r/MachineLearning/comments/42ytdx/pdf_mastering_the_game_of_go_with_deep_neural/
    Will Frey
    @willfrey
    Has anyone here used the torch-dataset package at all? I'd appreciate some help if anyone is willing. I'm having trouble getting my Dataset object to recognize that I have a prefixURL.
    M. Farrajota
    @farrajota
    is there a way to target specific threads after they have been launched? The docs aren't clear enough about this specific task
    M. Farrajota
    @farrajota
    wrong chat, sorry
    Abdullah Jamal
    @abdullahjamal
    Hi, is nn.ParallelCriterion() supported in torch-autograd?
    Abdullah Jamal
    @abdullahjamal
    Hi guys, can we add autograd.nn.AutoCriterion('lossfunc') together with nn.ParallelCriterion() in torch-autograd? Let's say I have 3 loss functions: the first is nn.CrossEntropyCriterion and the other two are nn.AutoCriterion() losses. Can I combine these using nn.ParallelCriterion()?
    Tanay Mehta
    @heytanay
    Is this place even alive?