For reference, a freshly constructed module starts with an empty output tensor:

```
th> a = nn.Linear(2,2)
th> a.output:size()
[torch.LongStorage of size 0]
```
```
/usr/local/share/lua/5.1/nn/Linear.lua:99: invalid arguments: CudaTensor number CudaTensor CudaTensor
expected arguments: *CudaTensor~2D* [CudaTensor~2D] [float] CudaTensor~2D CudaTensor~2D | *CudaTensor~2D* float [CudaTensor~2D] float CudaTensor~2D CudaTensor~2D
stack traceback:
```
The error occurs in `module:backward()` and, tracing back, it seems to imply that `gradInput` is of the wrong type. I'm using the stock callback function (mostly). Can anyone suggest how to track this down? I'd like to avoid spending a day digging into the internals where dp, dpnn, and nn intersect.
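For what it's worth, here is the kind of scratch helper I've been using to try to localize the mismatch (a sketch only: it assumes the model is an `nn.Sequential`-style container exposing `modules`, and the `checkTypes` name is just mine):

```lua
require 'nn'

-- Walk a container and print the tensor type and dimensionality of each
-- child module's output and gradInput, to spot where a type (or 1D-vs-2D
-- shape) mismatch creeps in before Linear:backward() blows up.
local function checkTypes(model)
   for i, m in ipairs(model.modules) do
      print(string.format('%d: %s  output=%s (%dD)  gradInput=%s',
         i, torch.type(m),
         torch.type(m.output), m.output:dim(),
         torch.type(m.gradInput)))
   end
end

-- usage, after the forward/backward pass that triggers the error:
-- checkTypes(model)
```

Running this right before the failing `backward()` call should show at which layer the `gradInput` type (or dimensionality) stops matching what `Linear` expects.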