Sanuj Sharma
@sanuj
Thanks @elbamos :smile: , I'll try this tomorrow. I hope it works.
elbamos
@elbamos
@sanuj it definitely works. There are examples included with dp specifically to show how to handle when you have a dataset that's too large to fit in memory.
Jacky Yang
@anguoyang

Hi, all,
Could anyone kindly help me with this issue? Thanks:

We have lots of photos/images, say 10 million or more. They are original photos/images from our customers that need to be protected (to prevent plagiarism); we call this dataset A.
We also gathered lots of images via a web crawler, from bloggers, websites, forums, etc. Some of these images are simply copied from dataset A, and some have an additional watermark added; we call this dataset B. It currently contains about 300,000 images, but it grows day by day.
We will take one image or several images from dataset A (we call this dataset C), and we want to search dataset B for images similar to those in C and list all similar images.

We want to use deep learning for similarity search, but most of the images in dataset A have no tags. Could we train a specific model on these images, so that we get more accurate results when searching for similar images?

Thanks a lot for your patience to read this long requirement, and have a nice day!
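A common baseline for this kind of search is to embed every image with a pretrained CNN (no tags needed) and compare embeddings by cosine similarity. A minimal Torch sketch; all names here (`net`, `embed`, `topMatches`, the index structure) are illustrative, not from the chat:

```lua
require 'nn'

-- Illustrative only: `net` is assumed to be a pretrained CNN with its final
-- classification layer removed, so forward() returns a feature vector.
local function embed(net, img)
   local f = net:forward(img):clone():view(-1)
   return f:div(f:norm() + 1e-8)  -- L2-normalize so dot product = cosine similarity
end

-- Index dataset B once (id -> embedded feature), then score a query image
-- from dataset C against every entry and return the k best matches.
local function topMatches(net, query, index, k)
   local q = embed(net, query)
   local scores = {}
   for id, feat in pairs(index) do
      table.insert(scores, {id = id, score = q:dot(feat)})
   end
   table.sort(scores, function(a, b) return a.score > b.score end)
   return {unpack(scores, 1, math.min(k, #scores))}
end
```

At 300,000+ images in dataset B an exact linear scan like this may still be acceptable, but an approximate nearest-neighbor index would scale better as B grows.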

elbamos
@elbamos
@anguoyang that's very similar to work I've done - you can PM me
Sanuj Sharma
@sanuj
Hey @elbamos, I was trying to do transfer learning with dp. I want a different learning rate for each layer in my CNN. How can I do that? Here is the script that I'm using.
elbamos
@elbamos
@sanuj In dp, when you create your dp.Optimizer object, you define a function called callback. The callback function is executed after every batch on the training set, and it performs the actual parameter updates. Your script uses the simple callback from one of the dp examples. If you want an update rule other than simple SGD - like adding momentum, norms, cutoffs, etc. - you do it in dp by modifying the callback function.
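The callback pattern described above could be bent to per-layer learning rates roughly like this. This is a sketch, not dp's documented API: `layerRates`, the module traversal, and the assumption that the model's parameterized layers are plain nn leaf modules are all mine:

```lua
-- One rate per parameterized layer, in the order the layers appear (illustrative values).
local layerRates = {0.0001, 0.0001, 0.001, 0.01}

local train = dp.Optimizer{
   loss = nn.ModuleCriterion(nn.ClassNLLCriterion(), nil, nn.Convert()),
   callback = function(model, report)
      local i = 0
      for _, m in ipairs(model:listModules()) do
         local params, gradParams = m:parameters()
         -- count only leaf modules that actually hold parameters
         if params and #params > 0 and not m.modules then
            i = i + 1
            local lr = layerRates[i] or layerRates[#layerRates]
            for j, p in ipairs(params) do
               p:add(-lr, gradParams[j])  -- plain SGD step with this layer's rate
            end
         end
      end
      model:zeroGradParameters()
   end,
   sampler = dp.ShuffleSampler{batch_size = 32}
}
```

The idea is simply that, since the callback owns the update step, nothing stops it from walking the layers and applying a different rate to each, instead of calling a single `model:updateParameters(lr)`.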
@sanuj Can we assume that you were able to resolve the large-data issue you were having a few weeks ago?
Sanuj Sharma
@sanuj
@elbamos thanks for your reply. I had used ImageSource, which allows reading each batch from the hard drive, but it was slow since I don't have an SSD. I couldn't make it read data once per epoch instead of once per batch, but I don't need it anymore, so I didn't try further.
Jay Devassy
@jaydevassy
I have a trained convnet for object identification. I need to "run" it on a larger image to locate the target object in that larger image. How do I leverage the built-in convolution operation/module in Torch to do this? I'm not worried about scale or rotation invariance at this point. Basically I'm trying to avoid a sliding-window approach over the larger image, which would be inefficient (most of the computation would be thrown away at the next window position). Any ideas or pointers? Thx
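One standard way to avoid the sliding window is to make the trained net fully convolutional: replace each final nn.Linear with an equivalent nn.SpatialConvolution carrying the same weights, then run the whole large image through once to get a spatial map of scores. A sketch with illustrative layer sizes (the 512×7×7 geometry is an assumption, not from the chat):

```lua
require 'nn'

-- Suppose the classifier was nn.Linear(512*7*7, 4096) applied after an
-- nn.View that flattened a 512x7x7 feature map. The equivalent convolution:
local linear = nn.Linear(512 * 7 * 7, 4096)        -- stands in for the trained layer
local conv = nn.SpatialConvolution(512, 4096, 7, 7)

-- SpatialConvolution weights are (nOutput, nInput, kH, kW); the Linear weight
-- matrix (4096 x 512*7*7) reshapes onto that layout exactly.
conv.weight:copy(linear.weight:view(4096, 512, 7, 7))
conv.bias:copy(linear.bias)
```

After swapping every Linear this way, a larger input yields a grid of classifier outputs, one per receptive-field position, with the overlapping computation shared instead of recomputed per window.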
elbamos
@elbamos
@jaydevassy train a conv net as a pixel classifier or build an attention model
Lior Uzan
@ghostcow
Hey guys, anyone know why this was merged into dp?
nicholas-leonard/dp#197
The commit message says it's to circumvent the Lua 2GB address-space limit, but it was a Tensor before as well, and Tensor memory isn't stored on the Lua heap, so there shouldn't be any problem using >2GB tensors at all. So what's the idea here?
Jin Hwa Kim
@jnhwkim
Somewhat harsh question: how does torchnet compare with dp? Straightforward implementation, but young for wild cases.
Soumith Chintala
@soumith
@jnhwkim both are very similar
Nicholas Léonard
@nicholas-leonard
@ghostcow I just merged it because Float is indeed more efficient than Double. But you are right that it does not circumvent the 2GB limit. For that you should install Torch with plain Lua instead of LuaJIT (see the torch.ch getting-started guide).
I would recommend taking a look at torchnet if you like dp's style. For myself, I now prefer to do without either, and just write my own training scripts (more flexible in the end).
elbamos
@elbamos
@nicholas-leonard Maybe dp2 could wrap/extend torchnet? I continue to find considerable benefit in the design-pattern framework implemented by dp. But I'm wondering if your interests have simply moved on at this point?
Nicholas Léonard
@nicholas-leonard
Yes in a way they have progressed. But you are right that torchnet could definitely benefit from some extensions
Remi
@Cadene

@nicholas-leonard

I would recommend taking a look at torchnet if you like dp's style. For myself, I now prefer to do without either, and just write my own training scripts (more flexible in the end).

Could you please be more explicit and include examples to illustrate your statement?
It must be easier to train non-standard models such as adversarial networks on a dedicated architecture, but that takes a lot of time to code and test.
Would it be possible to add a plugin to dp / torchnet for doing that?

biterbilen
@biterbilen
Any experience with anomaly detection (one-class classifiers)?
Abdullah Jamal
@abdullahjamal
Hi guys, can we use dp.GCN or dp.ZCA for any other dataset? In the examples, it looks like ZCA or GCN can only be used if the dataset comes from the dp library.
AnkurRaj
@AnkurRaj
autoencoder source code