These are chat archives for beniz/deepdetect

21st
Sep 2016
dyngts
@dyngts
Sep 21 2016 04:22
Hi all, I'm curious about DD's SLA (service level agreement) in terms of the likelihood of service availability and the time to predict a single image or a bulk of images. Can anyone explain it? Thank you!
dyngts
@dyngts
Sep 21 2016 05:41
Another question, does DD support prediction for bulk images? Let's say in one API call we request predictions for 10 images and get the prediction label for each image at once. Thank you!
Emmanuel Benazera
@beniz
Sep 21 2016 06:51
yes to your second question
SLA is for customers of course. Time to predict images is worked out with the customer based on requirements etc... as it depends on the size of the neural net. @dyngts
dyngts
@dyngts
Sep 21 2016 07:06
@beniz : do you have any example of how DD handles multiple images in a single forward pass?
and how does Caffe handle prediction for multiple images? thank you
Emmanuel Benazera
@beniz
Sep 21 2016 07:09
Sure, fill up the data array with several image URIs.
@alkamid hi, if you're still interested, the PR for configurable multiple GPUs via API is #192
Emmanuel Benazera
@beniz
Sep 21 2016 07:14
@dyngts and the bottom of http://deepdetect.com/tutorials/es-image-classifier/ has a two images call if you really need to see it written.
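A minimal sketch of such a multi-image /predict payload, in Python for readability. The service name and image URLs here are placeholders, not taken from the chat; with a running DD server you would POST this JSON to the /predict endpoint.

```python
import json

# Hypothetical service name and image URIs -- placeholders for illustration.
payload = {
    "service": "imageserv",
    "parameters": {
        "output": {"best": 3},
    },
    # Bulk prediction: every URI in the "data" array is classified
    # in the same /predict call, and the response carries one set of
    # predictions per URI.
    "data": [
        "http://example.com/img1.jpg",
        "http://example.com/img2.jpg",
    ],
}

# With the server running you would send it, e.g.:
#   import requests
#   r = requests.post("http://localhost:8080/predict", json=payload)
print(json.dumps(payload))
```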
dyngts
@dyngts
Sep 21 2016 07:39

@beniz: so it means we just put it like this?:

net.blobs["data"].data[...] = list_of_images # array of transformed images

How does the batch size in deploy.prototxt affect this bulk prediction? Do we set the batch size based on the number of images predicted in one forward pass? Let's say we predict 50 images at once, how do we define the batch size? Thank you again
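For the pure-Caffe side of the question, here is a numpy-only sketch of how the batch would be assembled; the random arrays stand in for transformer-preprocessed images, and the pycaffe calls are shown only as comments since they need a loaded net. This is an assumption about typical pycaffe usage, not DD's mechanism.

```python
import numpy as np

# Stand-ins for 50 preprocessed images, each shaped (channels, height, width)
# as a caffe.io.Transformer would produce them.  Sizes are illustrative.
list_of_images = [np.random.rand(3, 224, 224).astype(np.float32)
                  for _ in range(50)]

# Stack into one (N, C, H, W) batch; reshaping the input blob to this
# shape overrides the batch size declared in deploy.prototxt.
batch = np.stack(list_of_images)          # shape (50, 3, 224, 224)

# With a loaded net you would then do (commented out, needs pycaffe):
#   net.blobs["data"].reshape(*batch.shape)  # override prototxt batch size
#   net.blobs["data"].data[...] = batch
#   out = net.forward()                      # one forward for all 50 images
print(batch.shape)
```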

Emmanuel Benazera
@beniz
Sep 21 2016 07:44
you seem to be talking about Caffe... DD builds on a slightly customized Caffe; we only support usage through DD API calls.
dyngts
@dyngts
Sep 21 2016 07:46
@beniz : Yes, I'm talking about pure Caffe. Ok noted, sorry for the irrelevant topic. But do you know if the original Caffe supports bulk images?
and how does DD's bulk upload mechanism work? does it iterate per image for prediction, or apply multiple images in a single forward pass?
Emmanuel Benazera
@beniz
Sep 21 2016 07:56
it's all bulk of course. You'd need to look at DD code.
dyngts
@dyngts
Sep 21 2016 08:33
@beniz : if you don't mind, which file contains the bulk prediction function? thanks
Emmanuel Benazera
@beniz
Sep 21 2016 08:36
caffelib.cc and the img*h files. My advice: if you are using pure Caffe, ask on the Caffe discussion group and gitter; this sounds like a common question, you are just in the wrong channel for it.
dyngts
@dyngts
Sep 21 2016 08:36
ok, thanks @beniz for the great response
Kumar Shubham
@kyrs
Sep 21 2016 18:05
@beniz do you have any future plans of integrating Torch with DD ??
Emmanuel Benazera
@beniz
Sep 21 2016 18:15
I believe it's been done by a customer and I'm talking to them. No more than that for now.
I've figured out the Caffe LSTM this morning btw, so we'll move forward
Kumar Shubham
@kyrs
Sep 21 2016 18:20
what changes did you make for the integration of LSTM in DD ??
Emmanuel Benazera
@beniz
Sep 21 2016 18:28
I haven't added code yet, just put it all on paper. The comments I've linked to in the tickets explain it all, though it is a bit blurry at first :) Very good implementation in Caffe btw.
Kumar Shubham
@kyrs
Sep 21 2016 18:34
Great!! In the meantime, shall I start looking into seq2seq learning? If we are able to integrate LSTM, then integrating seq2seq would be easy; the only thing needed is a proper .prototxt file
Emmanuel Benazera
@beniz
Sep 21 2016 18:43
the path to LSTM is not yet complete though :) Read the explanation by Jeff Donahue if you have time; it's useful for clearly understanding how it works. You can ask me when things are not clear.
Kumar Shubham
@kyrs
Sep 21 2016 18:47
ok.. let's complete LSTM first :P
Emmanuel Benazera
@beniz
Sep 21 2016 18:51
it's not too difficult, the inputs need to be TxNx... See if you can understand it when you have a moment, even if I add some code, you'll find it useful I'm sure.
Kumar Shubham
@kyrs
Sep 21 2016 19:06
as per my understanding, this TxNx... basically represents a way of streaming long and short sentences without converting the overall sentence into a fixed-length vector using padding, as is done in Keras. Correct me if I am wrong.
still, I am sceptical about the values of T and N for very long sentences, for example a paragraph in an article.
Emmanuel Benazera
@beniz
Sep 21 2016 20:12
the trick is you can even use T=1 :)
the only constraint seems to be that for exact computation sentences need to remain within a single batch
now that you got this so quickly, you can see that the only modifications on the DD side are the inputs in Datum objects, see caffeinput*
Also there are two inputs now, because of the need for Delta.
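A numpy sketch of what the T x N layout and the second (Delta/continuation) input discussed above might look like; the vocabulary, sizes, and token values are made up for illustration, not taken from DD or the linked tickets.

```python
import numpy as np

# T = timesteps per chunk, N = independent streams (sentences) in the batch.
T, N = 4, 2
vocab = {"<pad>": 0, "the": 1, "cat": 2, "sat": 3, "dog": 4, "ran": 5}

# Two sentences laid out column-wise: data[t, n] is the token of stream n
# at timestep t.
data = np.array([[1, 1],      # the   the
                 [2, 4],      # cat   dog
                 [3, 5],      # sat   ran
                 [0, 0]],     # pad   pad
                dtype=np.float32)

# The second input (continuation markers): 0 starts a fresh sequence,
# 1 continues the previous timestep -- this is how the recurrent layer
# knows where each sentence begins within the batch.
delta = np.array([[0, 0],
                  [1, 1],
                  [1, 1],
                  [1, 1]], dtype=np.float32)

# Both inputs must share the same T x N leading dimensions.
print(data.shape, delta.shape)
```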
need to go, will be back tomorrow.