These are chat archives for beniz/deepdetect

24th
Mar 2017
Falak
@falaktheoptimist
Mar 24 2017 04:54
Hi! We've downloaded your pretrained Inception models for TensorFlow (one example is https://deepdetect.com/models/tf/inception_resnet_v2.pb). However, the output layer (softmax) has one extra class (1001 instead of 1000). I also checked the mapping against the ImageNet dataset, and the output corresponds to a 1-index-shifted version of the original ImageNet mapping. What does class 0 correspond to in these models?
Emmanuel Benazera
@beniz
Mar 24 2017 06:03
As indicated I think, this model is from Google. Class 0 is the background class, meaning none of the others.
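For example, a minimal sketch of realigning the 1001-way softmax with the usual 1000 ImageNet labels (the label list itself is assumed, not provided here):

```python
# Sketch: index 0 of the 1001-way softmax is the background class,
# so dropping it realigns indices 1:1 with the usual ImageNet labels.
import numpy as np

def decode_top1(probs_1001, imagenet_labels_1000):
    probs = np.asarray(probs_1001)[1:]  # drop background class at index 0
    top = int(np.argmax(probs))
    return imagenet_labels_1000[top], float(probs[top])
```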
Falak
@falaktheoptimist
Mar 24 2017 07:47
OK, thanks.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:30
@beniz Does DeepDetect support multiple models in a single 'predict' call? I.e. if there are two models A and B loaded up, could you use a single predict call on one or more URIs to hit both model A and model B?
Emmanuel Benazera
@beniz
Mar 24 2017 20:31
No, we have the idea of a /chain call to do things like this, but we haven't executed it yet. Still learning from usage, I guess.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:32
Ah, I see - it just occurred to me as I'd often be passing the same URI(s) to the service multiple times for different models. Thanks!
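For reference, right now I do something like this (a sketch; service names are made up):

```python
# Sketch of the current pattern: one /predict call per service,
# reusing the same URI list each time. Service names are hypothetical.
import requests

DD = "http://localhost:8080"
uris = ["http://example.com/img.jpg"]

for service in ("modelA", "modelB"):
    r = requests.post(DD + "/predict", json={
        "service": service,
        "parameters": {"output": {"best": 3}},
        "data": uris,
    })
    print(service, r.json())
```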
Emmanuel Benazera
@beniz
Mar 24 2017 20:34
That's a good point; we'd need to figure out what would be most efficient. If you have ideas on how best this could be made available API-wide, don't shy away.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:37
I guess offhand I imagined it as simply a list of service/parameters/output blocks, with the same data field for the URI(s). I hadn't really thought too much about it.
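Something like this, maybe (purely hypothetical, just to illustrate the shape):

```python
# Purely hypothetical payload shape: several service/parameters
# blocks sharing one data field. Nothing like this exists yet.
multi_predict = {
    "data": ["http://example.com/img.jpg"],
    "calls": [
        {"service": "modelA", "parameters": {"output": {"best": 3}}},
        {"service": "modelB", "parameters": {"output": {"best": 5}}},
    ],
}
```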
Marc Reichman
@marcreichman
Mar 24 2017 20:38
Hi @beniz, one option that systems like Elasticsearch use is multi-index on the same URL, which would translate to multi-service in DeepDetect.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
e.g. GET /twitter/tweet,user/_search?q=user:kimchy
then, you could do the work to resolve the data URLs or paths once, and just apply the services in sequence.
Emmanuel Benazera
@beniz
Mar 24 2017 20:38
OK, both good calls, though it should stay simple API-wise.
You guys fight over which is best, resource or parameter ^^
Marc Reichman
@marcreichman
Mar 24 2017 20:40
From where you are now with the current /predict call, simply turning the service field into a comma-separated list would be the most straightforward.
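e.g. (hypothetical; nothing like this exists in the API yet):

```python
# Hypothetical: comma-separated services in the existing /predict body.
csv_predict = {
    "service": "modelA,modelB",  # resolve data once, apply services in sequence
    "parameters": {"output": {"best": 3}},
    "data": ["http://example.com/img.jpg"],
}
```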
Emmanuel Benazera
@beniz
Mar 24 2017 20:40
Parallelization is also a consideration: multi-GPU, etc.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:41
Isn't parallelization already a thing for handling multiple calls to the service at once? I know right now it's actually blocking, but... :)
Emmanuel Benazera
@beniz
Mar 24 2017 20:41
@marcreichman agreed. Let me think about it.
@cchadowitz-pf it's blocking per service.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:42
I thought it was per backend-mllib?
Emmanuel Benazera
@beniz
Mar 24 2017 20:43
The lock is per service, not per backend; at least it should be :)
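So calls to different services can already proceed in parallel from the client side, e.g. (a sketch; service names are hypothetical):

```python
# Sketch: the lock is per service, so one URI list can be fanned out
# across different services concurrently. Same-service calls still
# serialize server-side. Service names are hypothetical.
from concurrent.futures import ThreadPoolExecutor
import requests

DD = "http://localhost:8080"
uris = ["http://example.com/img.jpg"]

def predict(service):
    return requests.post(DD + "/predict",
                         json={"service": service, "data": uris}).json()

with ThreadPoolExecutor() as pool:
    results = list(pool.map(predict, ["modelA", "modelB"]))
```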
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:45
Huh, interesting. I'll have to go back and look at the logs again. Occasionally I saw it (re)loading the network into memory over the course of processing numerous requests, and I couldn't quite figure out why...
Emmanuel Benazera
@beniz
Mar 24 2017 20:45
A single Caffe service cannot accommodate multiple parallel calls; Caffe on different nets can.
It would reload the model after an error; look it up.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:47
Ahah, right, forgot about that distinction. It was numerous calls to the same set of services, so inevitably the same service would get multiple calls at some point. Would it ever reload a model due to memory usage? I doubt that's the reason in this case, as I had plenty of memory available; just curious if/how that'd affect it. I know GPU is a different situation.
Emmanuel Benazera
@beniz
Mar 24 2017 20:49
The batch size could exhaust the memory.
No error in the log?
BTW, the recent version reports more thorough errors from the backend; that might help.
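If batch size is the suspect, one simple guard is to cap the number of URIs per /predict call (a sketch; the chunk size is arbitrary):

```python
# Sketch: bound the effective batch size by chunking the URI list,
# issuing one /predict call per chunk. The chunk size 16 is arbitrary.
import requests

DD = "http://localhost:8080"

def predict_in_batches(service, uris, n=16):
    results = []
    for i in range(0, len(uris), n):
        r = requests.post(DD + "/predict",
                          json={"service": service, "data": uris[i:i + n]})
        results.append(r.json())
    return results
```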
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:49
This was a while back, last I looked, but no, I never saw errors; just occasionally saw the output of loading the network definition etc. into memory scrolling by.
How recent?
Emmanuel Benazera
@beniz
Mar 24 2017 20:51
Mmmh, Monday or Tuesday :) Not sure how it would reload without an error; will look it up.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:52
Oh haha, definitely not using a version that recent :) Again, I don't have specific details about when/why/how it's reloading in those cases, but if I see it happen again I'll note it here.
Emmanuel Benazera
@beniz
Mar 24 2017 20:54
OK. Don't update when you don't need to. We are not versioning yet, on purpose, but later on we will. It is safe to allow a few weeks for bugs to surface.
cchadowitz-pf
@cchadowitz-pf
Mar 24 2017 20:56
:thumbsup: I think we're pre-TF v1 still, so we will see. At some point once we've organized some of our things, we may be able to contribute some Dockerfiles as well. We're currently on Ubuntu 16.04 with and without CUDA support, with TF support.