These are chat archives for beniz/deepdetect

7th
Apr 2017
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 17:15
Again, I have it working to a point, it's just not giving quite the same labels/confidences as shown in that example, so I'm not sure if it's still missing something :)
Emmanuel Benazera
@beniz
Apr 07 2017 17:17
I couldn't find the time to look at your stuff today. I had built an alternate pb the other day, and I still need to test it.
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 17:18
No problem! I'm just curious what differences, if any, they have. If you have some time at some point to give me a screenshot or something of the input tensors to the inception v3 subgraph like I had above, I'd be interested. :)
Emmanuel Benazera
@beniz
Apr 07 2017 17:18
do you need this model with a sigmoid urgently ?
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 17:20
I wouldn't say urgently, no. I'm evaluating to see if this is something that would be useful for us long term, and the way that it's currently normalized makes it difficult to use in conjunction with other models as well, which is why I was diving into the sigmoid stuff. I actually have the sigmoid bit worked out, it's just the input tensors that I'm not 100% sure about now.
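To illustrate the normalization issue being discussed (a minimal pure-Python sketch, not DD or TF code): a softmax forces all class scores into a distribution that sums to 1, so confidences are relative to the other classes in the same model, while independent per-class sigmoids give absolute confidences that are easier to compare or combine across models.

```python
import math

def softmax(scores):
    # subtract the max for numerical stability
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(scores):
    # each label is scored independently; outputs need not sum to 1
    return [1.0 / (1.0 + math.exp(-s)) for s in scores]

logits = [2.0, 1.0, -1.0]
print(softmax(logits))  # mutually exclusive: probabilities sum to 1
print(sigmoid(logits))  # multi-label: each confidence stands on its own
```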
Emmanuel Benazera
@beniz
Apr 07 2017 17:24
OK
Emmanuel Benazera
@beniz
Apr 07 2017 18:02
getting the same labels all the time can be caused by a problem in the scaling of the inputs
I don't have the inception_v3 graph in any form other than the inception_v3.py file that comes with TF or the TF models repo
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 18:27
I was using the inception_v3 graph that comes with tf.contrib.slim, I think from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/nets/inception_v3.py
and yeah, once I took out the scaling, cropping, etc. on the inputs, I get somewhat appropriate labels, but they don't quite match the ones referenced in that comment, or quite match what the current model I grabbed from deepdetect.com produced.
Emmanuel Benazera
@beniz
Apr 07 2017 18:30
it must be in the way you are freezing the graph then
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 18:43
yeah - in my gist I laid out how I was doing it. One method involved adding some preprocessing tensors (scaling, cropping, etc.). The second method was simply adding the InputImage placeholder tensor at the beginning and the multi-predictions sigmoid layer at the end. It's possible I should leave out the scaling/cropping, but still include the product and difference tensors, since I think they may be taking care of subtracting out a mean or something...
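For context, the "product and difference" tensors are most likely the standard slim inception preprocessing, which maps 8-bit pixel values into the [-1, 1] range the network expects. A rough pure-Python sketch of that transform (illustrative only, not the actual graph ops):

```python
def inception_preprocess(pixels):
    """Map 8-bit pixel values in [0, 255] to the [-1, 1] range that
    slim's inception_v3 expects: scale to [0, 1], subtract the 0.5
    'mean' (the difference), then multiply by 2 (the product)."""
    return [((p / 255.0) - 0.5) * 2.0 for p in pixels]

print(inception_preprocess([0, 127.5, 255]))  # → [-1.0, 0.0, 1.0]
```

If DD already applies this preprocessing on the input side, baking the same ops into the frozen graph would apply it twice, which would explain skewed labels.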
Emmanuel Benazera
@beniz
Apr 07 2017 18:57
so, I just looked at your script. First, is the issue that the probabilities are not exactly the same as reported in the issue thread?
Second, the preproc is already done by DD.
Third, the inception_v3 function builds a graph that ends with a softmax.
you can change that in the inception_v3 constructor, but adding a sigmoid there didn't work for me
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 18:59
you know, looking at it again, I must have been cross-eyed at the end of the day yesterday. I am getting the same labels as the comment, with close probabilities. I think I was comparing my output to the list of labels in the 'ideally' block, not the true output block in that comment.
Emmanuel Benazera
@beniz
Apr 07 2017 18:59
possibly you don't have to remove the softmax from the end points... and then you are getting both
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 18:59
good to know DD does the preproc, that explains that bit for me
Emmanuel Benazera
@beniz
Apr 07 2017 19:00
haha, I've made that mistake too. No, your labels look within error bounds
you can actually easily check that of course by running the original python code on a series of images
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 19:00
so it seems like I managed to sort it out then. I'm going to load the graph into TensorBoard once more to see if my code actually replaced the softmax or simply added the sigmoid to the end
which python code?
Emmanuel Benazera
@beniz
Apr 07 2017 19:00
tf code
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 19:01
the classify.py from the openimages repo?
Emmanuel Benazera
@beniz
Apr 07 2017 19:01
yes
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 19:01
ah yeah
Emmanuel Benazera
@beniz
Apr 07 2017 19:02
I think you did it. If you'd like / can share the updated pb file in an issue, I'll replace the previous model with it.
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 19:04
sure, i can do that.
for reference, here is the view of the end of the graph
[screenshot: TensorBoard view of the final graph nodes]
so it looks like the sigmoid replaced the softmax
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 19:13
Can't include the pb in the issue as it's > 10 MB
136.8 MB uncompressed, binary .pb
If you want, I can give you a url to grab it from privately when you're able to grab it.
Emmanuel Benazera
@beniz
Apr 07 2017 19:20
sure, thanks man, and good job!
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 19:20
:) thanks for the help!
let me know if you still want me to create an issue for it with some details
Emmanuel Benazera
@beniz
Apr 07 2017 19:23
oh yes, sure - the more information for everyone, the better. If I had something better than a hacky script I would have added it to the repo
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 19:23
haha well it can be different for each model, as shown with this one....
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 20:48
if I can bother you again about another thing (not urgent, don't worry) - is there any chance that the caffe lib is trying to access the GPU even when "gpu": true is not specified in the PUT for service creation? We've been working around the CUDA shared StreamExecutor context issue with TensorFlow by (we thought) restricting caffe models to be CPU-only and letting the TensorFlow lib use the GPU on its own. But I'm hitting the issue again now, and I wonder if the caffe lib is still trying to use the GPU regardless. I can't use the USE_CPU_ONLY flag at compilation because I'd still like TF to use the GPU and that flag affects the entire project, but I'd like to restrict caffe to CPU only (since TF seems to use the GPU if compiled with CUDA and a GPU is available, regardless). thanks again man :)
Emmanuel Benazera
@beniz
Apr 07 2017 20:51
you'll have to slightly hack the makefiles, I think
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 21:09
so leaving out "gpu": true, or even setting it to false, won't force caffe not to use the GPU?
Emmanuel Benazera
@beniz
Apr 07 2017 21:15
I think it's possible caffe allocates a minimal amount on the GPU if built with GPU support. Caffe internals only have hardcoded define flags; they are controlled at build time.
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 21:16
Ah I see - from what I can see at https://github.com/beniz/deepdetect/blob/master/CMakeLists.txt#L120 it looks like if I change that line from if (CUDA_FOUND) to if (CUDA_FOUND AND NOT USE_CAFFE_CPU_ONLY) and enable that new var (USE_CAFFE_CPU_ONLY) it should do the trick....
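A sketch of what that guard might look like in CMakeLists.txt (USE_CAFFE_CPU_ONLY is the hypothetical option name proposed above, not an existing flag):

```cmake
# Hypothetical option: build the Caffe backend without GPU support even
# when CUDA is found, leaving the GPU to TensorFlow.
option(USE_CAFFE_CPU_ONLY "Build the Caffe backend CPU-only" OFF)

if (CUDA_FOUND AND NOT USE_CAFFE_CPU_ONLY)
  # ... existing GPU-enabled Caffe configuration ...
else()
  # ... CPU-only Caffe configuration ...
endif()
```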
Emmanuel Benazera
@beniz
Apr 07 2017 21:30
nothing will stop you :) yeah, I think something like more granular control, with a tf_CPU_only and a caffe_cpu_only, would do the trick. And one for xgboost as well. use_CPU_only could then rule them all
cchadowitz-pf
@cchadowitz-pf
Apr 07 2017 21:38
I'll try out a caffe_cpu_only first and see how that goes :) Currently that will fulfill my needs, as I'm not using xgboost haha. thanks!