These are chat archives for beniz/deepdetect

9th
Feb 2018
dgtlmoon
@dgtlmoon
Feb 09 2018 11:25
@beniz Yeah.. OK. I'm inching closer. There was a hardcoded path in imgsearch.py that was looking in another directory for the weights (which was not documented), /data1/jb/models/public/googlenet/. I've compiled and run under nvidia-docker and I get positive-looking log output, but it still fails when I run python ./imgsearch.py --index /images
INFO - 11:21:53 - Network initialization done.
[11:21:53] /opt/deepdetect/src/caffelib.cc:414: Using pre-trained weights from /data1/jb/models/public/googlenet//VGG_VOC0712_SSD_300x300_iter_60000.caffemodel

INFO - 11:21:53 - Ignoring source layer mbox_loss
[11:21:53] /opt/deepdetect/src/caffelib.cc:2157: Net total flops=31207751680 / total params=26064064

INFO - 11:21:53 - Name:                          Quadro P4000
INFO - 11:21:53 - Total global memory:           8508145664
INFO - 11:21:53 - Total shared memory per block: 49152
INFO - 11:21:53 - Clock rate:                    1480000
INFO - 11:21:53 - Total constant memory:         65536
INFO - 11:21:53 - Number of multiprocessors:     14
INFO - 11:21:53 - Kernel execution timeout:      No

[11:21:53] /opt/deepdetect/src/caffelib.cc:1310: exception while filling up network for prediction
[11:21:53] /opt/deepdetect/src/services.h:513: service imgserv prediction call failed
My /images has 512 images at 224x224 pixels, as per imgsearch.py: width = height = 224
My ./model/ correctly contains bvlc_googlenet.caffemodel
nvidia-smi looks very happy running dede
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     22134      C   ./dede                                       147MiB |
+-----------------------------------------------------------------------------+
(I'm running all of this inside the docker container; I use nvidia-docker run -v images etc. to start it, then docker exec -it (name) bash and run the python commands. I can see all of the images in /index just fine.)
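For context, the --index pass of the demo essentially walks the mounted directory, collects the image paths, and sends them to the server in batches. A minimal sketch of that plumbing follows; the helper names and batch size are my own assumptions, not the demo's actual code:

```python
import os

# Minimal sketch of the indexing pass driven by `imgsearch.py --index /images`:
# walk the mounted directory, collect image paths, and cut them into
# fixed-size batches for the prediction call. Helper names and the batch
# size of 64 are assumptions for illustration.
def list_images(rootdir, exts=('.jpg', '.jpeg', '.png')):
    """Recursively collect image paths under rootdir, sorted per directory."""
    paths = []
    for dirpath, _, files in os.walk(rootdir):
        for f in sorted(files):
            if f.lower().endswith(exts):
                paths.append(os.path.join(dirpath, f))
    return paths

def batches(paths, batch_size=64):
    """Yield consecutive slices of at most batch_size paths."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]
```

Each yielded batch would then be handed to the DeepDetect prediction call for feature extraction.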
dgtlmoon
@dgtlmoon
Feb 09 2018 11:34
it must be frustrating to answer these seemingly "basic" questions, I know :( Happy to help tidy up the python once I'm up and running
Emmanuel Benazera
@beniz
Feb 09 2018 11:43
The hardcoded path must be from a bad internal merge, we'll fix it.
dgtlmoon
@dgtlmoon
Feb 09 2018 11:44
@beniz no problem, I "worked around" that by mounting it into the docker container there anyway..
Emmanuel Benazera
@beniz
Feb 09 2018 11:44
Try on cpu first
dgtlmoon
@dgtlmoon
Feb 09 2018 11:45
@beniz same error on CPU, thought it might be a resource issue so I spun up an instance on paperspace.com and tried it there
trying to cover all the bases before annoying you :)
Emmanuel Benazera
@beniz
Feb 09 2018 11:46
Use 300x300
dgtlmoon
@dgtlmoon
Feb 09 2018 11:47
2 sec
[11:47:43] /opt/deepdetect/src/caffelib.cc:1310: exception while filling up network for prediction
[11:47:43] /opt/deepdetect/src/services.h:513: service imgserv prediction call failed
so, the images are all JPG, 300x300, 3-channel (RGB). I've also tried single-channel (grayscale) and get the same error
I did not change the 224 setting in the imgsearch.py
trying that now..
nope, changed that too; width = height = 300, same error
dgtlmoon
@dgtlmoon
Feb 09 2018 11:53
the x value in classif = dd.post_predict(sname, x, parameters_input, parameters_mllib, parameters_output) correctly contains the paths to my batch of images, all of which can be ls'ed and look fine from within docker
shutting down my GPU instance for now, since same error is on CPU, dont think I need it
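To make the call above concrete, here is a hedged sketch of how the batch prediction arguments in imgsearch.py are shaped, assuming the stock dd_client shipped in deepdetect's clients/python directory. The service name, extraction layer, parameter keys, and image paths are placeholders drawn from this conversation, not authoritative values:

```python
# Sketch of the batch prediction call used by imgsearch.py.
# Parameter dict keys mirror the deepdetect Python demos; treat them as
# an approximation and check against your checkout.

def build_predict_args(sname, batch):
    """Assemble the positional arguments for dd.post_predict()."""
    parameters_input = {}                        # images are read from disk paths
    parameters_mllib = {'gpu': True, 'extract_layer': 'pool5/7x7_s1'}
    parameters_output = {'binarized': False}
    return (sname, batch, parameters_input, parameters_mllib, parameters_output)

batch = ['/images/img_0001.jpg', '/images/img_0002.jpg']  # placeholder paths
args = build_predict_args('imgserv', batch)

# With a running dede server this would become:
#   from dd_client import DD
#   dd = DD('localhost')
#   classif = dd.post_predict(*args)
```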
dgtlmoon
@dgtlmoon
Feb 09 2018 11:58
maybe someone else could confirm it works?
Emmanuel Benazera
@beniz
Feb 09 2018 11:59
it all works, it's unit tested etc.
you have gone down your own path somehow. Follow the exact readme for googlenet + imgsearch.py and it'll work
it doesn't make sense to use VOC model with imgsearch.py
dgtlmoon
@dgtlmoon
Feb 09 2018 12:00
Ok, i'll take a break/lunch and try over again
readme for googlenet ? you mean the README.md at demo/imgsearch/ ?
Emmanuel Benazera
@beniz
Feb 09 2018 12:01
correct
dgtlmoon
@dgtlmoon
Feb 09 2018 12:02
could you tidy up the bad path in imgsearch.py? I don't see how it can be all unit tested when it has a broken path included with the python script
Emmanuel Benazera
@beniz
Feb 09 2018 12:02
it's done
dgtlmoon
@dgtlmoon
Feb 09 2018 12:02
you are fantastic :)
Emmanuel Benazera
@beniz
Feb 09 2018 12:02
the C++ backend is unit tested
the demos are for users to play with
dgtlmoon
@dgtlmoon
Feb 09 2018 12:03
yeah, I feel the backend is fine, but I'm missing some small step somewhere; will try again with a fresh mind
> it doesn't make sense to use VOC model with imgsearch.py
yeah... so maybe the README.md is confusing about that, can you confirm what files should be in model_repo = ?
should it be http://www.deepdetect.com/models/ggnet/bvlc_googlenet.caffemodel ?
(yes, as per the readme)
Emmanuel Benazera
@beniz
Feb 09 2018 12:07
the Python demo part of the readme has it all AFAIK
dgtlmoon
@dgtlmoon
Feb 09 2018 16:35
@beniz are you running a Patreon so I can throw a few pizza dollars your project's way?
dgtlmoon
@dgtlmoon
Feb 09 2018 17:03
(image attachment: image.png)
yeah so, there's still some detail missing:
[16:56:31] /opt/deepdetect/src/services.h:495: service imgserv mllib bad param: no deploy file in /opt/deepdetect/demo/imgsearch/model for initializing the net
In my /opt/deepdetect/demo/imgsearch/model I have only bvlc_googlenet.caffemodel, which is 100% correct according to the README.md
the README says that ./model/ should not exist, that you create it and add that one file ..
Emmanuel Benazera
@beniz
Feb 09 2018 17:06
uncomment the template in imgsearch.py, this is very likely a merge side effect
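For anyone hitting the same "no deploy file" error: the template is what lets dede generate the missing deploy.prototxt into the model repository at service-creation time. A hedged sketch of what the templated service creation looks like, with dict keys mirroring the deepdetect Python demos (check them against your checkout rather than taking them as authoritative):

```python
# Hedged sketch of service creation with the 'googlenet' Caffe template.
# Passing a templates path lets dede generate the missing deploy file
# into model_repo instead of requiring one to exist up front.
model_repo = '/opt/deepdetect/demo/imgsearch/model'
model = {'templates': '../templates/caffe/', 'repository': model_repo}
parameters_input = {'connector': 'image', 'width': 224, 'height': 224}
parameters_mllib = {'template': 'googlenet', 'nclasses': 1000}
parameters_output = {}

# With a running server this would become:
#   from dd_client import DD
#   dd = DD('localhost')
#   dd.put_service('imgserv', model, 'image similarity service', 'caffe',
#                  parameters_input, parameters_mllib, parameters_output)
```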
dgtlmoon
@dgtlmoon
Feb 09 2018 17:10
aah two sec
@beniz just to clarify: imgsearch_dd.py is for similarity and imgsearch.py is for searching?
dgtlmoon
@dgtlmoon
Feb 09 2018 17:17
@beniz yes! fantastic! that appears to be doing something!
Emmanuel Benazera
@beniz
Feb 09 2018 17:19
imgsearch_dd uses the built-in search from DD; it's more compact and faster for production setups
dgtlmoon
@dgtlmoon
Feb 09 2018 17:20
ahuh, i thought so, cool
dgtlmoon
@dgtlmoon
Feb 09 2018 19:50
getting there...
dgtlmoon
@dgtlmoon
Feb 09 2018 20:57

@beniz oook, so, I can index and search finally (yay!). Now when I search, I suspect the layer is too general (or "high level") and does not return the t-shirts with similar designs printed on them (it does, however, find the "searched t-shirt" from the bunch). What's your suggestion from here? I've tried using

#extract_layer = 'loss3/classifier'
extract_layer = 'pool5/7x7_s1'

which is a different level in the net, but the results seem to be the same (I re-indexed and re-searched with that different extract_layer). I might re-test with the clothing model, but really it's trying to find similar patterned/colored/etc. prints. I used to use ORB/Hamming-space descriptor matches for this.

I guess I'd probably need to retrain a new model based on my existing 200,000+ images, which can be grouped into about 30,000 categories...
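A note for readers comparing layers as above: the indexing pass and the search pass must use the same extract_layer, otherwise the stored and queried descriptors are not comparable. A small sketch of how that choice might be kept consistent; the dimensionalities come from the stock BVLC GoogLeNet definition, and the helper is my own illustration rather than demo code:

```python
# Sketch: switching the feature layer used for indexing and search.
# Both passes must use the same extract_layer or the descriptors stored
# in the index cannot be matched against the query descriptors.
CANDIDATE_LAYERS = {
    'loss3/classifier': 1000,  # class-score level, very "semantic"
    'pool5/7x7_s1': 1024,      # last pooling layer, more generic features
}

def mllib_params(layer, gpu=False):
    """Build the parameters_mllib dict for a given extraction layer."""
    if layer not in CANDIDATE_LAYERS:
        raise ValueError('unknown layer: %s' % layer)
    return {'gpu': gpu, 'extract_layer': layer}

params = mllib_params('pool5/7x7_s1')
```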
Emmanuel Benazera
@beniz
Feb 09 2018 21:18
Try using the t-shirts inner crops if you have them.
dgtlmoon
@dgtlmoon
Feb 09 2018 21:21
for the search reference image, or for indexing too?
I'm assuming this is going to be scale-invariant and rotation-invariant etc.