These are chat archives for beniz/deepdetect

30th May 2017
roysG
@roysG
May 30 2017 04:14

Hi,
I created a new service:

curl -X PUT "http://localhost:8080/services/getAge" -d '{
"mllib":"caffe",
"description":"clothes classification",
"type":"supervised",
"parameters":{
"input":{
"connector":"image"
},
"mllib":{
"nclasses":101
}
},
"model":{
"repository":"/home/roy/models/ages"
}
}'

and made a POST to /predict with GPU true:

curl -X POST "http://localhost:8080/predict" -d '{
"service":"getAge",
"parameters":{
"output":{
"best":5
}
},
"data":["http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg"]
}'

The result was OK:

{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","service":"getAge","time":2028.0},"body":{"predictions":[{"uri":"http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg","classes":[{"prob":0.08206301182508469,"cat":"27"},{"prob":0.07462707161903382,"cat":"25"},{"prob":0.07206585258245468,"cat":"33"},{"prob":0.05732019990682602,"cat":"37"},{"prob":0.055148329585790637,"last":true,"cat":"29"}]}]}}

But I really do not know if the GPU is working, because I do not feel any difference in the call speed.

Is there any way to verify, with some flag, that the GPU is working?
Sorry, this is the post with the code:

curl -X POST "http://localhost:8080/predict" -d '{
  "service":"getAge",
  "parameters":{
    "output":{
      "best":5
    }
  },
  "data":["http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg"]
}'
roysG
@roysG
May 30 2017 04:20

*I also tried adding the flag (gpu:true) to the POST:

curl -X POST "http://localhost:8080/predict" -d '{
"service":"getAge",
"parameters":{
"output":{
"best":5
},"mllib":{"gpu":true}
},
"data":["http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg"]
}'

roysG
@roysG
May 30 2017 04:27
*Sorry for the "spamming", my keyboard is not working well.
Emmanuel Benazera
@beniz
May 30 2017 08:17
Try gpu:false and see whether the time in the response changes. Also, the server logs tell you which GPU is in use.
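A minimal sketch of that comparison, assuming the same getAge service as above; the "time" field in the head of each response can then be compared between the two calls:

curl -X POST "http://localhost:8080/predict" -d '{
  "service":"getAge",
  "parameters":{
    "output":{"best":5},
    "mllib":{"gpu":false}
  },
  "data":["http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg"]
}'

curl -X POST "http://localhost:8080/predict" -d '{
  "service":"getAge",
  "parameters":{
    "output":{"best":5},
    "mllib":{"gpu":true}
  },
  "data":["http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg"]
}'

If the GPU is actually used, the gpu:true call should typically report a smaller time; repeating each call once helps, so that one-time initialization and image download costs do not skew the comparison.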
rdoume
@rdoume
May 30 2017 08:22
@beniz Hi, regarding this text tutorial, I might have an idea why. The server is looking for caffe models in models/n20, but there are none since it is a fresh install. That is why it's not working on my side.
Emmanuel Benazera
@beniz
May 30 2017 08:28
The model is created by the training call
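A minimal sketch of such a training call for the n20 service, assuming the newsgroups data sits under /path/to/news20 (a hypothetical location); the parameter values here are only illustrative, the exact call is given in the DeepDetect text tutorial:

curl -X POST "http://10.100.2.1:34445/train" -d '{
  "service":"n20",
  "async":true,
  "parameters":{
    "mllib":{
      "solver":{"iterations":2000,"test_interval":200}
    },
    "output":{"measure":["mcll","f1"]}
  },
  "data":["/path/to/news20"]
}'

Once training completes, the resulting caffe model files are written into the models/n20 repository.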
rdoume
@rdoume
May 30 2017 08:38

Then I'm quite puzzled. Here's what I do:

curl -X PUT "http://10.100.2.1:34445/services/n20" -d '{
  "mllib":"caffe",
  "description":"newsgroup classification service",
  "type":"supervised",
  "parameters":{
    "input":{
      "connector":"txt"
    },
    "mllib":{
      "template":"mlp",
      "nclasses":20,
      "layers":[200,200],
      "activation":"relu"
    }
  },
  "model":{
    "templates":"../templates/caffe/",
    "repository":"models/n20"
  }
}'

And I receive this:

{"status":{"code":400,"msg":"BadRequest","dd_code":1006,"dd_msg":"Service Bad Request Error"}}

On the server side, here's what I got:

E0530 08:24:13.457593 81 caffemodel.cc:69] error reading or listing caffe models in repository ../models/n20/

ERROR - 08:24:13 - Tue May 30 08:24:13 2017 UTC - 10.242.6.18 "PUT /services/n20" 400 0

Emmanuel Benazera
@beniz
May 30 2017 11:06
Use an absolute path to your models/n20 repository.
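For example, with a hypothetical install under /home/user/deepdetect and the repository directory created beforehand (mkdir -p /home/user/deepdetect/models/n20), the service creation would become:

curl -X PUT "http://10.100.2.1:34445/services/n20" -d '{
  "mllib":"caffe",
  "description":"newsgroup classification service",
  "type":"supervised",
  "parameters":{
    "input":{"connector":"txt"},
    "mllib":{"template":"mlp","nclasses":20,"layers":[200,200],"activation":"relu"}
  },
  "model":{
    "templates":"../templates/caffe/",
    "repository":"/home/user/deepdetect/models/n20"
  }
}'

Using an absolute path for "templates" (the templates/caffe directory of the DeepDetect source tree) may avoid the same kind of path resolution issue.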