These are chat archives for beniz/deepdetect

Apr 2016
Jacky Yang
Apr 08 2016 07:56
hi, @beniz , I found a critical problem and opened an issue
Emmanuel Benazera
Apr 08 2016 09:46
so @anguoyang have you looked at the memory usage? It is very likely that your machine does not have enough RAM for all the models you are using in parallel
Jacky Yang
Apr 08 2016 11:28
hi, @beniz , yes, it is out of memory. I found that when the server receives a POST /predict request, it loads all the related info into memory, and it does not free that memory automatically after sending out the prediction result
for example, I send a gender prediction request to the server, get the result, and then send an age request; the memory for the gender model is still there and will not be freed automatically. So if I send all types of prediction requests, the memory for all the models stays allocated, and then it runs out of memory.
is it possible to free the memory after prediction?
Emmanuel Benazera
Apr 08 2016 11:42
you'd need to DELETE the service
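A minimal sketch of that DELETE call, assuming a DeepDetect server listening on localhost:8080 and a service named "gender" (the host, port, and service name here are illustrative, not from the chat):

```python
import urllib.request

def delete_service(name, host="localhost", port=8080):
    """Build a DELETE /services/<name> request to drop a service and
    free its model memory. Returns the request URL for inspection."""
    url = f"http://{host}:{port}/services/{name}"
    req = urllib.request.Request(url, method="DELETE")
    # urllib.request.urlopen(req)  # uncomment when a server is actually running
    return url

print(delete_service("gender"))  # → http://localhost:8080/services/gender
```

After the DELETE, the service's model is unloaded; you would re-create it with a PUT on the same /services/<name> route before predicting with it again.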
Apr 08 2016 18:55
Hi @beniz , I managed to get that xgboost model tested, but it performed worse than the mlp ones
you can check the data in the usual place if you are curious about it
and for the python script, I'm going to put the options in a config file, I think that would be the best option
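One way that config-file approach could look, as a sketch (the section and option names below are made up for illustration; the chat does not say which options the script takes):

```python
import configparser

# Hypothetical options that the script currently takes on the command line.
SAMPLE = """\
[xgboost]
objective = multi:softprob
num_round = 100
eta = 0.3
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)  # in the real script: config.read("options.ini")

# Typed accessors avoid ad-hoc string parsing of numeric options.
params = {
    "objective": config.get("xgboost", "objective"),
    "num_round": config.getint("xgboost", "num_round"),
    "eta": config.getfloat("xgboost", "eta"),
}
print(params)
```

Keeping the options in an INI (or JSON) file like this means reruns are reproducible and the command line stays short.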