These are chat archives for Deep Learning API and Server
Aug 22 2016 00:56
this is probably a less-than-intelligent question: where does the Docker version output its files? I can't seem to find any of the server files.
Aug 22 2016 03:10
This is more of a Docker question than one specific to this project. Have you read up on how to share folders with your Docker container?
Aug 22 2016 04:16
you can log in to the Docker container with `docker exec -it docker_container_id bash`
the default models are under `/opt/models`; other files are under `/opt/deepdetect`
Aug 22 2016 13:37
Is it recommended to run multiple dede processes on different ports?
on the same machine
Aug 22 2016 14:03
not really, unless you have a specific reason in mind?
Aug 22 2016 14:21
ok, I want to have some kind of production-vs-test setup on one machine
Aug 22 2016 14:22
yes, that's one of the settings where you'd want to have separate instances
Aug 22 2016 14:22
sometimes I need to interrupt training calls, or there are crashes; those would happen on the test port, while production only sees the trained/finetuned models
Aug 22 2016 14:23
there shouldn't be crashes. We try to make the server robust even to user mistakes, though Caffe errors can be difficult to recover from. But any time you can crash the server deterministically, and not for lack of memory, you should report it as an issue
Aug 22 2016 15:15
Is there a simple way to get the previously running services back after a Ctrl-C or a crash? Right now I just recreate them one by one from the Python interface or with curl calls.
Aug 22 2016 15:16
no, you need to make a script and execute it. But there shouldn't be a crash :)
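A minimal sketch of such a script in Python, assuming a DeepDetect server listening on `localhost:8080`. The service name, `mllib` settings, and model repository path below are hypothetical placeholders to adapt to your own services:

```python
# Sketch of a service-recreation script for a DeepDetect server.
# Assumptions: server at localhost:8080; the service definition below
# (name, mllib, nclasses, repository path) is a placeholder example.
import json
import urllib.request

DEDE = "http://localhost:8080"  # assumed server address

# Hypothetical service definitions; adapt to the services you normally create.
SERVICES = {
    "imageserv": {
        "mllib": "caffe",
        "description": "image classifier",
        "type": "supervised",
        "parameters": {
            "input": {"connector": "image"},
            "mllib": {"nclasses": 1000},
        },
        "model": {"repository": "/opt/models/ggnet"},
    },
}

def creation_request(name, conf):
    """Build the URL and JSON body for a PUT /services/<name> call."""
    return f"{DEDE}/services/{name}", json.dumps(conf).encode()

def recreate_all():
    """Re-issue a service-creation PUT for every known service."""
    for name, conf in SERVICES.items():
        url, body = creation_request(name, conf)
        req = urllib.request.Request(url, data=body, method="PUT")
        with urllib.request.urlopen(req) as resp:
            print(name, resp.status)

if __name__ == "__main__":
    recreate_all()
```

Running it once after a restart replays all the creation calls, which is the same effect as re-entering them one by one with curl.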
Aug 22 2016 15:42
ok got it