These are chat archives for beniz/deepdetect
predict call to 25, and sometimes it works perfectly fine; other times it crashes with a segmentation fault. I saved my server log to the gist linked below, and I can also paste a sample predict call that I'm using (via a Python client in this case) if need be. The server is built with both Caffe and TF support, with TF on GPU and Caffe on CPU. While I have models loaded for both mllibs and occasionally hit all of them as a health check, the script I'm running only hits a Caffe model, so I would expect that to rule out any GPU-associated complications. This is running in a Docker container.
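For context, a DeepDetect predict call boils down to a JSON body POSTed to the server's `/predict` endpoint. The sketch below builds such a body; the service name, data URI, and the `best` output parameter are illustrative placeholders, not taken from the actual script in question:

```python
import json

# Sketch of the JSON body for a DeepDetect POST /predict call.
# "myserv" and the image path are placeholders, not from the original log.
def build_predict_payload(service, data):
    return {
        "service": service,          # name of the service created via PUT /services
        "parameters": {
            "input": {},             # input options, e.g. width/height for images
            "output": {"best": 3},   # return the top-3 classes per input
        },
        "data": data,                # list of URIs: file paths, URLs, or base64
    }

payload = build_predict_payload("myserv", ["/path/to/image.jpg"])
body = json.dumps(payload)
# This body would then be POSTed to http://localhost:8080/predict
print(body)
```

A crash that depends on the same payload sometimes succeeding and sometimes segfaulting usually points at the server side (e.g. concurrent access to a model) rather than the request body itself, which is why the server log gist is the more useful artifact here.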