These are chat archives for beniz/deepdetect
"gpu": trueis not specified in the PUT for service creation? we've been working around the CUDA shared streamexecutor context issue with Tensorflow by (we thought) restricting caffe models to be cpu-only and letting the tensorflow lib use the gpu on its own. But I'm getting the issue now again and I wonder if the caffe lib is still trying to use the gpu regardless. I can't use the
USE_CPU_ONLYflag at compilation b/c i'd still like TF to use gpu and that flag affects the entire project, but I'd like to restrict caffe to CPU only (since TF seems to use GPU if compiled with CUDA and a gpu is available regardless). thanks again man :)
"gpu": trueor even setting it to false won't force caffe to not use GPU?
Guarding the CUDA bits with `if (CUDA_FOUND AND NOT USE_CAFFE_CPU_ONLY)` and enabling that new var (`USE_CAFFE_CPU_ONLY`) should do the trick....
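A minimal sketch of what that guard could look like in CMake — the `USE_CAFFE_CPU_ONLY` name comes from the discussion above, but the surrounding structure is an illustration, not DeepDetect's actual build file:

```cmake
# Hypothetical sketch: a new option to keep the Caffe backend off the GPU
# while still building TensorFlow with CUDA support.
option(USE_CAFFE_CPU_ONLY "Build the Caffe backend without GPU support" OFF)

if (CUDA_FOUND AND NOT USE_CAFFE_CPU_ONLY)
  # GPU path: build Caffe with CUDA as before
else()
  # CPU path: CPU_ONLY is Caffe's standard define for a GPU-free build
  add_definitions(-DCPU_ONLY)
endif()
```

The point is that the new option only affects the Caffe portion of the build, so TF can still pick up CUDA independently.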