These are chat archives for beniz/deepdetect

10th Apr 2017
Emmanuel Benazera
@beniz
Apr 10 2017 17:01
@cchadowitz-pf tf build on CPU requires bazel version 0.4.3, see beniz/deepdetect#231
AFAIK it's a TF thing
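[A minimal sketch for checking the installed bazel against the pinned 0.4.3 before building; it assumes `bazel` is on `PATH` and that `bazel version` prints its standard `Build label:` line.]

```shell
# The TF CPU build here is pinned to bazel 0.4.3; a newer version (e.g. the
# 0.4.5 installed below) can break it. Check before kicking off the build.
required="0.4.3"
installed="$(bazel version 2>/dev/null | awk '/Build label/ {print $3}')"
if [ "$installed" != "$required" ]; then
  echo "bazel is '$installed', but this build needs $required" >&2
fi
```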
cchadowitz-pf
@cchadowitz-pf
Apr 10 2017 17:03
Interesting, I'll give it a shot, but unless it propagates through to when caffe is built, this may be unrelated:
/opt/deepdetect/src/caffeinputconns.cc: In member function 'int dd::ImgCaffeInputFileConn::images_to_db(const string&, const string&, const string&, const string&, const bool&, const string&)':
/opt/deepdetect/src/caffeinputconns.cc:84:22: error: 'class caffe::db::DB' has no member named 'Count'
  _db_batchsize = db->Count();
also:
# apt search bazel
Sorting... Done
Full Text Search... Done
bazel/unknown,now 0.4.5 amd64 [installed]
  Bazel is a tool that automates software builds and tests.
Emmanuel Benazera
@beniz
Apr 10 2017 17:04
your version of our custom Caffe is too old
cchadowitz-pf
@cchadowitz-pf
Apr 10 2017 17:05
how is that possible? it's grabbing caffe during make...
Emmanuel Benazera
@beniz
Apr 10 2017 17:05
check that you're on the latest commit
cchadowitz-pf
@cchadowitz-pf
Apr 10 2017 17:06
hah that's a pretty recent change i see, let me give the latest commit a try. i was on d7cbc37
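[The fix here amounts to making sure the local checkout, and therefore the pinned custom Caffe it pulls in during `make`, is current. A hedged sketch; the clone path `/opt/deepdetect` is assumed from the compile errors above, and `origin/master` is assumed to be the tracked branch.]

```shell
# Compare the local HEAD against the upstream tip before rebuilding.
repo="/opt/deepdetect"
if [ -d "$repo/.git" ]; then
  git -C "$repo" fetch origin
  echo "local:    $(git -C "$repo" rev-parse --short HEAD)"
  echo "upstream: $(git -C "$repo" rev-parse --short origin/master)"
  # If they differ: git -C "$repo" pull, then wipe the build dir
  # (rm -rf "$repo/build") so the pinned Caffe is re-fetched cleanly.
fi
```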
Ayush
@ayushsangani
Apr 10 2017 18:44
hello everyone!
I’m new to using deepdetect and getting set up now. I was following the steps listed in the documentation.
I’m not able to query the info endpoint on localhost:8080.
I’ve also checked that the docker image listens on port 8080, so I’m not sure what I’m doing wrong here
Ayush
@ayushsangani
Apr 10 2017 19:16
I got it, localhost was mapped to something else! stupid mistake
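[A mapping problem like this is quick to rule out; a sketch, assuming the container was started with the port published (e.g. `docker run -p 8080:8080 ...`):]

```shell
# See what "localhost" actually resolves to -- a stale /etc/hosts entry
# can shadow the usual loopback address.
getent hosts localhost
# Bypass name resolution entirely by hitting the loopback IP directly:
# curl http://127.0.0.1:8080/info
```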
cchadowitz-pf
@cchadowitz-pf
Apr 10 2017 23:57
@beniz So I have it building now (thanks for the pointers!), but even with my changes Caffe still seems to be using the GPU. Here's my diff with the changes that try to force Caffe to CPU only while letting TF build against and use CUDA and the GPU.
But if I watch nvidia-smi while starting up and loading my combination of Caffe and TF models, it starts grabbing GPU memory before it ever reaches any TF models (i.e. while the Caffe models are loading). And unless I put the TF models first, by the time it gets to the TF models I hit that same context error from TF:
F tensorflow/stream_executor/cuda/cuda_driver.cc:334] current context was not created by the StreamExecutor cuda_driver API: 0x3228000; a CUDA runtime call was likely performed without using a StreamExecutor context
Received status code at openimagesinceptionv3 (TF) test classification: 000
./startup.sh: line 402:    15 Aborted                 (core dumped) ${CURRDIR}/dede -host 0.0.0.0 -port ${SVCPORT}