Ananya Chaturvedi
@ananyachat
I tried that too
Emmanuel Benazera
@beniz
then your problem is elsewhere
what's your cmake version? cmake --version
Ananya Chaturvedi
@ananyachat
it is 3.14
do you think it could be because I'm using a Linux instance on a MacBook instead of an actual Linux OS?
Louis Jean
@Bycob
USE_TORCH, USE_TF etc. are persistent between cmake calls. Did you try to switch them on the same command line? E.g.
cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DUSE_TF=OFF
Emmanuel Benazera
@beniz
or rm -rf of the build dir yes...
Louis Jean
@Bycob
Normally cmake displays a list of all the options, but that may not be the case if there is an error during the configure step...
Something like this
-- RELEASE:               OFF
-- WARNING:               ON
-- USE_COMMAND_LINE:      ON
-- USE_JSON_API:          ON
-- USE_HTTP_SERVER:       ON
-- USE_HTTP_SERVER_OATPP: ON
-- BUILD_SPDLOG:          ON
-- BUILD_PROTOBUF:        ON
-- BUILD_TESTS:           ON
-- BUILD_TOOLS:           OFF
-- USE_CAFFE:             OFF
-- USE_CAFFE_CPU_ONLY:    OFF
-- USE_CAFFE_DEBUG:       OFF
-- USE_CAFFE2:            OFF
-- USE_CAFFE2_CPU_ONLY:   OFF
-- USE_TORCH:             ON
-- USE_TORCH_CPU_ONLY:    OFF
-- USE_SIMSEARCH:         OFF
-- USE_TF:                OFF
-- USE_TF_CPU_ONLY:       OFF
-- USE_NCNN:              OFF
-- USE_HDF5:              ON
-- USE_TENSORRT:          OFF
-- USE_DLIB:              OFF
-- USE_DLIB_CPU_ONLY:     OFF
-- USE_ANNOY:             OFF
-- USE_FAISS:             ON
-- USE_FAISS_CPU_ONLY:    OFF
-- USE_CUDNN:             OFF
-- USE_XGBOOST:           OFF
-- USE_XGBOOST_CPU_ONLY:  ON
-- USE_TSNE:              OFF
-- USE_BOOST_BACKTRACE:   ON
Ananya Chaturvedi
@ananyachat

USE_TORCH, USE_TF etc are persistent between cmake calls. Did you try to switch them on the same command line, e.g
cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DUSE_TF=OFF

Okay, this suggestion by @BynaryCobweb gave a different error message: "Could NOT find CUDNN (missing: CUDNN_LIBRARY CUDNN_INCLUDE_DIR)". So maybe the whole problem is that I have not installed cuDNN.

I'll try doing that
thanks
Emmanuel Benazera
@beniz
you don't need CUDNN if you have no GPU
For a CPU-only build, this should work fine: cmake .. -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DBUILD_TESTS=OFF -DUSE_CPU_ONLY=ON -DUSE_HTTP_SERVER_OATPP=ON -DUSE_HTTP_SERVER=OFF -DBUILD_SPDLOG=ON
you may want to completely wipe your build/ dir and start a build from scratch.
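The from-scratch build suggested above can be sketched as follows (run from the root of a deepdetect checkout; paths and the make step are assumptions, the flag set is the one given in this thread):

```shell
# CMake caches the USE_* options in build/CMakeCache.txt between runs,
# so wiping the build directory guarantees the new flags take effect.
rm -rf build
mkdir build && cd build
cmake .. -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DBUILD_TESTS=OFF \
         -DUSE_CPU_ONLY=ON -DUSE_HTTP_SERVER_OATPP=ON \
         -DUSE_HTTP_SERVER=OFF -DBUILD_SPDLOG=ON
make -j"$(nproc)"
```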
Ananya Chaturvedi
@ananyachat
oh okay
I'll try to start a build from scratch
thanks
It worked!
thanks a ton
Emmanuel Benazera
@beniz
you're welcome
Romain Guilmont
@rguilmont
Hello guys! Hope you're doing well :) Bad news: the memory leak we talked about weeks ago is still there... But good news: with @YaYaB we identified exactly the cause, here you have it :) jolibrain/deepdetect#1316
Emmanuel Benazera
@beniz
Hi @rguilmont sure thanks, someone will look into the gzip deflate. Maybe the code has changed while integrating the new oatpp layer.
This is vacation time, so this may take some time.
Romain Guilmont
@rguilmont
Enjoy your vacations ;)
YaYaB
@YaYaB
Hello guys! Hope you get some sun where you are ^^
I've spotted an issue related to the latest version of DD using TensorRT and a detection model (RefineDet); a script to reproduce it is included.
jolibrain/deepdetect#1324
Thanks in advance for your help :)
dgtlmoon
@dgtlmoon

Any recommendations for multi-label classification instead of building models for different classification classes (say color and size)? Is it just an lmdb extraction hack? I see there is

multi_label    bool    yes    false    whether to setup a multi label image task (caffe only)

but I'm unsure if there's something extra to do in the training stage, or what to expect, etc.

Emmanuel Benazera
@beniz
Hello, multi-label is when labels are not mutually exclusive. I believe we haven't used this for some time, as such datasets are rare in our industries.
Format is image path and class values separated by spaces.
It may well go wrong, let us know here.
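Based on the format described above (one image path per line, followed by that image's class values, space-separated), a hypothetical training file might look like this (paths and label indices are made up for illustration):

```
/data/train/img_001.jpg 0 3 7
/data/train/img_002.jpg 1
/data/train/img_003.jpg 2 5
```

with multi_label set to true when creating the service, per the documentation row quoted above.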
dgtlmoon
@dgtlmoon
thanks!
dgtlmoon
@dgtlmoon

Something up?

ProBook-440-G7:~/deepdetect/code/cpu$ CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.25.8) or chardet (2.3.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Pulling platform_ui (jolibrain/platform_ui:v0.18.0)...
ERROR: manifest for jolibrain/platform_ui:v0.18.0 not found: manifest unknown: manifest unknown
dgtlmoon@dgtlmoon-HP-ProBook-440-G7:~/deepdetect/code/cpu$

last commit is

commit 36caf768b6e57c03fd3132495f1834bf9fc58608
Author: Emmanuel Benazera <emmanuel.benazera@jolibrain.com>
Date:   Fri Jun 11 17:03:14 2021 +0200

    feat(platform_ui): upgrade to v0.18.0
dgtlmoon
@dgtlmoon
@alx looks like you forgot to push v0.18.0 to dockerhub, I've created a nice little PR, hope it helps https://github.com/jolibrain/platform_ui/pull/29/files
Emmanuel Benazera
@beniz
hi @dgtlmoon you can use the last v0.17.6, it's because our release system does not push when two versions are identical :(
you need to modify the docker-compose file
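The suggested workaround amounts to a one-line change in the compose file. A sketch, assuming the service name and layout of the stock dd_platform_docker compose file (check your own docker-compose.yml):

```yaml
services:
  platform_ui:
    # pin to the last tag available on Docker Hub at the time
    image: jolibrain/platform_ui:v0.17.6
```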
dgtlmoon
@dgtlmoon
@beniz yeah, but this is no good for people new to the project... it's always nice when the documentation works out of the box, right?
two versions are identical... mmm... but that's ok, right? you can have the same tag (v0.18.0) in different git repos for different parts of your project (the platform_ui, and other components)
Alexandre Girard
@alx
Hi @dgtlmoon , thanks for your feedback, v0.18.0 has been pushed on dockerhub: https://hub.docker.com/r/jolibrain/platform_ui/tags
dgtlmoon
@dgtlmoon
Glad to be of help :)
dgtlmoon
@dgtlmoon
@alx one more thing..
$ sh update.sh
update.sh: 8: function: not found
-e example with *another_cpu* project name: ./update.sh -p another_cpu

update.sh: 10: Syntax error: "}" unexpected
Alexandre Girard
@alx
https://github.com/jolibrain/dd_platform_docker/issues/56#issuecomment-833396315 similar issue, can be fixed by using bash instead of sh, I'll fix the documentation
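The failure above is dash (Ubuntu's /bin/sh) rejecting bash's `function` keyword. A minimal reproduction, with a hypothetical script path:

```shell
# Write a tiny script that uses bash's "function" keyword.
cat > /tmp/demo.sh <<'EOF'
function greet {
  echo "hello"
}
greet
EOF

# POSIX sh (dash) treats "function" as a command name ("function: not found"),
# runs the body as plain commands, then errors on the bare "}" --
# the same pair of errors update.sh produced above.
sh /tmp/demo.sh || true

# bash understands the keyword and runs the function normally.
bash /tmp/demo.sh
```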
dgtlmoon
@dgtlmoon
@alx awesome, yeah, best is if the docs are fixed :)
dgtlmoon
@dgtlmoon
@alx question, https://github.com/jolibrain/platform_data/blob/main/Dockerfile why does this use rsync at all to copy? why not just extract to the target anyway?
RUN wget -O - https://www.deepdetect.com/downloads/platform/pretrained_latest.tar.gz | tar zxf - -C models/
RUN wget -O - https://www.deepdetect.com/downloads/platform/pretrained_latest.tar.gz | tar zxf - -C /platform/models/
Alexandre Girard
@alx
@dgtlmoon I don't remember the precise issues we had around fetching these large archives, but we had some, and this workflow worked for us at the time, so we haven't modified it since then
dgtlmoon
@dgtlmoon

worked fine in jolibrain/deepdetect_gpu:v0.14.0 but not in v0.15.0:

$ nvidia-docker   run jolibrain/deepdetect_gpu:v0.15.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=11.1 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 --pid=10532 /var/lib/docker/overlay2/173ecbc018515b1d7e625d8c63e0e20a3ba60c6505327129636ecabbea9079ba/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled

Maybe a change in cuda version requirements? 10.2 here

$ nvidia-smi
Tue Aug 17 07:14:21 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:00:05.0  On |                  Off |
| 26%   29C    P8     8W / 180W |    310MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
Emmanuel Benazera
@beniz
nvidia drivers are incompatible with these old images, use 0.18
dgtlmoon
@dgtlmoon
@beniz same error using tag :v0.18.0
$ nvidia-docker   run jolibrain/deepdetect_gpu:v0.18.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=11.1 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 --pid=4936 /var/lib/docker/overlay2/88d414554261b3ec66d24267bb784ceee1c9d562d01b388acee1d1d49444a314/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
Emmanuel Benazera
@beniz
your environment issue then, try running nvidia-smi