Emmanuel Benazera
@beniz
you don't need cuDNN if you have no GPU
For a CPU-only build, this should work fine: cmake .. -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DBUILD_TESTS=OFF -DUSE_CPU_ONLY=ON -DUSE_HTTP_SERVER_OATPP=ON -DUSE_HTTP_SERVER=OFF -DBUILD_SPDLOG=ON
you may want to completely wipe your build/ dir and start the build from scratch.
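The wipe-and-rebuild advice above can be sketched as a short script. This is a minimal sketch, not the project's official build script: `SRC_DIR` here is a throwaway temp directory standing in for your deepdetect checkout, and the real `cmake`/`make` invocation is shown only as a comment since it needs the actual source tree.

```shell
# Sketch: wipe build/ so stale CMake cache entries (e.g. a previously
# detected cuDNN) do not leak into a CPU-only reconfigure.
set -e
SRC_DIR="$(mktemp -d)"                 # stand-in for your deepdetect checkout
mkdir -p "$SRC_DIR/build"
touch "$SRC_DIR/build/CMakeCache.txt"  # pretend: stale cache from an earlier build
rm -rf "$SRC_DIR/build"                # the wipe: cache and generated files go away
mkdir "$SRC_DIR/build" && cd "$SRC_DIR/build"
# In a real checkout you would now run:
#   cmake .. -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DBUILD_TESTS=OFF \
#            -DUSE_CPU_ONLY=ON -DUSE_HTTP_SERVER_OATPP=ON \
#            -DUSE_HTTP_SERVER=OFF -DBUILD_SPDLOG=ON && make -j"$(nproc)"
[ ! -e CMakeCache.txt ] && echo "build dir is clean"
```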
Ananya Chaturvedi
@ananyachat
oh okay
I'll try to start a build from scratch
thanks
It worked!
thanks a ton
Emmanuel Benazera
@beniz
you're welcome
Romain Guilmont
@rguilmont
Hello guys! Hope you're doing well :) Bad news: the memory leak we talked about weeks ago is still there... But good news: with @YaYaB we identified exactly the cause, here you have it :) jolibrain/deepdetect#1316
Emmanuel Benazera
@beniz
Hi @rguilmont sure thanks, someone will look into the gzip deflate. Maybe the code has changed while integrating the new oatpp layer.
This is vacation time, so this may take some time.
Romain Guilmont
@rguilmont
Enjoy your vacations ;)
YaYaB
@YaYaB
Hello guys! Hope you get some sun where you are ^^
I've spotted an issue related to the latest version of DD using TensorRT and a detection model (refinedet); a script is included to replicate it.
jolibrain/deepdetect#1324
Thanks in advance for your help :)
dgtlmoon
@dgtlmoon

Any recommendations for multi-label classification, instead of building separate models for each classification class (say color and size)? Is it just an lmdb extraction hack? I see there is

multi_label    bool    yes    false    whether to setup a multi label image task (caffe only)

but I'm unsure if there's something extra to do in the training stage, or what to expect, etc.

Emmanuel Benazera
@beniz
Hello, multi-label is when labels are not mutually exclusive. I believe we haven't used this for some time, as such datasets are rare in our industries.
The format is the image path followed by the class values, separated by spaces.
It may well go wrong; let us know here.
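The format described above (image path, then class values, space-separated) can be sketched as a tiny sample training file. The paths and class indices are made up for illustration:

```shell
# Hypothetical multi-label training file: one line per image,
# image path followed by its class indices, separated by spaces.
cat > /tmp/train_multilabel.txt <<'EOF'
/data/img/shirt_001.jpg 0 3
/data/img/shirt_002.jpg 1 3 5
/data/img/shirt_003.jpg 2
EOF
wc -l < /tmp/train_multilabel.txt   # one line per training image
```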
dgtlmoon
@dgtlmoon
thanks!
dgtlmoon
@dgtlmoon

Something up?

ProBook-440-G7:~/deepdetect/code/cpu$ CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.25.8) or chardet (2.3.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Pulling platform_ui (jolibrain/platform_ui:v0.18.0)...
ERROR: manifest for jolibrain/platform_ui:v0.18.0 not found: manifest unknown: manifest unknown
dgtlmoon@dgtlmoon-HP-ProBook-440-G7:~/deepdetect/code/cpu$

last commit is

commit 36caf768b6e57c03fd3132495f1834bf9fc58608
Author: Emmanuel Benazera <emmanuel.benazera@jolibrain.com>
Date:   Fri Jun 11 17:03:14 2021 +0200

    feat(platform_ui): upgrade to v0.18.0
dgtlmoon
@dgtlmoon
@alx looks like you forgot to push v0.18.0 to dockerhub. I've created a nice little PR, hope it helps: https://github.com/jolibrain/platform_ui/pull/29/files
Emmanuel Benazera
@beniz
hi @dgtlmoon you can use the previous v0.17.6; it's because our release system does not push when two versions are identical :(
you need to modify the docker-compose file
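Modifying the compose file comes down to pinning the `platform_ui` image tag. A minimal sketch, assuming the service layout of the compose file (the file below is a made-up stand-in, not the project's actual docker-compose.yml):

```shell
# Stand-in compose file with the tag that was missing on dockerhub.
cat > /tmp/docker-compose.yml <<'EOF'
services:
  platform_ui:
    image: jolibrain/platform_ui:v0.18.0
EOF
# Pin to the previously published tag instead.
sed -i 's|platform_ui:v0.18.0|platform_ui:v0.17.6|' /tmp/docker-compose.yml
grep 'image:' /tmp/docker-compose.yml
```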
dgtlmoon
@dgtlmoon
@beniz yeah, but this is no good for people new to the project... it's always nice when the documentation works out of the box, right?
two versions are identical... mmm... but that's ok, right? You can have the same tag (v0.18.0) in different git repos for different parts of your project (the platform_ui, and other components).
Alexandre Girard
@alx
Hi @dgtlmoon , thanks for your feedback, v0.18.0 has been pushed on dockerhub: https://hub.docker.com/r/jolibrain/platform_ui/tags
dgtlmoon
@dgtlmoon
Glad to be of help :)
dgtlmoon
@dgtlmoon
@alx one more thing..
$ sh update.sh
update.sh: 8: function: not found
-e example with *another_cpu* project name: ./update.sh -p another_cpu

update.sh: 10: Syntax error: "}" unexpected
Alexandre Girard
@alx
https://github.com/jolibrain/dd_platform_docker/issues/56#issuecomment-833396315 similar issue, can be fixed by using bash instead of sh, I'll fix the documentation
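The `function: not found` error above is a bashism biting under `sh`: on Debian/Ubuntu, `/bin/sh` is dash, which rejects the `function name { ... }` definition syntax that bash accepts. A minimal reproduction (the script body is a made-up stand-in for update.sh):

```shell
# Demo: the same script fails under dash-style sh but runs under bash.
cat > /tmp/demo.sh <<'EOF'
function usage {
  echo "usage: ./update.sh -p <project>"
}
usage
EOF
# Under dash, "function" is not a keyword -> "function: not found",
# then the bare "}" is a syntax error, just like the update.sh output above.
sh /tmp/demo.sh >/dev/null 2>&1 || echo "fails under sh"
bash /tmp/demo.sh
```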
dgtlmoon
@dgtlmoon
@alx awesome, yeah, best is if the docs are fixed :)
dgtlmoon
@dgtlmoon
@alx question, https://github.com/jolibrain/platform_data/blob/main/Dockerfile why does this use rsync at all to copy? why not just extract to the target anyway?
RUN wget -O - https://www.deepdetect.com/downloads/platform/pretrained_latest.tar.gz | tar zxf - -C models/
RUN wget -O - https://www.deepdetect.com/downloads/platform/pretrained_latest.tar.gz | tar zxf - -C "/platform/models/"
Alexandre Girard
@alx
@dgtlmoon I don't remember the precise issues we had around fetching these large archives, but we had some, and this workflow worked for us at the time, so we haven't modified it since.
dgtlmoon
@dgtlmoon

It worked fine in jolibrain/deepdetect_gpu:v0.14.0, but in v0.15.0:

$ nvidia-docker   run jolibrain/deepdetect_gpu:v0.15.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=11.1 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 --pid=10532 /var/lib/docker/overlay2/173ecbc018515b1d7e625d8c63e0e20a3ba60c6505327129636ecabbea9079ba/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled

Maybe a change in cuda version requirements? 10.2 here

$ nvidia-smi
Tue Aug 17 07:14:21 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:00:05.0  On |                  Off |
| 26%   29C    P8     8W / 180W |    310MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
Emmanuel Benazera
@beniz
nvidia drivers are incompatible with these old images; use 0.18
dgtlmoon
@dgtlmoon
@beniz same error using tag :v0.18.0
$ nvidia-docker   run jolibrain/deepdetect_gpu:v0.18.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=11.1 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 --pid=4936 /var/lib/docker/overlay2/88d414554261b3ec66d24267bb784ceee1c9d562d01b388acee1d1d49444a314/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
Emmanuel Benazera
@beniz
it's an environment issue on your side then; try running nvidia-smi
and running it from inside a docker container as well
dgtlmoon
@dgtlmoon

$ docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
Tue Aug 17 12:49:10 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:00:05.0  On |                  Off |
| 26%   25C    P8     8W / 180W |    261MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
all fine with my environment
$ nvidia-smi
Tue Aug 17 08:48:19 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:00:05.0  On |                  Off |
| 26%   26C    P8     8W / 180W |    261MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1400      G   /usr/lib/xorg/Xorg                           127MiB |
|    0      1764      G   /usr/bin/gnome-shell                         130MiB |
+-----------------------------------------------------------------------------+
Emmanuel Benazera
@beniz
try cuda:11.3
new drivers are 465.xxx+
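A rough way to check whether the host driver is new enough for a given image, before docker even starts. The 465 threshold comes from the "465.xxx+" remark above; `DRIVER_VERSION` is hard-coded here for illustration, where you would normally substitute the `nvidia-smi` query shown in the comment:

```shell
# Rough driver-vs-image compatibility check (assumption: cuda:11.3-based
# images want a 465+ driver, per the remark above).
DRIVER_VERSION="440.100"   # substitute: nvidia-smi --query-gpu=driver_version --format=csv,noheader
REQUIRED=465
major="${DRIVER_VERSION%%.*}"
if [ "$major" -ge "$REQUIRED" ]; then
  echo "driver ok for cuda:11.3 images"
else
  echo "driver too old: $DRIVER_VERSION < $REQUIRED.xx"
fi
```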
dgtlmoon
@dgtlmoon
thanks!
dgtlmoon
@dgtlmoon

@alx I like this project a lot..

but we had some and this workflow worked for us at the time so we havn't modified it since then

I could rework the workflow and make it a bit more automated for GitHub to publish releases, and see if we can streamline those builds a little more.

Carl
@ComputerCarl
Hello all. I am going to send a bunch of photos to face detection (the faces service) as data URLs. I would like to send as little data as possible over the network.
My questions are:
  • What size can I reduce an image to, and/or how far can I lower the quality, before the service can no longer accurately predict the results? I imagine this changes with the number of faces in an image; e.g., should each face be at least 128 pixels?
  • Is the image already reduced going into the model? E.g., if I send a 1920px-wide image, is it automatically scaled to some size? Is this documented in the API, and what would the setting be labeled?
Emmanuel Benazera
@beniz
Hello, images are automatically resized. I believe the face model uses 512x512 images, but you can send lower res.
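Since the server resizes images anyway, downscaling before encoding mainly saves bandwidth. Building the data URL itself is straightforward; a minimal sketch (the file here is a placeholder, not a real JPEG, and any client-side resizing tool is assumed to have run first):

```shell
# Sketch: wrap an (already downscaled) image file as a data URL.
printf 'fakejpegbytes' > /tmp/sample.jpg   # stand-in for a real, resized JPEG
# -w0 disables GNU base64 line wrapping; fall back to tr for other base64s.
b64="$(base64 -w0 /tmp/sample.jpg 2>/dev/null || base64 /tmp/sample.jpg | tr -d '\n')"
echo "data:image/jpeg;base64,${b64}"
```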
AL
@ohhal
Is there a substitute for this product?
Emmanuel Benazera
@beniz
not for us ;)
The C++ toolchain is actually undergoing vast improvements for real-time processing; the PRs should be in for 0.20.