Emmanuel Benazera
@beniz
It may well go wrong, let us know here.
dgtlmoon
@dgtlmoon
thanks!
dgtlmoon
@dgtlmoon

Something up?

ProBook-440-G7:~/deepdetect/code/cpu$ CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.25.8) or chardet (2.3.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Pulling platform_ui (jolibrain/platform_ui:v0.18.0)...
ERROR: manifest for jolibrain/platform_ui:v0.18.0 not found: manifest unknown: manifest unknown
dgtlmoon@dgtlmoon-HP-ProBook-440-G7:~/deepdetect/code/cpu$

last commit is

commit 36caf768b6e57c03fd3132495f1834bf9fc58608
Author: Emmanuel Benazera <emmanuel.benazera@jolibrain.com>
Date:   Fri Jun 11 17:03:14 2021 +0200

    feat(platform_ui): upgrade to v0.18.0
dgtlmoon
@dgtlmoon
@alx looks like you forgot to push v0.18.0 to dockerhub, I've created a nice little PR, hope it helps https://github.com/jolibrain/platform_ui/pull/29/files
Emmanuel Benazera
@beniz
hi @dgtlmoon you can use the last v0.17.6, it's because our release system does not push when two versions are identical :(
you need to modify the docker-compose file
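For example (just a sketch, you can also edit docker-compose.yml by hand):

# point the platform_ui image at the previous tag
sed -i 's#jolibrain/platform_ui:v0.18.0#jolibrain/platform_ui:v0.17.6#' docker-compose.yml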
dgtlmoon
@dgtlmoon
@beniz yeah but this is no good for new people to the project.. it's always nice when the documentation works out of the box right?
two versions are identical... mmm.. but that's ok right? you can have the same tag (v0.18.0) in different git repos for different parts of your project (the platform_ui, and other components)
Alexandre Girard
@alx
Hi @dgtlmoon , thanks for your feedback, v0.18.0 has been pushed on dockerhub: https://hub.docker.com/r/jolibrain/platform_ui/tags
dgtlmoon
@dgtlmoon
Glad to be of help :)
dgtlmoon
@dgtlmoon
@alx one more thing..
$ sh update.sh
update.sh: 8: function: not found
-e example with *another_cpu* project name: ./update.sh -p another_cpu

update.sh: 10: Syntax error: "}" unexpected
Alexandre Girard
@alx
https://github.com/jolibrain/dd_platform_docker/issues/56#issuecomment-833396315 similar issue, can be fixed by using bash instead of sh, I'll fix the documentation
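In other words, run it with bash (update.sh uses the bash-only `function` keyword, which /bin/sh -- dash on Ubuntu -- doesn't understand):

bash ./update.sh
# or, with a project name as in the script's own usage message:
bash ./update.sh -p another_cpu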
dgtlmoon
@dgtlmoon
@alx awesome, yeah, best is if the docs are fixed :)
dgtlmoon
@dgtlmoon
@alx question, https://github.com/jolibrain/platform_data/blob/main/Dockerfile why does this use rsync at all to copy? why not just extract to the target anyway?
RUN wget -O - https://www.deepdetect.com/downloads/platform/pretrained_latest.tar.gz | tar zxf - -C models/
RUN wget -O - https://www.deepdetect.com/downloads/platform/pretrained_latest.tar.gz | tar zxf - -C /platform/models/
Alexandre Girard
@alx
@dgtlmoon I don't remember the precise issues we had around fetching these large archives, but we had some and this workflow worked for us at the time so we haven't modified it since then
dgtlmoon
@dgtlmoon

worked fine in jolibrain/deepdetect_gpu:v0.14.0, but in v0.15.0:

$ nvidia-docker   run jolibrain/deepdetect_gpu:v0.15.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=11.1 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 --pid=10532 /var/lib/docker/overlay2/173ecbc018515b1d7e625d8c63e0e20a3ba60c6505327129636ecabbea9079ba/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled

Maybe a change in cuda version requirements? 10.2 here

$ nvidia-smi
Tue Aug 17 07:14:21 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:00:05.0  On |                  Off |
| 26%   29C    P8     8W / 180W |    310MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
Emmanuel Benazera
@beniz
nvidia drivers are incompatible with these old images, use 0.18
dgtlmoon
@dgtlmoon
@beniz same error using tag :v0.18.0
$ nvidia-docker   run jolibrain/deepdetect_gpu:v0.18.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=11.1 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451 --pid=4936 /var/lib/docker/overlay2/88d414554261b3ec66d24267bb784ceee1c9d562d01b388acee1d1d49444a314/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
Emmanuel Benazera
@beniz
it's an issue with your environment then, try running nvidia-smi
and running it from a docker as well
dgtlmoon
@dgtlmoon

$ docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
Tue Aug 17 12:49:10 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:00:05.0  On |                  Off |
| 26%   25C    P8     8W / 180W |    261MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
all fine with my environment
$ nvidia-smi
Tue Aug 17 08:48:19 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:00:05.0  On |                  Off |
| 26%   26C    P8     8W / 180W |    261MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1400      G   /usr/lib/xorg/Xorg                           127MiB |
|    0      1764      G   /usr/bin/gnome-shell                         130MiB |
+-----------------------------------------------------------------------------+
Emmanuel Benazera
@beniz
try cuda:11.3
new drivers are 465.xxx+
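e.g. a quick check that your host driver can actually run a CUDA 11 container (exact image tag is a guess, pick any 11.3 tag available on Docker Hub):

docker run --runtime=nvidia --rm nvidia/cuda:11.3.0-base-ubuntu20.04 nvidia-smi
# with a 440.xx driver (CUDA 10.2) this will fail with a requirement error,
# the same thing the deepdetect_gpu images are complaining about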
dgtlmoon
@dgtlmoon
thanks!
dgtlmoon
@dgtlmoon

@alx I like this project a lot..

but we had some and this workflow worked for us at the time so we haven't modified it since then

I could rework the workflow and make it a little more automated for GitHub to publish releases, see if we can streamline those builds a little more, etc.

Carl
@ComputerCarl
Hello all. I am going to send a bunch of photos to face detection (the faces service) by dataurl. I would like to send as little data as possible over the network.
My questions are:
  • What size can I reduce an image down to, and/or how much can I lower the quality, before the service can no longer accurately predict the results? I imagine this changes with the number of faces in an image; like should each face be at least 128 pixels?
  • Is the image already reduced going into the model? E.g., I send a 1920px-wide image and it's automatically being scaled to some size? Is this documented in the API, and what would the setting be labeled?
Emmanuel Benazera
@beniz
Hello, images are automatically resized. I believe the face model uses 512x512 images but you can send lower res.
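So pre-shrinking mainly saves bandwidth. A rough sketch of what that could look like (the service name face, the host/port and the 512px target are assumptions; DeepDetect's /predict accepts base64-encoded images in the data array):

# shrink only if larger than 512px on a side (ImageMagick), then base64-encode
convert input.jpg -resize '512x512>' /tmp/small.jpg
B64=$(base64 -w0 /tmp/small.jpg)
# send it to the detection service
curl -s -X POST "http://localhost:8080/predict" -d '{
  "service": "face",
  "parameters": {"output": {"bbox": true, "confidence_threshold": 0.3}},
  "data": ["'"$B64"'"]
}'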
AL
@ohhal
Is there a substitute for this product?
Emmanuel Benazera
@beniz
not for us ;)
The C++ toolchain is actually undergoing vast improvements for real-time processing; the PRs should be in for 0.20
YaYaB
@YaYaB
Hello !!
I've found an issue related to chain requests (quite important). I've put all the details in the issue, along with a script to reproduce it:
jolibrain/deepdetect#1348
Louis Jean
@Bycob
Hi @YaYaB, I could reproduce the bug on v0.19.0, however we refactored the chains recently and it looks like the bug is solved if you use the latest master.
You can try the latest docker images (tag ci-master) to check that the bug is gone.
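e.g. (image name assumed from the GPU image mentioned earlier in this thread; use the CPU variant if that's what you run):

docker pull jolibrain/deepdetect_gpu:ci-master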
dgtlmoon
@dgtlmoon
Long time no see :) https://www.deepdetect.com/blog/18-torch-detection/ "The rational is that easier object detection tasks can use lighter architectures and unlock high FPS, i.e. up around ~1500FPS on a single GPU" that's incredible !
dgtlmoon
@dgtlmoon
thoughts - maybe instead of having a chain which is object detector -> then classifier, I should just be training the object detector with each class? like red tshirt, black tshirt, etc
Emmanuel Benazera
@beniz
Hi @dgtlmoon thanks for the kind words. Usually the object detector will do the classification just right, as well as the localization. If you'd like to combine networks instead, you can train the detector with a single class.
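Concretely, the fine-grained labels just become the detector's classes. A hedged sketch of such a service creation (names, template, sizes and class count are illustrative only, and detection-specific training options are omitted; check the DeepDetect docs for the exact fields):

curl -X PUT "http://localhost:8080/services/tshirt_detector" -d '{
  "mllib": "caffe",
  "description": "detector whose classes are the fine-grained labels (red tshirt, black tshirt, ...)",
  "type": "supervised",
  "parameters": {
    "input": {"connector": "image", "width": 300, "height": 300},
    "mllib": {"template": "ssd_300", "nclasses": 11}
  },
  "model": {"repository": "/opt/models/tshirt_detector"}
}'
# nclasses = 10 product classes + 1 background class (detection models count background)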
Jérémy PUBS
@jaimepaslespubs_gitlab

Hello
I'm trying to run livedetect on my Raspberry Pi 3.
I followed this tutorial:
https://github.com/jolibrain/livedetect/wiki/Step-by-step-for-Raspberry-Pi-3

And ran this command:
./livedetect-rpi3 \
--port 8080 \
--host 127.0.0.1 \
--mllib ncnn \
--width 300 --height 300 \
--detection \
--create --repository /opt/models/voc/ \
--init "https://www.deepdetect.com/models/init/ncnn/squeezenet_ssd_voc_ncnn_300x300.tar.gz" \
--confidence 0.3 \
-v INFO \
-P "0.0.0.0:8888" \
--service voc \
--nclasses 21

I changed the permissions:
sudo chown pi. ./models
sudo chmod 777 ./models

The console output is:
[✔] [INFO] Creating service..
2021/10/13 21:35:12 creation status= 400
[✖] [ERROR] Unable to create service!
[✖] [ERROR] Error: BadRequest
[✔] [INFO] Starting capture
[✔] [INFO] Device: /dev/video0
[✔] [INFO] Starting web preview on 0.0.0.0:8888
=========================================================================================================================================================================================[✖] [WARNING] Model size specified as parameters can't be use for capture with this camera.
[✖] [WARNING] Input image will be resized during DeepDetect processing.
[✔] [1] [INFO] Processing image 2021-10-13-21-35-13
[✔] [1] [INFO] Picture processed in 54.332765ms
=========================================================================================================================================================================================[✔] [2] [INFO] Processing image 2021-10-13-21-35-13
[✔] [2] [INFO] Picture processed in 28.805961ms
=========================================================================================================================================================================================[✔] [3] [INFO] Processing image 2021-10-13-21-35-13
[✔] [3] [INFO] Picture processed in 31.212882ms
...

The docker log is not very helpful:
[2021-10-13 20:35:12.557] [api] [info] Downloading init model https://www.deepdetect.com/models/init/ncnn/squeezenet_ssd_voc_ncnn_300x300.tar.gz
[2021-10-13 20:35:12.719] [api] [error] 172.17.0.1 "PUT /services/voc" 400 161
[2021-10-13 20:35:13.185] [api] [error] 172.17.0.1 "POST /predict" 400 0
[2021-10-13 20:35:13.216] [api] [error] 172.17.0.1 "POST /predict" 400 0
[2021-10-13 20:35:13.250] [api] [error] 172.17.0.1 "POST /predict" 400 0
[2021-10-13 20:35:13.283] [api] [error] 172.17.0.1 "POST /predict" 400 0

The directory $HOME/models/voc is created after the livedetect-rpi3 command.

Where is my mistake?

Emmanuel Benazera
@beniz
It says the service cannot be created. Also the docker image is very certainly completely outdated, though it should work anyway I guess.
We're not building and pushing the rpi3 docker automatically these days.
BadRequest usually means the API call has an issue.
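One way to see the actual error is to replay the service creation by hand and read the body of the 400 response (this JSON only mirrors the livedetect flags above and the usual DeepDetect fields; it's a debugging sketch, not a verified config):

curl -s -X PUT "http://127.0.0.1:8080/services/voc" -d '{
  "mllib": "ncnn",
  "description": "voc detection",
  "type": "supervised",
  "parameters": {
    "input": {"connector": "image", "width": 300, "height": 300},
    "mllib": {"nclasses": 21}
  },
  "model": {"repository": "/opt/models/voc/",
            "init": "https://www.deepdetect.com/models/init/ncnn/squeezenet_ssd_voc_ncnn_300x300.tar.gz"}
}'
# the JSON body returned with the 400 contains the real error message,
# which the dd server log above (status line only) does not show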
dgtlmoon
@dgtlmoon
@beniz I'm still a fan of squeezenet for what I'm doing (seems to yield the best results), any suggestions for other pretrained nets to try that have come along in the last 12 months?

If you'd like to combine networks instead you can train a detector with a single class instead.

hmm don't quite get this, how would that do classification? (I'm dealing with about 10 or so classes atm)

@beniz https://github.com/jolibrain/livedetect/wiki/Step-by-step-for-Raspberry-Pi-3

Also the docker is very certainly completely outdated, though it should work anyways I guess.
Would be worth putting this information on that page to save other people time in case of an issue

Jérémy PUBS
@jaimepaslespubs_gitlab

Hi,

I'm trying to build a docker image for a Raspberry Pi device.

Following the instructions here: https://github.com/jolibrain/deepdetect/tree/master/docker
I got these errors:
Could not find the following Boost libraries
CMake 3.14 or higher is required. You are running version 3.10.2

Installing the system package libboost-all-dev resolved the first problem.
For cmake, I successfully compiled version 3.21.3.

I then tried to build deepdetect, but there is an error:
FATAL,USE_FAISS=ON needs CUDA installed
CMake Error at /usr/local/share/cmake-3.21/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find CUDNN (missing: CUDNN_LIBRARY CUDNN_INCLUDE_DIR)
Call Stack (most recent call first):
/usr/local/share/cmake-3.21/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
cmake/FindCUDNN.cmake:92 (find_package_handle_standard_args)
CMakeLists.txt:361 (find_package)

I launched with the option USE_FAISS=OFF, same problem:
cmake .. -DUSE_NCNN=ON -DRPI3=ON -DUSE_HDF5=OFF -DUSE_CAFFE=OFF -DRELEASE=OFF -USE_FAISS=OFF

Do you have any leads to help me?

Emmanuel Benazera
@beniz
Hi, -DUSE_SIMSEARCH=OFF ?
There's the build.sh script at the repository's root as well if that's easier.
If your rpi3 build does not succeed, please open an issue, someone will look at it. We're not using the pi these days.
Emmanuel Benazera
@beniz
Also you appear to be missing the -D with USE_FAISS
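Putting both remarks together, the configure line would look something like this (same flags as yours otherwise):

cmake .. -DUSE_NCNN=ON -DRPI3=ON -DUSE_HDF5=OFF -DUSE_CAFFE=OFF -DRELEASE=OFF -DUSE_FAISS=OFF -DUSE_SIMSEARCH=OFF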