dgtlmoon
@dgtlmoon
not sure if it's me just yet
Emmanuel Benazera
@beniz
docker?
dgtlmoon
@dgtlmoon
dd@afe456179096:/opt/deepdetect/build/main$ ./dede
./dede: error while loading shared libraries: libprotobuf.so.3.11.4.0: cannot open shared object file: No such file or directory
in the dede docker there is:
dd@afe456179096:/opt/deepdetect/build/main$ dpkg -l|grep libprotobuf
ii  libprotobuf10:amd64             3.0.0-9.1ubuntu1                    amd64        protocol buffers C++ library

dd@afe456179096:/opt/deepdetect/build/main$ cat /var/lib/dpkg/info/libprotobuf10\:amd64.list 
/usr/lib/x86_64-linux-gnu/libprotobuf.so.10.0.0
dgtlmoon
@dgtlmoon
dd@afe456179096:/opt/deepdetect/build/main$ find /|grep libprotobuf            
/opt/deepdetect/build/lib/libprotobuf.so
/opt/deepdetect/build/lib/libprotobuf.so.3.11.4.0
/opt/deepdetect/build/lib/libprotobuf-lite.so
/opt/deepdetect/build/lib/libprotobuf-lite.so.3.11.4.0
your ldpath config is broken perhaps
Emmanuel Benazera
@beniz
hi @dgtlmoon, protobuf is built internally to avoid conflicts, what docker are you using?
dgtlmoon
@dgtlmoon
good morning :) I'm using the current jolibrain/deepdetect_cpu, just ran a docker-compose pull
dd@afe456179096:/opt/deepdetect/build/main$ ldconfig -v 2>/dev/null | grep -v ^$'\t'
/usr/local/lib:
/lib/x86_64-linux-gnu:
/usr/lib/x86_64-linux-gnu:
/lib:
/usr/lib:
I'm using my own start-up script by the way.. ahhh
Emmanuel Benazera
@beniz
I believe the docker containers are now tested by our CI, so this should not happen. Can I reproduce it?
dgtlmoon
@dgtlmoon
I think it's because I'm starting the container this way....
  deepdetect:
    image: jolibrain/deepdetect_cpu
    command: bash -c 'LD_LIBRARY_PATH=/opt/deepdetect/build/lib/; export LD_LIBRARY_PATH; ./dede -host 0.0.0.0 & sleep 3; /init.sh; wait;'
    container_name: tss_dd
    volumes:
      # Curl doesn't exist there and we aren't root :(
      - ./deepdetect/curl:/usr/bin/curl
      - ./deepdetect/init.sh:/init.sh
      - ./deepdetect/models:/models
      - ./deepdetect/models-classifier:/models-classifier
      - ./deepdetect/models-tag-classifier:/models-tag-classifier

    expose:
      - 8080
    networks:
      - tssnet
    restart: always
but the weird thing is, this only started happening in the last couple of weeks
I added that LD_LIBRARY_PATH config just now and everything is fine; without it, it won't start
My init.sh creates a few services from within the container, but thinking about it, I don't need to run that from inside the container.... could be my weird brain at work
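For reference, the workaround boils down to pointing the dynamic loader at DD's bundled libraries before launching dede; a minimal sketch using the paths from the find output above (adjust if your layout differs):

# make the loader resolve DD's internally built protobuf
export LD_LIBRARY_PATH=/opt/deepdetect/build/lib
cd /opt/deepdetect/build/main
./dede -host 0.0.0.0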
Emmanuel Benazera
@beniz
sure, no worries, if you see something that would ease usage and that we should add on the dd side, let me know
dgtlmoon
@dgtlmoon
thanks :)
YaYaB
@YaYaB
:+1:
cchadowitz-pf
@cchadowitz-pf
:fireworks:
cchadowitz-pf
@cchadowitz-pf
@beniz any insight on whether the ci-master docker image is up to date with the v0.13.0 tag?
Mehdi ABAAKOUK
@sileht
We just released the 0.13.0 tag; docker images will be built tonight (CEST). ci-master was built last night and is one commit behind 0.13.0
cchadowitz-pf
@cchadowitz-pf
by any chance is this the commit that ci-master is missing? jolibrain/deepdetect@b85d79e
it's the one i was hoping for :sweat_smile:
Mehdi ABAAKOUK
@sileht
that's the one unfortunately
cchadowitz-pf
@cchadowitz-pf
ah well :smile: i'll look forward to the v0.13.0 and/or next ci-master builds! thanks!
cchadowitz-pf
@cchadowitz-pf
just testing the v0.13.0 release (i built a docker image locally) - ran into the error I described in #1151
I don't believe I had that error in v0.12.0 (or even in some of the ci-master builds between v0.12.0 and now), so I'm guessing something was introduced or changed in the NCNN master branch upstream?
Emmanuel Benazera
@beniz

Yeah, see the updated issue: the missing patch is not enough, there's something else in the way that will require deeper debugging.

As a side note, caffe OCR models cannot be converted to TRT. We tried for a long time and eventually gave up. OCR is scheduled for the torch backend in the coming weeks. This does not fully relate to the current issue, but I thought it'd be worth mentioning.

cchadowitz-pf
@cchadowitz-pf
:+1: I'll stay tuned re: #1151 then. I'm guessing that prior to this, OCR on NCNN was not used?
thanks for the heads up about caffe OCR on TRT, that's good to know.
as i'm sure you're aware, OCR is definitely one of the slower models at the moment, so anything that can improve that would be awesome on my end :grinning:
Emmanuel Benazera
@beniz
OCR used to work on ncnn.
It's slow because of the lstm; it'd be faster in torch and TRT. Nevertheless, it's on my R&D task list to look at vision transformers for OCR.
YaYaB
@YaYaB
Hey guys!
Quick question about detection using libtorch: I know it is not supported yet, but do you plan to make it available soon?
I know that how it is managed in pytorch is pretty awful (the types of inputs and outputs vary from one model to another, etc.), but it would be nice to have something available in DeepDetect, maybe with a custom input/output format for those models
Emmanuel Benazera
@beniz
hi, deformable DETR is on the agenda but that's R&D. Basically, what runs well with caffe will not be reimplemented soon, as we don't see any reason to in our own work. These models basically run on CUDNN and transfer to TRT and NCNN nicely. In libtorch we'll implement the next generation of object detection, and we've already started with image classification, as ViT and realformer are already in DD.
YaYaB
@YaYaB
Ok thanks!
Bhavik samcom
@Bhavik_samcom_gitlab

For the simsearch service I found that I can clear the indexed images using
curl -X DELETE "http://localhost:8080/services/simsearch?clear=index"

but is there any way to remove a single image after indexing and building for thousands of images?

P.S.: Indexing was performed through URLs and not with the physical images, so removing the image from a directory is not an option, I guess

Emmanuel Benazera
@beniz
@Bhavik_samcom_gitlab hi, this is a good question and we should make the answer clearer here: basically, similarity search indexes don't really support removal, that's a by-design/maths issue. FAISS kind of does, cf https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#removing-elements-from-an-index but in practice it is very inefficient, so DD has no direct support for it.
However, the way to do this properly is to store a list of removed elements within your application and filter them out of the results. Once you have too many removed elements, rebuild the index instead.
In the future we may support FAISS remove_ids, but it is so inefficient and dangerous that it may never happen, since it's much cleaner to rebuild the index after a while.
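A rough sketch of that application-side filtering, assuming a tombstone file of removed URIs; the JSON path .body.predictions[0].classes[].uri is an assumption about the simsearch response shape, so adjust it to your service's actual output:

# removed_uris.txt is the application's tombstone list, one URI per line
echo "http://example.com/deleted-image.jpg" >> removed_uris.txt

# query the index, extract the neighbour URIs, then drop tombstoned ones
curl -s -X POST "http://localhost:8080/predict" -d '{
  "service": "simsearch",
  "parameters": {"output": {"search_nn": 10}},
  "data": ["http://example.com/query-image.jpg"]
}' | jq -r '.body.predictions[0].classes[].uri' \
  | grep -v -x -F -f removed_uris.txt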
Bhavik samcom
@Bhavik_samcom_gitlab
@beniz Thanks much for your clear explanation.
YaYaB
@YaYaB
Hey guys, do you plan to support the new GPU architecture 'compute_86'?
Emmanuel Benazera
@beniz
You should be able to try it for yourself. We are waiting for our A6000; in the meantime, it's easy to test if you have an RTX 30xx
YaYaB
@YaYaB
I've already tried it ^^
#22 1602. CXX src/caffe/layers/lstm_layer.cpp
#22 1607. CXX src/caffe/layers/permute_layer.cpp
#22 1608. CXX src/caffe/layers/deconv_layer.cpp
#22 1612. CXX src/caffe/layers/recurrent_layer.cpp
#22 1613. CXX src/caffe/layers/base_conv_layer.cpp
#22 1615. CXX src/caffe/layers/tanh_layer.cpp
#22 1615. CXX src/caffe/layers/detection_output_layer.cpp
#22 1616. CXX src/caffe/layers/exp_layer.cpp
#22 1621. CXX src/caffe/layers/softmax_loss_layer.cpp
#22 1621. CXX src/caffe/layers/dense_image_data_layer.cpp
#22 1621. NVCC src/caffe/util/im2col.cu
#22 1621. Makefile:624: recipe for target '.build_release/cuda/src/caffe/util/im2col.o' failed
#22 1621. nvcc fatal   : Unsupported gpu architecture 'compute_86'
#22 1621. make[3]: *** [.build_release/cuda/src/caffe/util/im2col.o] Error 1
#22 1621. make[3]: *** Waiting for unfinished jobs....
#22 1642. src/caffe/layers/detection_output_layer.cpp: In member function 'void caffe::DetectionOutputLayer<Dtype>::Forward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
#22 1642. src/caffe/layers/detection_output_layer.cpp:348:10: warning: 'toplogit_data' may be used uninitialized in this function [-Wmaybe-uninitialized]
#22 1642.    Dtype* toplogit_data;
Emmanuel Benazera
@beniz
then you can open an issue, but right here it's your nvcc that seems not to be up to date.
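For reference, compute_86 (Ampere) requires a CUDA 11.1+ toolkit, so checking nvcc first narrows this down; a minimal sketch:

# sm_86 / compute_86 needs CUDA >= 11.1; check the toolkit version
nvcc --version
# then confirm the arch flag is accepted by compiling a trivial file
echo 'int main(){return 0;}' > /tmp/check.cu
nvcc -arch=sm_86 -c /tmp/check.cu -o /tmp/check.o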
YaYaB
@YaYaB
Argh, damn, I'm so blind, sorry... I've been working with too many different Dockerfiles and forgot to use the correct one. My bad... I'll open an issue if I encounter any
Emmanuel Benazera
@beniz
sure
cchadowitz-pf
@cchadowitz-pf
quick question - how does deepdetect utilize multiple gpus if a service is created with 'gpuid': -1 or 'gpuid': [0, 1, 2]? in the API docs it seems to indicate it'll 'select among multiple GPUs', but are the multiple gpus utilized as a single large block of GPU memory, randomly selected, or some other strategy? (I understand this may differ depending on the backend lib in use, and it seems like only caffe, torch, and caffe2 support gpuid)
or is the multiple gpuid really only used during training, and not inference?
Emmanuel Benazera
@beniz
hi, multi-GPU is training only. In inference it is left to the user to dispatch to multiple gpus via the client
cchadowitz-pf
@cchadowitz-pf
:+1: in inference, does DD pick the first available GPU or is the gpuid param used in /predict calls? (and by all backend libs or only the ones i listed above)?
Emmanuel Benazera
@beniz
it picks gpuid, or 0 by default
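To illustrate the client-side dispatch mentioned above, a minimal sketch that round-robins predict calls over two services assumed to have been created with gpuid 0 and 1; the service names det_gpu0/det_gpu1 and image paths are hypothetical:

#!/bin/sh
# round-robin predict calls across two single-GPU services
i=0
for img in /data/images/*.jpg; do
  if [ $((i % 2)) -eq 0 ]; then svc=det_gpu0; else svc=det_gpu1; fi
  curl -s -X POST "http://localhost:8080/predict" \
       -d "{\"service\":\"$svc\",\"data\":[\"$img\"]}" &
  i=$((i + 1))
done
wait   # let in-flight requests finish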