cchadowitz-pf
@cchadowitz-pf
by any chance is this the commit that ci-master is missing? jolibrain/deepdetect@b85d79e
it's the one i was hoping for :sweat_smile:
Mehdi ABAAKOUK
@sileht
that's the one unfortunately
cchadowitz-pf
@cchadowitz-pf
ah well :smile: i'll look forward to the v0.13.0 and/or next ci-master builds! thanks!
cchadowitz-pf
@cchadowitz-pf
just testing the v0.13.0 release (i built a docker image locally) - ran into the error I described in #1151
I don't believe I had that error in v0.12.0 (or even in some of the ci-master builds between v0.12.0 and now), so I'm guessing something was introduced or changed in the NCNN master branch upstream?
Emmanuel Benazera
@beniz

Yeah, see the updated issue: the missing patch is not enough, there's something else in the way that will require deeper debugging.

As a side note, caffe OCR models cannot be converted to TRT. We tried for a long time and eventually gave up. OCR is scheduled for the torch backend in the coming weeks. This does not fully relate to the current issue, but I thought it'd be worth mentioning.

cchadowitz-pf
@cchadowitz-pf
:+1: I'll stay tuned re: #1151 then. I'm guessing prior to this OCR on NCNN was not used?
thanks for the heads up about caffe OCR on TRT, that's good to know.
as i'm sure you're aware, OCR is definitely one of the slower models at the moment, so anything that can improve that would be awesome on my end :grinning:
Emmanuel Benazera
@beniz
OCR used to work on ncnn.
It's slow because of the lstm. It'd be faster in torch and TRT. Nevertheless, it's on my R&D task list to look at vision transformers for OCR.
YaYaB
@YaYaB
Hey guys!
Quick question about detection using libtorch: I know it is not supported yet, but do you plan to make it available soon?
I know that how it is managed in pytorch is pretty awful (the types of input and output vary from one model to another, etc.), but it would be nice to have something available in DeepDetect, maybe with a custom input/output format for those models
Emmanuel Benazera
@beniz
salut, deformable DETR is on the agenda but that's R&D. Basically, what runs well with caffe will not be reimplemented soon, as we don't see any reason to in our own work. These models basically run on CUDNN and transfer to TRT and NCNN nicely. In libtorch we'll implement the next generation of object detection, and we've already started with image classification, as ViT and realformer are already within DD.
YaYaB
@YaYaB
Ok thanks!
Bhavik samcom
@Bhavik_samcom_gitlab

For simsearch service I got that I can clear indexed images using
curl -X DELETE "http://localhost:8080/services/simsearch?clear=index"

but is there any way I can remove a single image after indexing and building for thousands of images?

P.S.: Indexing was performed through URLs and not with the physical images, so removing the image from a directory is not an option, I guess

Emmanuel Benazera
@beniz
@Bhavik_samcom_gitlab hi, this is a good question, and we should make the answer clearer here: basically, similarity search indexes don't really support removal; that's a design/maths issue. FAISS kinda does, cf https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#removing-elements-from-an-index but in practice it is very inefficient, so DD has no direct support for it.
However, the proper way of doing this is to store a list of removed elements within your application and filter them out from the results. Once you have too many removed elements, rebuild the index instead.
In the future we may support FAISS remove_ids, but it is so inefficient and dangerous that it may never happen, since it's much cleaner to rebuild the index after a while.
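A minimal sketch of that client-side workaround (illustrative only; the function names, result fields, and rebuild threshold are assumptions, not part of the DD API): keep a tombstone set of removed URLs, filter them out of each simsearch result, and rebuild the index once the set grows large.

```python
# Client-side "removal" for a similarity index that does not support deletes:
# keep a tombstone set of removed URLs and filter results after each search.

REBUILD_THRESHOLD = 1000  # illustrative: rebuild the index past this many tombstones

removed_urls = set()

def remove_image(url):
    """Mark an indexed image as removed without touching the index."""
    removed_urls.add(url)

def filter_results(results):
    """Drop tombstoned entries from a list of search hits keyed by 'uri'."""
    return [r for r in results if r["uri"] not in removed_urls]

def needs_rebuild():
    """Signal that the index should be rebuilt from the surviving images."""
    return len(removed_urls) >= REBUILD_THRESHOLD

# usage
remove_image("http://example.com/img42.jpg")
hits = [{"uri": "http://example.com/img41.jpg", "dist": 0.1},
        {"uri": "http://example.com/img42.jpg", "dist": 0.2}]
print(filter_results(hits))
```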
Bhavik samcom
@Bhavik_samcom_gitlab
@beniz Thanks much for your clear explanation.
YaYaB
@YaYaB
Hey guys do you plan to support the usage of the new gpu architecture 'compute_86'?
Emmanuel Benazera
@beniz
You should be able to try for yourself. We are waiting for our A6000; in the meantime, it's easy to test if you have an RTX 30xx
YaYaB
@YaYaB
I've already tried it ^^
#22 1602. CXX src/caffe/layers/lstm_layer.cpp
#22 1607. CXX src/caffe/layers/permute_layer.cpp
#22 1608. CXX src/caffe/layers/deconv_layer.cpp
#22 1612. CXX src/caffe/layers/recurrent_layer.cpp
#22 1613. CXX src/caffe/layers/base_conv_layer.cpp
#22 1615. CXX src/caffe/layers/tanh_layer.cpp
#22 1615. CXX src/caffe/layers/detection_output_layer.cpp
#22 1616. CXX src/caffe/layers/exp_layer.cpp
#22 1621. CXX src/caffe/layers/softmax_loss_layer.cpp
#22 1621. CXX src/caffe/layers/dense_image_data_layer.cpp
#22 1621. NVCC src/caffe/util/im2col.cu
#22 1621. Makefile:624: recipe for target '.build_release/cuda/src/caffe/util/im2col.o' failed
#22 1621. nvcc fatal   : Unsupported gpu architecture 'compute_86'
#22 1621. make[3]: *** [.build_release/cuda/src/caffe/util/im2col.o] Error 1
#22 1621. make[3]: *** Waiting for unfinished jobs....
#22 1642. src/caffe/layers/detection_output_layer.cpp: In member function 'void caffe::DetectionOutputLayer<Dtype>::Forward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
#22 1642. src/caffe/layers/detection_output_layer.cpp:348:10: warning: 'toplogit_data' may be used uninitialized in this function [-Wmaybe-uninitialized]
#22 1642.    Dtype* toplogit_data;
Emmanuel Benazera
@beniz
then you can open an issue, but here it's your nvcc that seems not to be up to date.
YaYaB
@YaYaB
Argh, I'm so blind, sorry... I've been working with too many different Dockerfiles and forgot to use the correct one. My bad... I'll open an issue if I encounter any problems
Emmanuel Benazera
@beniz
sure
cchadowitz-pf
@cchadowitz-pf
quick question - how does deepdetect utilize multiple gpus if a service is created with 'gpuid': -1 or 'gpuid': [0, 1, 2]? The API docs seem to indicate it'll 'select among multiple GPUs', but are the multiple gpus utilized as a single large block of GPU memory, randomly selected, or some other strategy? (I understand this may differ depending on the backend lib in use, and it seems like only caffe, torch, and caffe2 support gpuid.)
or is the multiple-gpuid option really only used during training, and not inference?
Emmanuel Benazera
@beniz
hi, multi-gpu is training only. In inference it is left to the user to dispatch to multiple gpus via the client
cchadowitz-pf
@cchadowitz-pf
:+1: in inference, does DD pick the first available GPU, or is the gpuid param used in /predict calls? (And by all backend libs, or only the ones I listed above?)
Emmanuel Benazera
@beniz
it picks gpuid, or 0 as the default
we're open to suggestions regarding the most useful options
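For illustration, a predict payload pinning a GPU could look like the sketch below; the service name and image URL are hypothetical, and the exact placement of `gpuid` (here under `parameters.mllib`) should be double-checked against the DD API docs for your backend.

```python
import json

# Illustrative /predict payload pinning inference to GPU 1; whether gpuid is
# honored at predict time depends on the backend (check the DD API docs).
payload = {
    "service": "imageserv",            # hypothetical service name
    "parameters": {"mllib": {"gpuid": 1}},
    "data": ["http://example.com/cat.jpg"],
}
print(json.dumps(payload))
```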
cchadowitz-pf
@cchadowitz-pf
:+1: that sounds good, just trying to better understand how it works to see how to best utilize it :) i think as-is it's probably perfectly fine, but will let you know if i have additional thoughts. thanks!
Bhavik samcom
@Bhavik_samcom_gitlab

Is there any way to pass additional parameters with simsearch indexing, like below?
"data": [
    "id": "<image_url>",
    ["id", "category", "<image_url>"]
    ...
]

I want to search within a specific category, so this way it would get simplified, but I'm unable to figure out how to do so. Can you please suggest the possible ways here as well?

Emmanuel Benazera
@beniz
hi @Bhavik_samcom_gitlab, the spirit is rather to store metadata outside of the similarity index. This is because, by design, we consider that DD is not concerned with structured storage. Let me know if this is not clear or if I misunderstood your question
Bhavik samcom
@Bhavik_samcom_gitlab
Hi @beniz Thanks for demonstrating the vision.
I understand. Is there not even an option to store just a simple id associated with the image (without other metadata)? I am asking because the only thing returned is an image url, which is not necessarily unique or a key I can use in my database for querying.
Emmanuel Benazera
@beniz
yes, the id is stored of course and it's the URL; let me know @Bhavik_samcom_gitlab if you have issues getting it back from the API. So the idea is that you control it via the URL, which acts as a UUID.
You can keep a matching table outside of DD between URLs and your internal identifiers.
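A minimal sketch of such a matching table on the application side (field names and categories are illustrative; DD itself only returns the indexed URL):

```python
# Application-side matching table: DD only returns the indexed URL, so map each
# URL (used as a UUID) to your own identifier and metadata such as a category.

url_to_meta = {}  # url -> {"id": ..., "category": ...}

def index_image(url, internal_id, category):
    """Record metadata when the image is sent to DD for indexing."""
    url_to_meta[url] = {"id": internal_id, "category": category}

def enrich_and_filter(hits, category=None):
    """Attach metadata to search hits; optionally keep a single category."""
    out = []
    for h in hits:
        meta = url_to_meta.get(h["uri"], {})
        if category is None or meta.get("category") == category:
            out.append({**h, **meta})
    return out

# usage
index_image("http://example.com/shoe1.jpg", 101, "shoes")
index_image("http://example.com/bag1.jpg", 202, "bags")
hits = [{"uri": "http://example.com/shoe1.jpg", "dist": 0.05},
        {"uri": "http://example.com/bag1.jpg", "dist": 0.07}]
print(enrich_and_filter(hits, category="shoes"))
```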
YaYaB
@YaYaB
Hey DD's team :)
I am trying to play a bit with tensorrt.
Do you have somewhere a list of the compatible models from caffe?
I already used googlenet, resnet18 and ssd; however, it does not seem to work for:
  • resnext
    TensorRT does not support in-place operations on input tensors in a prototxt file.
    [2021-03-17 09:55:52.917] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
  • se_resnext
    [2021-03-17 09:42:18.603] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    could not parse layer type Axpy
    [2021-03-17 09:42:18.606] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
  • efficient model
    [2021-03-17 09:33:45.133] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    could not parse layer type Swish
    [2021-03-17 09:33:45.138] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
Emmanuel Benazera
@beniz
Hi @YaYaB, the caffe-to-trt parser may not support all layers. As for the in-place operations in the resnext architecture (without se_), it may not come from DD, right?
Efficientnet is not efficient at all in practice; as a side note, you'd better not use it.
YaYaB
@YaYaB
For resnext I saw some people able to use it with trt from tensorflow. Maybe the caffe implementation in the proto is problematic. Well noted for EfficientNet!
Do you plan on integrating external tensorrt models, for instance from torch, etc.?
Emmanuel Benazera
@beniz
torch to trt is hard from C++; it is more or less easily done by loading the weights back with a Python script that describes an identical architecture, e.g. easy from torchvision.
YaYaB
@YaYaB
Ok thanks for the info, cheers!
Romain Guilmont
@rguilmont
Hello there :) Hope all the DD team is doing great!
I'm starting on the prometheus exporter (I'll soon give you details @beniz, with a link to the repo/docker image). I just have a doubt about 2 metrics: total_transform_duration_ms and total_predict_duration_ms. Does the predict duration include the transform duration? It looks like it does.
Emmanuel Benazera
@beniz
hi @rguilmont yes it does.
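In other words, since total_predict_duration_ms includes total_transform_duration_ms, the model-only time can be recovered by subtraction; a tiny sketch with made-up counter values:

```python
# Derive pure model time from the two cumulative DD counters: predict duration
# includes transform duration, so the difference is the inference-only time.
total_predict_ms = 1500.0    # example counter values, not real measurements
total_transform_ms = 400.0

model_only_ms = total_predict_ms - total_transform_ms
transform_share = total_transform_ms / total_predict_ms
print(model_only_ms, round(transform_share, 2))
```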
Romain Guilmont
@rguilmont
perfect, thanks !
Romain Guilmont
@rguilmont
(image attachment: image.png)
Here's a draft of Grafana dashboard that uses prometheus metrics from deepdetect exporter
Emmanuel Benazera
@beniz
beautiful :)
Romain Guilmont
@rguilmont
Hey guys! I have noticed a memory leak (ram, not gpu memory) on the latest 0.15 DeepDetect. Before investigating more, is it something you're already aware of?
Emmanuel Benazera
@beniz
hello, probably not, you can explain it here or in an issue.
Romain Guilmont
@rguilmont
I'll open an issue; I tried to identify clearly which kinds of requests caused the leak but haven't been able to yet.
Romain Guilmont
@rguilmont
jolibrain/deepdetect#1260 here's the issue. Unfortunately it's not perfect, but I hope it can help you pinpoint the problem