dgtlmoon
@dgtlmoon
I'm using my own start-up script by the way.. ahhh
Emmanuel Benazera
@beniz
I believe the docker containers are now tested by our CI so this should not happen. Can I reproduce?
dgtlmoon
@dgtlmoon
I think it's because I'm starting the container this way....
  deepdetect:
    image: jolibrain/deepdetect_cpu
    command: bash -c 'LD_LIBRARY_PATH=/opt/deepdetect/build/lib/; export LD_LIBRARY_PATH; ./dede -host 0.0.0.0 & sleep 3; /init.sh; wait;'
    container_name: tss_dd
    volumes:
      # curl doesn't exist there and we aren't root :(
      - ./deepdetect/curl:/usr/bin/curl
      - ./deepdetect/init.sh:/init.sh
      - ./deepdetect/models:/models
      - ./deepdetect/models-classifier:/models-classifier
      - ./deepdetect/models-tag-classifier:/models-tag-classifier

    expose:
      - 8080
    networks:
      - tssnet
    restart: always
but the weird thing is - this only started happening in the last couple of weeks
I added that LD_LIBRARY_PATH config just now - and everything is fine; without it, it won't start
My init.sh creates a few services from within the container, but thinking about it, I don't need to run that from inside the container.... could be my weird brain at work
Emmanuel Benazera
@beniz
sure, no worries - if you see something that would ease usage and that we should add on the dd side, let me know
dgtlmoon
@dgtlmoon
thanks :)
YaYaB
@YaYaB
:+1:
cchadowitz-pf
@cchadowitz-pf
:fireworks:
cchadowitz-pf
@cchadowitz-pf
@beniz any insight on if the ci-master docker image is up to date with the v0.13.0 tag?
Mehdi ABAAKOUK
@sileht
We just released the 0.13.0 tag; docker images will be built tonight (CEST). ci-master was built last night and is one commit behind 0.13.0
cchadowitz-pf
@cchadowitz-pf
by any chance is this the commit that ci-master is missing? jolibrain/deepdetect@b85d79e
it's the one i was hoping for :sweat_smile:
Mehdi ABAAKOUK
@sileht
that's the one unfortunately
cchadowitz-pf
@cchadowitz-pf
ah well :smile: i'll look forward to the v0.13.0 and/or next ci-master builds! thanks!
cchadowitz-pf
@cchadowitz-pf
just testing v0.13.0 release (i built a docker image locally) - ran into the error I described in #1151
I don't believe I had that error in v0.12.0 (or even some of the ci-master builds between v0.12.0 and now), so I'm guessing something was introduced or changed in the NCNN master branch upstream?
Emmanuel Benazera
@beniz

Yeah so see the updated issue, the missing patch is not enough, there's something else in the way, that will require deeper debug.

As a side note, caffe OCR models cannot be converted to TRT. We failed for a long time and eventually gave up. OCR is scheduled for the torch backend in the coming weeks. This does not fully relate to the current issue, but I thought it'd be worth mentioning.

cchadowitz-pf
@cchadowitz-pf
:+1: I'll stay tuned re: #1151 then. I'm guessing prior to this OCR on NCNN was not used?
thanks for the heads up about caffe OCR on TRT, that's good to know.
as i'm sure you're aware, OCR is definitely one of the slower models at the moment, so anything that can improve that would be awesome on my end :grinning:
Emmanuel Benazera
@beniz
OCR used to work on ncnn.
It's slow because of the lstm. It'd be faster in torch and TRT. Nevertheless it's on my R&D task list to look at vision transformer for OCR.
YaYaB
@YaYaB
Hey guys!
Quick question about detection using libtorch - I know it is not supported yet. Do you plan to make it available pretty soon?
I know that how it is managed in pytorch is pretty awful (type of input and type of output vary from one model to another, etc.) but it would be nice to have something available in DeepDetect, maybe with a custom input/output format for those models
Emmanuel Benazera
@beniz
hi, deformable DETR is on the agenda, but that's R&D. Basically, what runs well with caffe will not be reimplemented shortly, as we don't see any reason to in our own work. These models basically run CUDNN and transfer to TRT and NCNN nicely. In libtorch we'll implement the next gen of object detection, and we've already started with image classification, as ViT and realformer are already within DD.
YaYaB
@YaYaB
Ok thanks!
Bhavik samcom
@Bhavik_samcom_gitlab

For simsearch service I got that I can clear indexed images using
curl -X DELETE "http://localhost:8080/services/simsearch?clear=index"

but is there any way I can remove single image after indexing and building for thousands of images?

P.S.: Indexing was performed through URLs and not with the physical images, so removing the image from a directory is not an available option, I guess

Emmanuel Benazera
@beniz
@Bhavik_samcom_gitlab hi, this is a good question, and we should make the answer clearer here: basically, similarity search indexes don't really support removal; that's a design/maths issue. FAISS kinda does, cf https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#removing-elements-from-an-index but in practice it is very inefficient, so DD has no direct support for it.
However, the way to do this properly is to store a list of removed elements within your application and filter them out of the results. Once you have too many to remove, rebuild the index instead.
In the future we may support FAISS remove_ids, but it is so inefficient and dangerous that it may never happen, since it's much cleaner to rebuild the index after a while.
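The store-and-filter approach described above can be sketched on the client side. This is an illustrative sketch only; the hit shape (`uri`/`dist` dicts) and function names are assumptions, not the actual DeepDetect response format:

```python
# Client-side "deletion" for a similarity index that does not support
# removal: keep a set of removed ids and filter them out of every result.

def filter_removed(hits, removed_ids):
    """Drop hits whose uri is in the removed set, preserving order."""
    return [h for h in hits if h["uri"] not in removed_ids]

def should_rebuild(removed_ids, index_size, threshold=0.2):
    """Once too many ids are tombstoned, rebuild the index instead."""
    return len(removed_ids) / max(index_size, 1) > threshold
```

The threshold is arbitrary; the point is to bound the wasted work of filtering before triggering a full rebuild, as suggested above.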
Bhavik samcom
@Bhavik_samcom_gitlab
@beniz Thanks much for your clear explanation.
YaYaB
@YaYaB
Hey guys do you plan to support the usage of the new gpu architecture 'compute_86'?
Emmanuel Benazera
@beniz
You should be able to try for yourself. We are waiting for our A6000, in the meantime, it's easy to test if you have a RTX 30xx
YaYaB
@YaYaB
I've already tried it ^^
#22 1602. CXX src/caffe/layers/lstm_layer.cpp
#22 1607. CXX src/caffe/layers/permute_layer.cpp
#22 1608. CXX src/caffe/layers/deconv_layer.cpp
#22 1612. CXX src/caffe/layers/recurrent_layer.cpp
#22 1613. CXX src/caffe/layers/base_conv_layer.cpp
#22 1615. CXX src/caffe/layers/tanh_layer.cpp
#22 1615. CXX src/caffe/layers/detection_output_layer.cpp
#22 1616. CXX src/caffe/layers/exp_layer.cpp
#22 1621. CXX src/caffe/layers/softmax_loss_layer.cpp
#22 1621. CXX src/caffe/layers/dense_image_data_layer.cpp
#22 1621. NVCC src/caffe/util/im2col.cu
#22 1621. Makefile:624: recipe for target '.build_release/cuda/src/caffe/util/im2col.o' failed
#22 1621. nvcc fatal   : Unsupported gpu architecture 'compute_86'
#22 1621. make[3]: *** [.build_release/cuda/src/caffe/util/im2col.o] Error 1
#22 1621. make[3]: *** Waiting for unfinished jobs....
#22 1642. src/caffe/layers/detection_output_layer.cpp: In member function 'void caffe::DetectionOutputLayer<Dtype>::Forward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
#22 1642. src/caffe/layers/detection_output_layer.cpp:348:10: warning: 'toplogit_data' may be used uninitialized in this function [-Wmaybe-uninitialized]
#22 1642.    Dtype* toplogit_data;
Emmanuel Benazera
@beniz
then you can open an issue, but right here it's your nvcc that seems to not be up to date.
YaYaB
@YaYaB
Argh, I'm so blind, sorry... I've been working with too many different Dockerfiles and forgot to use the correct one. My bad... I'll open an issue if I encounter any
Emmanuel Benazera
@beniz
sure
cchadowitz-pf
@cchadowitz-pf
quick question - how does deepdetect utilize multiple gpus if a service is created with 'gpuid': -1 or 'gpuid': [0, 1, 2]? in the API docs it seems to indicate it'll 'select among multiple GPUs', but are the multiple gpus utilized as a single large block of GPU memory, or randomly selected, or some other strategy? (I understand this may differ depending on the backend lib in use, and it seems like only caffe, torch, and caffe2 support gpuid)
or is the multiple gpuid really only used during training, and not inference?
Emmanuel Benazera
@beniz
hi, multi gpu is training only. In inference, it is left to the user to dispatch to multiple gpus via the client
cchadowitz-pf
@cchadowitz-pf
:+1: in inference, does DD pick the first available GPU or is the gpuid param used in /predict calls? (and by all backend libs or only the ones i listed above)?
Emmanuel Benazera
@beniz
it picks gpuid or 0 as default
we're open to suggestions regarding the most useful options
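Client-side dispatch across GPUs, as described above, could be as simple as one DD service per GPU with /predict calls spread round-robin. A minimal sketch, assuming hypothetical service names (the `service`/`data` body fields follow the DeepDetect JSON API, but everything else is illustrative):

```python
import itertools

class GpuDispatcher:
    """Round-robin /predict payloads over services pinned to different GPUs."""

    def __init__(self, services):
        # e.g. ["clf_gpu0", "clf_gpu1"]: one service created per gpuid
        self._cycle = itertools.cycle(services)

    def predict_payload(self, data):
        """Build a /predict body targeting the next service in the cycle."""
        return {"service": next(self._cycle), "data": data}
```

Each payload would then be POSTed to /predict as usual; the load balancing lives entirely in the client, matching the "training only" multi-GPU behavior described above.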
cchadowitz-pf
@cchadowitz-pf
:+1: that sounds good, just trying to better understand how it works now to see how best to utilize it :) i think as-is it's probably perfectly fine, but will let you know if i have additional thoughts. thanks!
Bhavik samcom
@Bhavik_samcom_gitlab

Is there any way to pass the additional parameters with simsearch - indexing, like below
"data":[
"id": "<image_url>",
["id", "category", "<image_url>"]
...
]

I want to perform searches within a specific category, so this way it gets simplified, but I'm unable to figure out how to do so. Can you please suggest the possible ways here as well?

Emmanuel Benazera
@beniz
hi @Bhavik_samcom_gitlab the spirit is rather to store metadata outside of the similarity index. This is because, by design, we consider that DD is not concerned with structured storage. Let me know if this is not clear or if I misunderstood your question
Bhavik samcom
@Bhavik_samcom_gitlab
Hi @beniz Thanks for demonstrating the vision.
I understand. Is there not even an option to store just a simple id associated with the image (without other metadata)? I am asking because the only thing returned is the image URL, which is not necessarily unique or a key I can use in my database for querying.
Emmanuel Benazera
@beniz
yes, the id is stored of course and it's the URL; let me know @Bhavik_samcom_gitlab if you have issues getting it back from the API. So the idea is you control it via the URL, which acts as a UUID.
You can keep a matching table outside of DD between URLs and your internal identifiers.
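The matching table mentioned above can be as simple as a dictionary on the application side. A hypothetical sketch (class and field names are illustrative, not a DD API), which also covers the earlier category question by post-filtering results:

```python
class MetadataStore:
    """Map image URLs (the ids DD returns) to application-side records."""

    def __init__(self):
        self._by_url = {}

    def register(self, url, internal_id, category):
        """Record a URL -> (internal id, category) entry at indexing time."""
        self._by_url[url] = {"id": internal_id, "category": category}

    def resolve(self, url):
        """Look up the record for a URL returned by a similarity search."""
        return self._by_url.get(url)

    def filter_by_category(self, urls, category):
        """Post-filter search results down to one category."""
        return [u for u in urls
                if (m := self._by_url.get(u)) and m["category"] == category]
```

Searching by category this way means over-fetching results from DD and filtering client-side, which fits the "structured storage stays outside DD" design described above.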
YaYaB
@YaYaB
Hey DD's team :)
I am trying to play a bit with tensorrt.
Do you have somewhere the compatible models from caffe?
I already used googlenet, resnet18 and ssd, however it does not seem to work for:
  • resnext
    TensorRT does not support in-place operations on input tensors in a prototxt file.
    [2021-03-17 09:55:52.917] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
  • se_resnext
    [2021-03-17 09:42:18.603] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    could not parse layer type Axpy
    [2021-03-17 09:42:18.606] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
  • efficient model
    [2021-03-17 09:33:45.133] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    could not parse layer type Swish
    [2021-03-17 09:33:45.138] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
Emmanuel Benazera
@beniz
Hi @YaYaB, the caffe-to-trt parser may not support all layers. As for the in-place operations in the resnext architecture (without se_), it may not come from DD, right?
As a side note, Efficientnet is not efficient at all in practice; you'd better not use it.