Emmanuel Benazera
@beniz
yes, the id is stored of course and it's the URL; let me know @Bhavik_samcom_gitlab if you have issues getting it back from the API. So the idea is you control it via the URL, which is a UUID.
You can keep a matching table outside of DD between URLs and your internal identifiers.
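A minimal sketch of that matching-table idea, kept outside of DD; the SQLite file name, table and column names below are illustrative assumptions, not anything DD provides:

```python
import sqlite3

# Map the UUID-based URL returned by the DD API to your own internal id.
# Everything here (file name, table, column names) is a hypothetical example.
conn = sqlite3.connect("dd_mapping.db")
conn.execute("""CREATE TABLE IF NOT EXISTS dd_items (
    dd_url TEXT PRIMARY KEY,   -- UUID URL on the DD side
    internal_id TEXT NOT NULL  -- your application's identifier
)""")

def remember(dd_url: str, internal_id: str) -> None:
    """Record the DD URL <-> internal id pair."""
    conn.execute("INSERT OR REPLACE INTO dd_items VALUES (?, ?)", (dd_url, internal_id))
    conn.commit()

def dd_url_for(internal_id: str):
    """Look up the DD URL for one of your internal ids."""
    row = conn.execute("SELECT dd_url FROM dd_items WHERE internal_id = ?",
                       (internal_id,)).fetchone()
    return row[0] if row else None
```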
YaYaB
@YaYaB
Hey DD's team :)
I am trying to play a bit with TensorRT.
Do you have a list somewhere of the Caffe models that are compatible?
I have already used googlenet, resnet18 and ssd; however, it does not seem to work for:
  • resnext
    TensorRT does not support in-place operations on input tensors in a prototxt file.
    [2021-03-17 09:55:52.917] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
  • se_resnext
    [2021-03-17 09:42:18.603] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    [2021-03-17 09:42:18.606] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    could not parse layer type Axpy
    [2021-03-17 09:42:18.606] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
  • EfficientNet model
    [2021-03-17 09:33:45.133] [imageserv_resnet] [info] Bias weights are not set yet. Bias weights can be set using setInput(2, bias_tensor) API call.
    could not parse layer type Swish
    [2021-03-17 09:33:45.138] [imageserv_resnet] [error] mllib internal error: Error while parsing caffe model for conversion to TensorRT
Emmanuel Benazera
@beniz
Hi @YaYaB, the Caffe-to-TRT parser may not support all layers. As for the in-place operations in the resnext architecture (without se_), that may not come from DD, right?
As a side note, EfficientNet is not efficient at all in practice; you'd better not use it.
YaYaB
@YaYaB
For resnext I saw some people able to use it with TRT from TensorFlow, so maybe the Caffe implementation in the proto is the problem. Well noted for EfficientNet!
Do you plan on integrating external TensorRT models, for instance from torch, etc.?
Emmanuel Benazera
@beniz
Torch to TRT is hard from C++; it is more or less easily done by loading the weights back with a Python script that describes an identical architecture, e.g. it's easy from torchvision.
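A rough sketch of that Python route, assuming the trained weights were saved as a state_dict for a torchvision resnet50 (the file name, class count and input size are made-up examples); the resulting ONNX file can then be handed to TensorRT (e.g. trtexec or the ONNX parser) to build an engine:

```python
import torch
import torchvision

# Rebuild the identical architecture in Python, then load the trained weights back.
# "model_weights.pt", num_classes=2 and the 224x224 input are example assumptions.
model = torchvision.models.resnet50(num_classes=2)
model.load_state_dict(torch.load("model_weights.pt", map_location="cpu"))
model.eval()

# Export to ONNX so TensorRT can build an engine from it afterwards.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx", opset_version=11,
                  input_names=["input"], output_names=["output"])
```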
YaYaB
@YaYaB
Ok thanks for the info, cheers!
Romain Guilmont
@rguilmont
Hello there :) Hope the whole DD team is doing great!
I'm starting on the Prometheus exporter (I'll soon give you the details @beniz, with a link to the repo/docker image). I just have a doubt about 2 metrics: total_transform_duration_ms and total_predict_duration_ms. Does the predict duration include the transform duration? It looks like it does.
Emmanuel Benazera
@beniz
hi @rguilmont yes it does.
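Since the predict duration includes the transform duration, the inference-only share can be derived by subtracting the two counters. A small sketch against the Prometheus HTTP API (the Prometheus address and the 5m window are assumptions):

```python
import requests

PROM = "http://localhost:9090/api/v1/query"  # assumed Prometheus address

def instant(query: str):
    """Run an instant PromQL query and return the first sample value, or None."""
    r = requests.get(PROM, params={"query": query}, timeout=5)
    r.raise_for_status()
    result = r.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else None

predict_ms = instant("rate(total_predict_duration_ms[5m])")
transform_ms = instant("rate(total_transform_duration_ms[5m])")
if predict_ms is not None and transform_ms is not None:
    # predict includes transform, so the difference is the pure inference time.
    print(f"inference-only: {predict_ms - transform_ms:.2f} ms per second")
```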
Romain Guilmont
@rguilmont
perfect, thanks!
Romain Guilmont
@rguilmont
image.png
Here's a draft of a Grafana dashboard that uses Prometheus metrics from the DeepDetect exporter
Emmanuel Benazera
@beniz
beautiful :)
Romain Guilmont
@rguilmont
Hey guys! I have noticed a memory leak (RAM, not GPU memory) on the latest DeepDetect 0.15. Before investigating further, is this something you're already aware of?
Emmanuel Benazera
@beniz
hello, probably not, you can explain it here or in an issue.
Romain Guilmont
@rguilmont
I'll open an issue. I tried to identify clearly which kind of requests causes the leak, but I haven't been able to yet.
Romain Guilmont
@rguilmont
jolibrain/deepdetect#1260 here's the issue. Unfortunately it's not perfect, but I hope it can help you pinpoint the problem.
tinco
@tinco:matrix.org
[m]
hi! I'm looking to set up a low-code system for training object detection models, and it seems DeepDetect might fit the bill
would it be easy to integrate new architectures that are currently not explicitly supported by DeepDetect? for example detectron2 on caffe2?
Emmanuel Benazera
@beniz
hi @tinco we used to support detectron2 with caffe2, it's deprecated now. If your task is object detection, DD comes with plenty of other battle-tested architectures.
tinco
@tinco:matrix.org
[m]
ah ok, I'm not super up to speed on what's the latest and greatest, I'm mostly hoping to enable our team to run experiments with different models themselves
Emmanuel Benazera
@beniz
DD has light models for simple problems/embedded/high fps applications, and larger models for more complicated problems.
tinco
@tinco:matrix.org
[m]
for the past couple years we've been working with a proprietary system that first worked with resnet and now yolov4, and our researcher is training it to segment buildings into components
Emmanuel Benazera
@beniz
so object detection + segmentation ?
tinco
@tinco:matrix.org
[m]
yeah, that's why detectron2 appealed to me, they've got this cover photo with really neat segmentation
Emmanuel Benazera
@beniz
what we'd do with DD is detection with refinedet_512, then apply a segmenter to every object, using a chain, but that's different from detectron2, which does both in a single pass.
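Conceptually, that two-stage chain looks like the loop below; detect() and segment() are hypothetical stand-ins for two DD services called in sequence, not the actual chain API:

```python
from PIL import Image

def detect(image):
    """Hypothetical call to a detection service; returns (xmin, ymin, xmax, ymax) boxes."""
    raise NotImplementedError

def segment(crop):
    """Hypothetical call to a segmentation service; returns a mask for one object crop."""
    raise NotImplementedError

image = Image.open("scene.jpg")
results = []
for box in detect(image):
    crop = image.crop(box)                # cut out each detected object
    results.append((box, segment(crop)))  # then segment it individually
```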
tinco
@tinco:matrix.org
[m]
ah right
so why did you deprecate detectron2, was it not used a lot or does it perform less well than the alternatives for most use cases?
Emmanuel Benazera
@beniz
because pytorch basically deprecated caffe2
tinco
@tinco:matrix.org
[m]
ahh alright, thanks for catching me up haha :D
Emmanuel Benazera
@beniz
Also, we don't see many semantic segmentation models with our customers. I believe this is due to labeling costs. We like to automate the labeling steps ;)
tinco
@tinco:matrix.org
[m]
we're paying students haha, and our customers are actually paying for the labeling already, we're trying to optimize the process
Emmanuel Benazera
@beniz
sure
tinco
@tinco:matrix.org
[m]
so what's your business model, do you sell consultancy around deepdetect, or is it a tool you use to implement machine learning for your clients?
Emmanuel Benazera
@beniz
yes, we are mostly a service company; we serve large corps mostly on complex problems, when there's no product on the shelf, or when there's not much literature on whether a problem can be solved with ML/DL/RL. DD is the tool that embeds everything we have solved, and it goes into production.
tinco
@tinco:matrix.org
[m]
very cool, thanks for sharing!
Emmanuel Benazera
@beniz
no worries. We've had several requests for yolo models recently, so they might make it into the framework soon.
As for semantic seg, the path for us will be through our torch C++ backend, here again depending on requests and usage.
you can open issues on github for feature requests
tinco
@tinco:matrix.org
[m]
hey, so I just noticed that yolov5 is in pytorch, does that mean the model could just be dropped into a model repository in DeepDetect, or will there be some code needed as well?
Emmanuel Benazera
@beniz
almost... we've looked at it recently, and the ultralytics repo has code that makes it a bit more tricky: typically there's a bbox filtering step that they put outside the model, which is weird, and that would need to be recoded, but that's a detail.
we've gotten the request several times now, so we'll try to have an answer for yolov5 :)
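For illustration, that bbox filtering step (score threshold plus NMS) would have to be re-implemented outside the exported model. A sketch assuming raw detections arrive as (N, 6) rows of [x1, y1, x2, y2, score, class], which is an assumed layout, not something taken from the yolov5 code:

```python
import torch
from torchvision.ops import nms

def filter_boxes(raw: torch.Tensor, score_thresh: float = 0.25, iou_thresh: float = 0.45):
    """Score cut followed by standard NMS; 'raw' layout is an assumed [x1,y1,x2,y2,score,cls]."""
    keep = raw[:, 4] > score_thresh        # drop low-confidence detections
    candidates = raw[keep]
    kept = nms(candidates[:, :4], candidates[:, 4], iou_thresh)  # torchvision's NMS
    return candidates[kept]
```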
tinco
@tinco:matrix.org
[m]
there's no python in deepdetect at all, is there? if there were, it would be a cool feature to have python-based plugins that you could use to preprocess/post-process data and add support for little things like that, though of course that's a never-ending story with native extensions and such
Emmanuel Benazera
@beniz
it's full C++ yes, there's a python client.
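The python client wraps the server's REST API, so it can also be called directly. A minimal sketch with requests (host, service name, model path and parameter values are illustrative assumptions, not a verified recipe):

```python
import requests

DD = "http://localhost:8080"  # assumed DeepDetect server address

# Create a service from an existing model repository (values are examples).
requests.put(f"{DD}/services/mydetector", json={
    "mllib": "caffe",
    "description": "object detection service",
    "type": "supervised",
    "model": {"repository": "/opt/models/mydetector"},
    "parameters": {"input": {"connector": "image"}, "mllib": {}, "output": {}},
}).raise_for_status()

# Run a prediction on one image and print the JSON response.
resp = requests.post(f"{DD}/predict", json={
    "service": "mydetector",
    "data": ["/path/to/image.jpg"],
    "parameters": {"output": {"bbox": True, "confidence_threshold": 0.3}},
})
print(resp.json())
```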
Ananya Chaturvedi
@ananyachat

Hi, I am having trouble following the instructions on the quickstart page of DeepDetect. I am using the "build from source (Ubuntu 18.04 LTS)" option.

At the step with the cmake command, after moving to the folder /deepdetect/build, I am getting the error "Building with tensorflow AND torch can't be build together". I am getting this error no matter which backend option I choose.

P.S.: I have a MacBook, so in order to use Linux on my laptop, I am using a virtual Linux instance created by my company for me.
Can someone please help me with this?
Screenshot 2021-05-05 at 2.44.15 PM.png