Romain Guilmont
@rguilmont
image.png
Here's a draft of a Grafana dashboard that uses Prometheus metrics from the deepdetect exporter
Emmanuel Benazera
@beniz
beautiful :)
Romain Guilmont
@rguilmont
Hey guys! I have noticed a memory leak (RAM, not GPU memory) on the latest 0.15 DeepDetect. Before investigating more, is it something you're already aware of?
Emmanuel Benazera
@beniz
hello, probably not, you can explain it here or in an issue.
Romain Guilmont
@rguilmont
I'll open an issue. I tried to identify clearly which kind of requests cause the leak, but I haven't been able to yet.
Romain Guilmont
@rguilmont
jolibrain/deepdetect#1260 here's the issue. Unfortunately it's not perfect, but I hope it can help you pinpoint the issue
tinco
@tinco:matrix.org
[m]
hi! I'm looking to set up a low-code system for training object detection models, and it seems DeepDetect might fit the bill
would it be easy to integrate new architectures that are currently not explicitly supported by deepdetect? for example detectron2 on caffe2?
Emmanuel Benazera
@beniz
hi @tinco we used to support detectron2 with caffe2, it's deprecated now. If your task is object detection, DD comes with plenty of other battle-tested architectures.
tinco
@tinco:matrix.org
[m]
ah ok, I'm not super up to speed on what's the latest and greatest, I'm mostly hoping to enable our team to run experiments with different models themselves
Emmanuel Benazera
@beniz
DD has light models for simple problems/embedded/high fps applications, and larger models for more complicated problems.
tinco
@tinco:matrix.org
[m]
for the past couple years we've been working with a proprietary system that first worked with resnet and now yolov4, and our researcher is training it to segment buildings into components
Emmanuel Benazera
@beniz
so object detection + segmentation ?
tinco
@tinco:matrix.org
[m]
yeah, that's why detectron2 appealed to me, they've got this cover photo with really neat segmentation
Emmanuel Benazera
@beniz
what we'd do with DD is detection with refinedet_512, then apply a segmenter to every object, using a chain, but that's different from detectron2, which does both in a single pass.
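A rough sketch of that detect-then-segment flow against the DeepDetect REST API, assuming a local server with two already-created services named "detect" and "segment"; the service names, file paths and exact response fields are assumptions to check against the API docs, and DD's chain endpoint can run the same logic server-side.

# Sketch: detection first, then a segmenter applied to every detected object.
# Assumes services "detect" and "segment" already exist on a local DD server;
# the JSON field names follow the usual /predict layout but should be verified.
import requests
from PIL import Image

DD = "http://localhost:8080"
IMG = "/path/to/image.jpg"

det = requests.post(DD + "/predict", json={
    "service": "detect",
    "parameters": {"output": {"bbox": True, "confidence_threshold": 0.3}},
    "data": [IMG],
}).json()

img = Image.open(IMG)
for pred in det["body"]["predictions"]:
    for i, obj in enumerate(pred.get("classes", [])):
        b = obj["bbox"]  # xmin/ymin/xmax/ymax; check coordinate orientation in the docs
        crop_path = "/tmp/obj_%d.png" % i
        img.crop((b["xmin"], b["ymin"], b["xmax"], b["ymax"])).save(crop_path)
        seg = requests.post(DD + "/predict", json={
            "service": "segment",
            "parameters": {"output": {}},
            "data": [crop_path],
        }).json()
        # seg["body"]["predictions"] then holds the per-object segmentation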
tinco
@tinco:matrix.org
[m]
ah right
so why did you deprecate detectron2, was it not used a lot or does it perform less well than the alternatives for most use cases?
Emmanuel Benazera
@beniz
because pytorch basically deprecated caffe2
tinco
@tinco:matrix.org
[m]
ahh alright, thanks for catching me up haha :D
Emmanuel Benazera
@beniz
Also, we don't see many semantic segmentation models with our customers. I believe this is due to labeling costs. We like to automate the labeling steps ;)
tinco
@tinco:matrix.org
[m]
we're paying students haha, and our customers are actually paying for the labeling already, we're trying to optimize the process
Emmanuel Benazera
@beniz
sure
tinco
@tinco:matrix.org
[m]
so what's your business model, do you sell consultancy around deepdetect, or is it a tool you use to implement machine learning at your clients?
Emmanuel Benazera
@beniz
yes, we are mostly a service company; we serve large corps mostly on complex problems, when there's no product on the shelves, or when there's not much literature on whether a problem can be solved with ML/DL/RL. DD is the tool that embeds everything we have solved, and that goes into production.
tinco
@tinco:matrix.org
[m]
very cool, thanks for sharing!
Emmanuel Benazera
@beniz
no worries, we've got several requests for yolo models recently, so they might make it into the framework soon.
As for semantic seg, the path for us will be through our torch C++ backend, here again depending on requests and usage.
you can open issues on github for feature requests
tinco
@tinco:matrix.org
[m]
hey, so I just noticed that yolov5 is in pytorch, does that mean the model could just be dropped into a model repository in DeepDetect, or will there be some code needed as well?
Emmanuel Benazera
@beniz
almost... we've looked at it recently, and the ultralytics repo has code that makes it a bit more tricky; typically there's a bbox filtering step that they put outside the model, which is weird, and that would need to be recoded, that's the main detail.
we've got the request several times now, so we'll try to have an answer to yolov5 :)
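A minimal sketch of what that export path could look like, assuming the ultralytics hub entry point and keeping the bbox filtering (NMS) step outside the exported graph; the module layout differs between yolov5 releases, so this is only an illustration.

# Sketch: trace a YOLOv5 model to TorchScript; confidence filtering and NMS stay outside.
# The hub entry point and the .model attribute are assumptions to verify per release.
import torch
from torchvision.ops import nms

wrapper = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
core = wrapper.model.eval()                    # raw network, without pre/post-processing
example = torch.zeros(1, 3, 640, 640)
traced = torch.jit.trace(core, example, strict=False)
traced.save("yolov5s_traced.pt")               # candidate artifact for a model repository

# The traced graph outputs raw candidate boxes; the caller still has to apply the
# filtering step the ultralytics code keeps in Python, e.g. something like:
#   keep = nms(boxes_xyxy, scores, iou_threshold=0.45)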
tinco
@tinco:matrix.org
[m]
there's no Python in DeepDetect at all, is there? If there was, it would be a cool feature to have Python-based plugins that you could use to preprocess/postprocess data and add support for little things like that, though of course that's a never-ending story with native extensions and such
Emmanuel Benazera
@beniz
it's full C++ yes, there's a python client.
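For completeness, the server can be exercised from Python over its REST API even without the dedicated client; the host and port below assume a default local setup.

# Sketch: ping a locally running DeepDetect server.
import requests
info = requests.get("http://localhost:8080/info").json()
print(info)   # server status and the list of created services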
Ananya Chaturvedi
@ananyachat

Hi, I am having trouble following the instructions on the quickstart page of DeepDetect. I am using the option "build from source (Ubuntu 18.04 LTS)".

At the step with the cmake command, after moving to the folder /deepdetect/build, I am getting an error that "Building with tensorflow AND torch can't be build together". I am getting this error no matter what backend option I choose.

P.S.: I have a MacBook, so in order to use Linux on my laptop, I am using a virtual Linux instance created by my company for me.
Can someone please help me with this?
Screenshot 2021-05-05 at 2.44.15 PM.png
above is the screenshot of the error message I am getting
Emmanuel Benazera
@beniz
Hi, can you share your exact cmake call and the log that follows please?
Ananya Chaturvedi
@ananyachat

Hi, this is the cmake call which gave the above error message:

cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DUSE_TF=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF

I also tried with the GPU computation:

cmake .. -DUSE_SIMSEARCH=ON -DUSE_CUDNN=ON -DUSE_TF=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF

Still getting the same error message.

Emmanuel Benazera
@beniz
I've just tested it and this line works fine for me. However, we discourage using our Tensorflow build. It's basically deprecated as everything has transitioned to Pytorch C++.
Ananya Chaturvedi
@ananyachat
I tried that too
Emmanuel Benazera
@beniz
then your problem is elsewhere
what's your cmake version? cmake --version
Ananya Chaturvedi
@ananyachat
it is 3.14
do you think it could be because I am using a linux instance on a macbook instead of an actual linux os?
Louis Jean
@Bycob
USE_TORCH, USE_TF, etc. are persistent between cmake calls. Did you try to switch them on the same command line, e.g.
cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DUSE_TF=OFF
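If switching the flags on the command line is not enough, one option, since the values are cached, is to wipe the CMake cache in the build directory and re-run cmake from scratch; the flag set below simply mirrors the line above and is only an assumed configuration.

rm -f CMakeCache.txt && rm -rf CMakeFiles
cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DUSE_TF=OFF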