Emmanuel Benazera
@beniz
yes, we are mostly a service company. We serve large corps, mostly on complex problems, when there's no product on the shelf, or when there's not much literature on whether a problem can be solved with ML/DL/RL. DD is the tool that embeds everything we have solved, and that goes into production.
tinco
@tinco:matrix.org [m]
very cool, thanks for sharing!
Emmanuel Benazera
@beniz
no worries, we've got several requests for yolo models recently, so they might make it into the framework soon.
As for semantic seg, the path for us will be through our torch C++ backend, here again depending on requests and usage.
you can open issues on github for feature requests
tinco
@tinco:matrix.org [m]
hey, so I just noticed that yolov5 is in PyTorch. Does that mean the model could just be dropped through a model repository in DeepDetect, or will there be some code needed as well?
Emmanuel Benazera
@beniz
almost... we've looked at it recently, and the ultralytics repo has code that makes it a bit more tricky. Typically there's a bbox filtering step that they put outside the model, which is weird, and that would need to be recoded; that's the detail.
we've got the request several times now, so we'll try to have an answer to yolov5 :)
tinco
@tinco:matrix.org [m]
there's no Python in DeepDetect at all, is there? If there were, it would be a cool feature to have Python-based plugins that you could use to preprocess/postprocess data and add support for little things like that, though of course that's a never-ending story with native extensions and such
Emmanuel Benazera
@beniz
it's full C++, yes; there's a Python client.
Ananya Chaturvedi
@ananyachat

Hi, I am having trouble following the instructions on the quickstart page of DeepDetect. I am using the option "build from source (Ubuntu 18.04 LTS)".

At the cmake step, after moving to the folder /deepdetect/build, I am getting the error "Building with tensorflow AND torch can't be build together". I get this error no matter which backend option I choose.

P.S.: I have a MacBook, so in order to use Linux on my laptop, I am using a virtual Linux instance created by my company for me.
Can someone please help me with this?
Screenshot 2021-05-05 at 2.44.15 PM.png
above is the screenshot of the error message I am getting
Emmanuel Benazera
@beniz
Hi, can you share your exact cmake call and the log that follows, please?
Ananya Chaturvedi
@ananyachat

Hi, this is the cmake call which gave the above error message:

cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DUSE_TF=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF

I also tried with the GPU computation:

cmake .. -DUSE_SIMSEARCH=ON -DUSE_CUDNN=ON -DUSE_TF=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF

Still getting the same error message.

Emmanuel Benazera
@beniz
I've just tested it and this line works fine for me. However, we discourage using our TensorFlow build. It's basically deprecated, as everything has transitioned to PyTorch C++.
Ananya Chaturvedi
@ananyachat
I tried that too
Emmanuel Benazera
@beniz
then your problem is elsewhere
what's your cmake version ? cmake --version
Ananya Chaturvedi
@ananyachat
it is 3.14
do you think it could be because I'm using a Linux instance on a MacBook instead of an actual Linux OS?
Louis Jean
@Bycob
USE_TORCH, USE_TF etc. are persistent between cmake calls. Did you try to switch them on the same command line? E.g.
cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DUSE_TF=OFF
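To illustrate why the flags "stick": CMake writes every `-D` option into `build/CMakeCache.txt` and reads it back on later runs. The sketch below simulates that cache with a plain file so the mechanism is visible (the real file is produced by cmake itself; the paths here are illustrative):

```shell
# Simulated CMake cache: -D options persist in build/CMakeCache.txt
mkdir -p /tmp/dd_cache_demo && cd /tmp/dd_cache_demo
echo 'USE_TF:BOOL=ON' > CMakeCache.txt    # an earlier call passed -DUSE_TF=ON
grep '^USE_TF' CMakeCache.txt             # prints USE_TF:BOOL=ON — a later
                                          # call without -DUSE_TF still sees ON
echo 'USE_TF:BOOL=OFF' > CMakeCache.txt   # passing -DUSE_TF=OFF overrides it
grep '^USE_TF' CMakeCache.txt             # prints USE_TF:BOOL=OFF
cd / && rm -rf /tmp/dd_cache_demo         # wiping the build dir resets everything
```

This is why switching an option off on the same command line (or deleting the build directory) is needed; simply omitting `-DUSE_TF` does not reset it.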
Emmanuel Benazera
@beniz
or rm -rf of the build dir yes...
Louis Jean
@Bycob
Normally cmake displays a list of all the options, but that may not happen if there is an error during the configure step...
Something like this
-- RELEASE:               OFF
-- WARNING:               ON
-- USE_COMMAND_LINE:      ON
-- USE_JSON_API:          ON
-- USE_HTTP_SERVER:       ON
-- USE_HTTP_SERVER_OATPP: ON
-- BUILD_SPDLOG:          ON
-- BUILD_PROTOBUF:        ON
-- BUILD_TESTS:           ON
-- BUILD_TOOLS:           OFF
-- USE_CAFFE:             OFF
-- USE_CAFFE_CPU_ONLY:    OFF
-- USE_CAFFE_DEBUG:       OFF
-- USE_CAFFE2:            OFF
-- USE_CAFFE2_CPU_ONLY:   OFF
-- USE_TORCH:             ON
-- USE_TORCH_CPU_ONLY:    OFF
-- USE_SIMSEARCH:         OFF
-- USE_TF:                OFF
-- USE_TF_CPU_ONLY:       OFF
-- USE_NCNN:              OFF
-- USE_HDF5:              ON
-- USE_TENSORRT:          OFF
-- USE_DLIB:              OFF
-- USE_DLIB_CPU_ONLY:     OFF
-- USE_ANNOY:             OFF
-- USE_FAISS:             ON
-- USE_FAISS_CPU_ONLY:    OFF
-- USE_CUDNN:             OFF
-- USE_XGBOOST:           OFF
-- USE_XGBOOST_CPU_ONLY:  ON
-- USE_TSNE:              OFF
-- USE_BOOST_BACKTRACE:   ON
Ananya Chaturvedi
@ananyachat

USE_TORCH, USE_TF etc are persistent between cmake calls. Did you try to switch them on the same command line, e.g
cmake .. -DUSE_SIMSEARCH=ON -DUSE_CPU_ONLY=ON -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DUSE_TF=OFF

Okay, this suggestion by @BynaryCobweb gave a different error message: "Could NOT find CUDNN (missing: CUDNN_LIBRARY CUDNN_INCLUDE_DIR)". So maybe the problem is that I have not installed cuDNN.

I'll try doing that
thanks
Emmanuel Benazera
@beniz
you don't need CUDNN if you have no GPU
For a CPU-only build, this should work fine: cmake .. -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DBUILD_TESTS=OFF -DUSE_CPU_ONLY=ON -DUSE_HTTP_SERVER_OATPP=ON -DUSE_HTTP_SERVER=OFF -DBUILD_SPDLOG=ON
you may want to completely wipe your build/ dir and start a build from scratch.
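Putting the two suggestions together, a from-scratch CPU-only configure might look like this — a sketch only, assuming a source checkout in a directory named `deepdetect` and a Make-based generator; the cmake flags are the ones quoted above:

```shell
cd deepdetect
rm -rf build && mkdir build && cd build   # wipe any stale CMakeCache.txt first
cmake .. -DUSE_CAFFE=OFF -DUSE_TORCH=ON -DBUILD_TESTS=OFF -DUSE_CPU_ONLY=ON \
         -DUSE_HTTP_SERVER_OATPP=ON -DUSE_HTTP_SERVER=OFF -DBUILD_SPDLOG=ON
make -j"$(nproc)"                          # full build; can take a long while
```

Starting from an empty build directory guarantees no cached option from a previous attempt (such as `USE_TF=ON`) leaks into the new configuration.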
Ananya Chaturvedi
@ananyachat
oh okay
I'll try to start a build from scratch
thanks
It worked!
thanks a ton
Emmanuel Benazera
@beniz
you're welcome
Romain Guilmont
@rguilmont
Hello guys! Hope you're doing well :) Bad news: the memory leak we talked about weeks ago is still there... But good news: with @YaYaB we identified exactly the cause, here you have it :) jolibrain/deepdetect#1316
Emmanuel Benazera
@beniz
Hi @rguilmont sure thanks, someone will look into the gzip deflate. Maybe the code has changed while integrating the new oatpp layer.
This is vacation time, so this may take some time.
Romain Guilmont
@rguilmont
Enjoy your vacation ;)