Emmanuel Benazera
@beniz
Protobuf+http is the easiest alternative yet. We are changing the http server to oatpp, and this will open the way to protobuf+http. OATPP is actually already built into DD, but we moved it back to secondary because of a very strange issue with the tbb Intel library that is used by OpenCV. A very annoying, rare bug that is slowing down our migration, and thus a series of improvements. That's the game I guess...
cchadowitz-pf
@cchadowitz-pf
oof. what was the issue between tbb and oatpp? i noticed the 'rollback'. i might start putting together some microservices at some point and am open to suggestions on any/all c++ libs that could be useful for this :)
Emmanuel Benazera
@beniz
We have traced it back to tbb: a detached thread id just vanishes, and we're trying various things now. I believe it's been shown that an OpenCV build with another threading lib does not lead to the crash. Very annoying issue. It seems tbb is bound to get into the C++ standard as an implementation, so we need to find either a fix or a workaround. OATPP is very well made and its devs are very attentive to every detail. Great lib from great people!
cchadowitz-pf
@cchadowitz-pf
:+1: :+1:
Bhavik samcom
@Bhavik_samcom_gitlab
How do I train my own CSV file to implement the cover service?
So, for that I have to train on my own CSV file. How can I do it?

I went through this link:
https://www.deepdetect.com/platform/docs/training-from-csv-data/

but it is still not clear to me.

Emmanuel Benazera
@beniz
Hi @Bhavik_samcom_gitlab, get the cover_service to work first; then, once you understand it, you can use your own CSV and adapt the API calls.
If you are using the DeepDetect server, the doc for CSV is https://www.deepdetect.com/server/docs/csv-training/
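For illustration, a minimal sketch of the two API calls from that doc (service creation with a CSV input connector, then training); the service name, repository path, label column, and network parameters below are placeholders, not values from this conversation:

# create a supervised service with a CSV input connector (names and paths are placeholders)
curl -X PUT "http://localhost:8080/services/mycsv" -d '{
  "mllib": "caffe",
  "description": "training from a CSV file",
  "type": "supervised",
  "parameters": {
    "input": {"connector": "csv", "label": "target"},
    "mllib": {"template": "mlp", "nclasses": 2, "layers": [50, 50], "activation": "relu"}
  },
  "model": {"repository": "/path/to/model_repo"}
}'

# train on your own CSV file (the label column here is assumed to be named "target")
curl -X POST "http://localhost:8080/train" -d '{
  "service": "mycsv",
  "async": true,
  "parameters": {
    "input": {"label": "target", "shuffle": true, "test_split": 0.15},
    "mllib": {"gpu": false, "solver": {"iterations": 1000, "test_interval": 100}},
    "output": {"measure": ["acc", "mcll"]}
  },
  "data": ["/path/to/your_train.csv"]
}'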
Bhavik samcom
@Bhavik_samcom_gitlab
@beniz, this service is working properly:
https://www.deepdetect.com/server/docs/csv-training/
@beniz In the above link a train.csv file is trained, but I want to use my own dataset from my own CSV file, so for that I have to train on my own CSV file.
Emmanuel Benazera
@beniz
sure, then use your own file and modify the API parameters.
YaYaB
@YaYaB
Hey guys, I tried to build an image of the new version of DD, v0.12.0, on a good machine. It took around 280 minutes. That sounds like a lot to me; do you see the same on your side?
Emmanuel Benazera
@beniz
hello, make sure you don't build with torch; otherwise it's usually not too long.
YaYaB
@YaYaB
ok, do you have an estimated time? It seems that faiss takes some time too.
Indeed, the 280-minute build used torch as a backend. If I do not build torch, only trt + caffe, I get 144 minutes (quite long :s)
Emmanuel Benazera
@beniz
1h05min with torch + ncnn + trt + xgboost + caffe + faiss -> with torch cached
change your CPU :)
we have cached versions of the torch build
you can tarball the pre-built torch as needed, same for caffe
YaYaB
@YaYaB
Haha, I'll ask for more resources :p
On my side it seems to be stuck for quite a long time on faiss, with the following output:
ptxas /tmp/tmpxft_00006a63_00000000-12_PQScanMultiPassNoPrecomputed.compute_50.ptx, line 111; warning : ld
...
...
ptxas /tmp/tmpxft_0000753a_00000000-5_PQScanMultiPassPrecomputed.compute_75.ptx, line 11479; warning : ld
Emmanuel Benazera
@beniz
you are probably building for too many architectures
they get detected by cmake, and you can also force them by hand for caffe with CUDA_ARCH; not sure about the pytorch builds, I can ask those who deal with it.
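For illustration, a hedged sketch of a build restricted to a single GPU architecture and a reduced set of backends. Only CUDA_ARCH is named in the discussion; the USE_* flag names and the exact gencode string are assumptions to check against the repository's CMakeLists and docs:

# sketch only: USE_* flag names and the gencode format are assumptions
mkdir -p build && cd build
cmake .. -DUSE_TORCH=OFF -DUSE_TENSORRT=ON -DUSE_NCNN=ON \
  -DCUDA_ARCH="-gencode arch=compute_75,code=sm_75"
make -j"$(nproc)"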
YaYaB
@YaYaB
Yeah I thought they were detected by cmake but it does not seem to be the case here
Emmanuel Benazera
@beniz
they are not passed to torch, so torch builds for multiple architectures
there'll be an internal card for passing native gpu arch
YaYaB
@YaYaB
Ok, good to know. I'll try setting CUDA_ARCH for caffe to see if that reduces the build time.
cchadowitz-pf
@cchadowitz-pf

hey @beniz - continuing to experiment with NCNN, I was trying to convert the word_detect_v2, crop action, multiword_ocr chain from Caffe to NCNN and ran into an internal error:

[2021-01-15 22:07:29.135] [ocr-d21cfe6c-2e8c-40c7-94df-b4740bcfb44a-0] [info] number of calls=3
[2021-01-15 22:07:29.135] [ocr-d21cfe6c-2e8c-40c7-94df-b4740bcfb44a-0] [info] [0] / executing predict on service word_detect_v2_ncnn
[2021-01-15 22:07:35.575] [ocr-d21cfe6c-2e8c-40c7-94df-b4740bcfb44a-0] [info] [1] / executing action crop
[2021-01-15 22:07:35.576] [api] [error] 10.10.10.32 "PUT /chain/ocr-d21cfe6c-2e8c-40c7-94df-b4740bcfb44a-0" 500 6441ms

and the returned error was:

{
  "status": {
    "code": 500,
    "dd_code": 1007,
    "dd_msg": "in get<T>()",
    "msg": "InternalError"
  }
}

Any idea what's going on?
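For context, a chain call of this shape (detection service, crop action, then OCR service) looks roughly like the sketch below; the service names follow the log above, but every parameter value is a placeholder rather than the actual call that triggered the error:

curl -X PUT "http://localhost:8080/chain/ocr-chain" -d '{
  "chain": {
    "calls": [
      {
        "service": "word_detect_v2_ncnn",
        "parameters": {"output": {"bbox": true, "confidence_threshold": 0.3}},
        "data": ["image.jpg"]
      },
      {"action": {"type": "crop"}},
      {
        "service": "multiword_ocr",
        "parameters": {"output": {"ctc": true}}
      }
    ]
  }
}'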

Emmanuel Benazera
@beniz
@cchadowitz-pf Hi, best if you can provide the full API call and image for reproducing. This looks like either an API parameter with wrong type or an internal value to the chain with wrong type.
Emmanuel Benazera
@beniz
@cchadowitz-pf OK, thanks for the report, got it, see PR #1137. There should be a 0.12.1 release next Wednesday anyway, as we have fixed a few things.
However, there's a remaining issue with NCNN: it only supports a batch size of 1, and thus on chains, downstream NCNN models can only process the first sample.
I'll add larger batch size support for NCNN, but their doc shows how to do this by simply using OpenMP to parallelize a for loop, meaning it's parallel on the CPU, not aggregated into batches at the GPU level.
Nevertheless, since NCNN is mostly meant to be used on CPU, this should not hurt too much. Maybe I'll force the number of threads to the number of local CPU cores.
Emmanuel Benazera
@beniz
OK, so there's now PR #1138 that adds support for batches to NCNN with image models. Chains do appear to work correctly for me. We're lacking proper tests on chains, but they should make it in soon.
cchadowitz-pf
@cchadowitz-pf
oh fantastic! (sorry for the delay, i was away for a bit)
I wasn't aware that NCNN was even utilizing the GPU in DeepDetect, since I thought it relies on Vulkan?
Emmanuel Benazera
@beniz
yes, it's Vulkan-based; I use it with the Apple Silicon M1 chip at the moment
cchadowitz-pf
@cchadowitz-pf
:+1: i didn't realize DeepDetect supported NCNN on gpu already though, cool!
Emmanuel Benazera
@beniz
that was my Xmas project, yes; we may port it to NVIDIA GPUs, but at least the Vulkan part is there (still a PR)
cchadowitz-pf
@cchadowitz-pf
:+1: it doesn't natively support CUDA, right?
Emmanuel Benazera
@beniz
not that I know of
but it runs on NVIDIA GPUs via Vulkan, I believe
dgtlmoon
@dgtlmoon
./dede: error while loading shared libraries: libprotobuf.so.3.11.4.0: cannot open shared object file: No such file or directory (from jolibrain/deepdetect_cpu)
not sure if it's me just yet
Emmanuel Benazera
@beniz
docker?
dgtlmoon
@dgtlmoon
dd@afe456179096:/opt/deepdetect/build/main$ ./dede
./dede: error while loading shared libraries: libprotobuf.so.3.11.4.0: cannot open shared object file: No such file or directory
inside the dede docker it is:
dd@afe456179096:/opt/deepdetect/build/main$ dpkg -l|grep libprotobuf
ii  libprotobuf10:amd64             3.0.0-9.1ubuntu1                    amd64        protocol buffers C++ library

dd@afe456179096:/opt/deepdetect/build/main$ cat /var/lib/dpkg/info/libprotobuf10\:amd64.list 
/usr/lib/x86_64-linux-gnu/libprotobuf.so.10.0.0
dgtlmoon
@dgtlmoon
dd@afe456179096:/opt/deepdetect/build/main$ find /|grep libprotobuf            
/opt/deepdetect/build/lib/libprotobuf.so
/opt/deepdetect/build/lib/libprotobuf.so.3.11.4.0
/opt/deepdetect/build/lib/libprotobuf-lite.so
/opt/deepdetect/build/lib/libprotobuf-lite.so.3.11.4.0
your ldpath config is broken perhaps
Emmanuel Benazera
@beniz
hi @dgtlmoon, protobuf is built internally to avoid conflicts, what docker are you using?
dgtlmoon
@dgtlmoon
good morning :) I'm using the current jolibrain/deepdetect_cpu, just ran a docker-compose pull
dd@afe456179096:/opt/deepdetect/build/main$ ldconfig -v 2>/dev/null | grep -v ^$'\t'
/usr/local/lib:
/lib/x86_64-linux-gnu:
/usr/lib/x86_64-linux-gnu:
/lib:
/usr/lib:
I'm using my own start-up script by the way.. ahhh
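For reference, a likely fix in a custom start-up script is to point the dynamic loader at DeepDetect's internally built libraries; the /opt/deepdetect/build/lib path comes from the find output above, and this is a sketch rather than the image's official entrypoint:

# make the bundled libprotobuf.so.3.11.4.0 visible to the loader before starting dede
export LD_LIBRARY_PATH=/opt/deepdetect/build/lib:${LD_LIBRARY_PATH}
cd /opt/deepdetect/build/main
./dede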