Emmanuel Benazera
@beniz
Sure. It's because you are using it at server startup which is fine. Though the JSONApi class can be used from straight C++ for other things as well, without the HTTP server on top of it, so I guess it makes sense to not exit by default.
cchadowitz-pf
@cchadowitz-pf
Ohh I see. I thought the autostart was only ever just a command line startup parameter. didn't think about using it straight from C++ etc.
Emmanuel Benazera
@beniz
It's true that when starting ./dede --service-autostart, it'd make sense to fail start on any failure
cchadowitz-pf
@cchadowitz-pf
:+1: that's my use case :)
Emmanuel Benazera
@beniz
yeah, so I guess you are right, it could be default when using dede
as the http server I mean
cchadowitz-pf
@cchadowitz-pf
right
cchadowitz-pf
@cchadowitz-pf
yikes, I see what you mean about JDoc/rapidjson
thought I could do some simple JDoc compared to dd_create_201() or something
Emmanuel Benazera
@beniz
sure, it's not that difficult, just annoying
cchadowitz-pf
@cchadowitz-pf
opened a PR, happy to make tweaks if it's not in line with what you're looking for :)
Emmanuel Benazera
@beniz
all coming back from vacation, @cchadowitz-pf we'll have your PR pass the tests, then merge
cchadowitz-pf
@cchadowitz-pf
:+1: no worries, thanks!
rajeshreddy-T
@rajeshreddy-T
how should I call a local image file?
Emmanuel Benazera
@beniz
Hi, please send your API calls and the error.
rajeshreddy-T
@rajeshreddy-T
```
curl -X POST 'http://localhost:8080/predict' -d '{
  "service": "word_detect",
  "parameters": {
    "input": {},
    "output": {
      "confidence_threshold": 0,
      "ctc": true,
      "blank_label": 0
    },
    "mllib": {
      "gpu": true
    }
  },
  "data": [
    "C://Desktop/example.jpg"
  ]
}'
```
Emmanuel Benazera
@beniz
You need to use Unix paths; there are no builds or support for Windows.
Tell us if you are using Docker.
rajeshreddy-T
@rajeshreddy-T
ok
yes i am using docker
is it possible to call a local image file using docker?
Alexandre Girard
@alx
if you use only the dd server, with docker run command, you can add this option to the command line: -v c:/Users:/data
then place your file in C:/Users/your_file_path, and the path in the curl data parameter will be: /data/your_file_path
Hi @rajeshreddy-T , yes you can setup docker to use your local file, you just need to configure the volumes option in docker-compose.yml (are you using docker-compose?)
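For reference, the `volumes` option mentioned above might look roughly like this in `docker-compose.yml` (a sketch only; the service name is a placeholder, and the host path matches the `docker run -v` example in this conversation):

```yaml
services:
  deepdetect:
    image: jolibrain/deepdetect_cpu
    ports:
      - "8080:8080"
    volumes:
      - c:/Users:/data   # host path : container path, same mapping as -v c:/Users:/data
```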
rajeshreddy-T
@rajeshreddy-T
no i am not using docker compose
docker run -d -p 8080:8080 jolibrain/deepdetect_cpu
I am just running above command
Alexandre Girard
@alx
so you can run: docker run -d -p 8080:8080 -v c:/Users:/data jolibrain/deepdetect_cpu
rajeshreddy-T
@rajeshreddy-T
ok
Alexandre Girard
@alx
then place your example.jpg file in c:/Users/
rajeshreddy-T
@rajeshreddy-T
ok I got it
Alexandre Girard
@alx
and call the following curl:
```
curl -X POST 'http://localhost:8080/predict' -d '{
  "service": "word_detect",
  "parameters": {
    "input": {},
    "output": {
      "confidence_threshold": 0,
      "ctc": true,
      "blank_label": 0
    },
    "mllib": {
      "gpu": true
    }
  },
  "data": [
    "/data/example.jpg"
  ]
}'
```
rajeshreddy-T
@rajeshreddy-T
ok I installed DeepDetect on my server; how can I call a local image file from my client side
from another system
The DD client may post images through their base64 representation if those are not accessible to the DeepDetect server
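A hedged sketch of what posting a base64-encoded image could look like, based on the advice above. The service name `word_detect` and the endpoint come from this conversation; the sample file written here is only a stand-in for a real image, and the exact parameter set depends on your service.

```shell
# Stand-in image so the sketch is self-contained; use your real file instead.
printf 'fake-image-bytes' > example.jpg

# -w 0 keeps the output on a single line (GNU coreutils base64).
B64=$(base64 -w 0 example.jpg)

# Build the predict payload with the base64 string in the "data" array.
PAYLOAD='{"service":"word_detect","parameters":{"input":{},"output":{"confidence_threshold":0}},"data":["'"$B64"'"]}'
echo "$PAYLOAD"

# Then send it to the server, e.g.:
# curl -X POST 'http://localhost:8080/predict' -d "$PAYLOAD"
```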
rajeshreddy-T
@rajeshreddy-T
ok I got it, thanks Alex, you saved my time
Alexandre Girard
@alx
you're welcome :)
rajeshreddy-T
@rajeshreddy-T
thank you so much
Jakkapong Saksrisuwan
@jsaksris
Hello,
I have got the jetson nano working with the livedetect but only achieving 2-3 fps.
How can we optimize this further?
Emmanuel Benazera
@beniz
Hi @jsaksris this almost certainly means you are not using the GPU, or that you are using a large model
Jakkapong Saksrisuwan
@jsaksris
The model is VOC 21 classes using the ncnn lib as given by the livedetect example
I have verified that --gpu has been included in the call and that dede was compiled with -DUSE_TENSORRT.
I remember you mentioned some "unofficial" release; would that be something I could try?
Jakkapong Saksrisuwan
@jsaksris
sry the model is squeezenet_ssd_voc_ncnn_300x300
Emmanuel Benazera
@beniz
Jakkapong Saksrisuwan
@jsaksris
Screenshot from 2019-09-21 10-54-20.png
I've just copy-pasted from that. The only thing I can find is adding -DUSE_XGBOOST_GPU=ON
but was greeted with this.
Your assumption was correct: the GPU was not used for prediction
Jakkapong Saksrisuwan
@jsaksris
-DUSE_CUDNN=ON is enabled... and I thought this would be the one that enables the GPU
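For reference, the build flags discussed above would sit on a cmake invocation roughly like this. This is a sketch only: the exact flag set required varies with the DeepDetect version and target backend, and additional flags may be needed for a TensorRT build.

```
# From a DeepDetect build directory; flags taken from the conversation above.
# USE_CUDNN and USE_TENSORRT enable the GPU backends being discussed.
cmake .. -DUSE_CUDNN=ON -DUSE_TENSORRT=ON
make
```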