jassem123
@jassem123
Hello again @beniz, I found the documentation tab related to Jupyter notebooks lacking any procedure for adding services from a notebook. Is there another useful link? Thanks
Emmanuel Benazera
@beniz
Are you using the DD platform or the DD server?
jassem123
@jassem123
what do you recommend for a small environment such as mine?
Emmanuel Benazera
@beniz
DD server
you can use the Python client to create services
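The Python client wraps DD's JSON API, where creating a service is a PUT to the /services endpoint. A minimal sketch of that underlying call; the service name, repository path and parameter values here are placeholder assumptions, not taken from the docs:

```shell
# Hypothetical service-creation payload (names and paths are placeholders).
read -r -d '' SERVICE_JSON <<'EOF' || true
{
  "mllib": "caffe",
  "description": "demo classifier",
  "type": "supervised",
  "parameters": {
    "input": { "connector": "image" },
    "mllib": { "nclasses": 2 },
    "output": {}
  },
  "model": { "repository": "/opt/models/demo" }
}
EOF
# The actual call would be:
#   curl -X PUT "http://localhost:8080/services/demo" -d "$SERVICE_JSON"
# Here we only validate that the payload is well-formed JSON:
echo "$SERVICE_JSON" | python3 -m json.tool > /dev/null && echo "JSON OK"
```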
jassem123
@jassem123
Yes, the DD container is working
the arch is cpu_tf
jassem123
@jassem123
https://github.com/jolibrain/dd_widgets Are you referring to this?
@beniz
(not sure what @mention does, hope it notifies you, haha)
jassem123
@jassem123
Hello again sir
can I load OpenVINO pretrained public models into DD and serve them?
here's the link above *
Emmanuel Benazera
@beniz
Hi, some of them possibly, with modifications to the .prototxt files for the Caffe models
you may want to use OpenVINO directly though
jassem123
@jassem123
Thanks for the feedback
Is there any pretrained model for video classification that I can use for a quick demo?
jassem123
@jassem123
If not, how can I proceed with this step?
Emmanuel Benazera
@beniz
jassem123
@jassem123
let's say I have a video classification model in a Jupyter notebook; can it be served on DD's platform? The example on the /platform site covers image segmentation
Emmanuel Benazera
@beniz
it depends on your model format (tf, pytorch, ...) and its inputs/outputs
Tete Cochete
@tetecohete_gitlab

Hi everyone,

I'm trying to train my own OCR service based on my own set of word images, like the sample services words_mnist or multiword_ocr.

But the predictions the service returns are either meaningless or blank.

So I decided to start from scratch and train the sample service words_mnist with the set of words provided.

But the result is the same: predictions are either meaningless or blank.

Here are the requests:

Service creation

curl -X PUT "http://localhost:8080/services/word_mnist" -d '
{
  "mllib": "caffe",
  "description": "word_mnist",
  "type": "supervised",
  "parameters": {
    "input": {
      "connector": "image",
      "width": 128,
      "height": 80,
      "bw": false,
      "db": true,
      "ctc": true
    },
    "mllib": {
      "template": "crnn",
      "nclasses": 100,
      "rotate": false,
      "mirror": false,
      "scale": 1.0,
      "layers": [],
      "db": true,
      "noise": {
        "all_effects": true,
        "prob": 0.001
      },
      "distort": {
        "all_effects": true,
        "prob": 0.001
      },
      "gpu": true,
      "gpuid": 1
    },
    "output": {}
  },
  "model": {
    "templates": "../templates/caffe/",
    "repository": "/opt/models/words_mnist"
  }
}
'

Training request

curl -X POST "http://localhost:8080/train" '
{
  "service": "word_mnist",
  "async": true,
  "parameters": {
    "input": {
      "test_split": 0.0,
      "shuffle": true,
      "db": true
    },
    "mllib": {
      "gpu": true,
      "net": {
        "batch_size": 1,
        "test_batch_size": 1
      },
      "solver": {
        "test_initialization": false,
        "iterations": 10000,
        "test_interval": 1000,
        "snapshot": 1000,
        "base_lr": 0.0001,
        "solver_type": "AMSGRAD",
        "lookahead": true,
        "rectified": true,
        "iter_size": 1
      },
      "timesteps": 4
    },
    "output": {
      "measure": [
        "acc"
      ],
      "target_repository": ""
    }
  },
  "data": [
    "/opt/samples/word_mnist/train.txt",
    "/opt/samples/word_mnist/test.txt"
  ]
}
'

Does anyone know what I am doing wrong?

Guillaume Infantes
@fantes
@tetecohete_gitlab hi. Here are the small modifications I made to get your example working on my side:
there is a -d missing in the training request; it should read curl -X POST "http://localhost:8080/train" -d 'ALL_JSON_STUFF'
for some reason, batch_size: 1 and test_batch_size: 1 do not work on my side; setting them to 128 and 32 respectively, as in the docs, works fine.
I hope this helps
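Putting the two fixes above together, the corrected training request would look like the following sketch; only the -d flag and the batch sizes change, and the other fields are abbreviated from the original request:

```shell
# Corrected training request: note the -d before the JSON body and the
# documented batch sizes (128 train / 32 test). Fields abbreviated.
read -r -d '' TRAIN_JSON <<'EOF' || true
{
  "service": "word_mnist",
  "async": true,
  "parameters": {
    "input": { "test_split": 0.0, "shuffle": true, "db": true },
    "mllib": {
      "gpu": true,
      "net": { "batch_size": 128, "test_batch_size": 32 },
      "solver": { "iterations": 10000, "base_lr": 0.0001, "solver_type": "AMSGRAD" },
      "timesteps": 4
    },
    "output": { "measure": ["acc"] }
  },
  "data": ["/opt/samples/word_mnist/train.txt", "/opt/samples/word_mnist/test.txt"]
}
EOF
# The call as fantes describes it:
#   curl -X POST "http://localhost:8080/train" -d "$TRAIN_JSON"
# Here we only validate that the payload is well-formed JSON:
echo "$TRAIN_JSON" | python3 -m json.tool > /dev/null && echo "JSON OK"
```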
Tete Cochete
@tetecohete_gitlab

I forgot to paste the "-d" in the chat, but my problem is something else.

I got the training running successfully, but if I set the batch_size to 128 and test_batch_size to 32, the DD server stops working.

The server log reports this problem (segmentation fault):
[words_mnist] [info] detected network type is ctc
Segmentation fault

My test environment is a DD docker on Windows (6 CPUs, 18 GB RAM).

I'm guessing I'm running out of resources, probably memory.

Emmanuel Benazera
@beniz
Yes, looks like memory is exhausted. Try lowering the batch sizes
Tete Cochete
@tetecohete_gitlab

I have lowered it as much as I can, but it only seems to complete all the iterations if I set the batch size to one, or two at most.

Even setting the batch size to 1, the result of the predictions is meaningless; the response is either some strange characters or none at all.

As far as I understand, in this example the batch size should be the number of images processed each iteration. Therefore lowering the batch size should make training slower, but the result should be the same.

That is the issue I don't understand: lowering the batch size lets the training complete, but the resulting predictions are meaningless.
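One nuance to the reasoning above: with gradient descent, a smaller batch does not only slow training down, it also makes each gradient estimate noisier, so batch size 1 can genuinely degrade results. The original training request already contains Caffe's iter_size knob, which accumulates gradients over several forward/backward passes before updating, approximating a larger effective batch (batch_size × iter_size) within a small memory budget. A hypothetical fragment of the mllib parameters, assuming iter_size works this way in the crnn template:

```json
"mllib": {
  "net": { "batch_size": 1, "test_batch_size": 1 },
  "solver": { "iter_size": 32 }
}
```

Note this is a sketch, not a verified fix: layers such as batch norm still see only the small per-pass batch, so it may not fully match true batch-128 behavior.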

Evgeny Bazarov
@EBazarov
Hi @beniz, hope you are doing well! Can you please advise: is there some simple way to collect stats from DD on inference time, something like average response time, number of processed requests, etc.?
Emmanuel Benazera
@beniz
Hello, for now you have to do it outside, but it's on the roadmap to have it inside every service and served via the service info call. Can you list the typical metrics of interest to you? This could speed things up. We have some metrics on the list already, but the more information we collect, the better.
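Until that lands, one way to collect such stats from outside the server is curl's --write-out timings. A sketch, assuming a DD server at localhost:8080 (no request is actually sent here; the measurement loop is shown as comments and only the averaging step runs on sample values):

```shell
# Sketch: per-request latency from outside DD via curl timings.
DD_URL="http://localhost:8080/info"   # endpoint is an assumption
# Measurement loop (commented so this sketch runs without a server):
#   for i in $(seq 20); do
#     curl -s -o /dev/null -w '%{time_total}\n' "$DD_URL"
#   done > timings.txt
#   awk '{s+=$1} END {printf "requests=%d avg=%.3fs\n", NR, s/NR}' timings.txt
# Demonstrate the averaging step on sample timings:
printf '0.012\n0.010\n0.014\n' \
  | awk '{s+=$1} END {printf "requests=%d avg=%.3fs\n", NR, s/NR}'
```

This gives average response time and request count; percentiles would need sorting the timings file first.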
YaYaB
@YaYaB
Hey, I am having trouble with the tensorrt backend when displaying all possible classes for a prediction with a classification model. If I set "mllib.best" to anything other than 1 or null, all the predictions contain empty results. So I am only able to get the best class, which is problematic on my side. Have you noticed this as well?
Best
YaYaB.
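For reproduction, the kind of predict call being described would look like this sketch; the service name, image path and the exact placement of "best" follow the report above and are assumptions, not verified against the API docs:

```shell
# Hypothetical predict payload asking the tensorrt backend for top-5
# classes via "best" (all names/paths are placeholders).
read -r -d '' PREDICT_JSON <<'EOF' || true
{
  "service": "imgclassif",
  "parameters": {
    "mllib": { "best": 5 },
    "output": {}
  },
  "data": ["/path/to/image.jpg"]
}
EOF
# The actual call would be:
#   curl -X POST "http://localhost:8080/predict" -d "$PREDICT_JSON"
# Here we only validate that the payload is well-formed JSON:
echo "$PREDICT_JSON" | python3 -m json.tool > /dev/null && echo "JSON OK"
```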
Emmanuel Benazera
@beniz
Hello, no, but you can open an issue for reproducing.
YaYaB
@YaYaB
Ok I'll do that then. Thanks
JTorkhani
@JTorkhani
Hello everyone, I hope you are safe
I am running DD on docker
what is the path to the services? I need to persist them with a bind mount
Emmanuel Benazera
@beniz
Hi, not sure about your question, maybe you can give more details. We use an nginx docker in front of many DD dockers. The person most knowledgeable about this should be around this afternoon CET.
JTorkhani
@JTorkhani
Thank you for your fast reply
the container won't persist a newly added service (i.e. an image classifier)
if it's restarted
what is the folder that contains the created services?
(screenshot attached)
I took a tour inside the running container
I can't find any folder pointing at "services"
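One plausible approach, under the assumption that what needs to survive restarts is the model "repository" directory given at service creation (e.g. /opt/models in the earlier examples): bind-mount a host directory at that path. Host path and image tag below are assumptions; service definitions themselves may still need to be re-created via PUT after a restart:

```shell
# Hypothetical: keep DD model repositories on the host so trained
# weights and model files survive container restarts.
MODELS_HOST_DIR=/srv/deepdetect/models   # host path (assumption)
RUN_CMD="docker run -d -p 8080:8080 -v ${MODELS_HOST_DIR}:/opt/models jolibrain/deepdetect_cpu"
echo "$RUN_CMD"
```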
JTorkhani
@JTorkhani
You are awesome, I had the answer lying in front of me the whole morning
Thank you
YaYaB
@YaYaB
@beniz jolibrain/deepdetect#718 here is the issue about the distribution output