I still don't know where to define classes. I made a training run where the image_index.txt files include 1 as the label; I want to call class 1 "packages". The training went through, although the prediction response always returns "cat": 1 as the class, and the bboxes are way off, always starting at ymin: 0.0 and with a box width of 30 pixels, while my whole image is 2560 pixels wide. So one package is never 30 px wide.
Any idea where these problems might arise?
I certainly used too few pictures, with 125 training images and 15 test images.
I checked the image_index.txt file, and the bboxes in it are correct, though.
I am using the reduced SSD 300 model: VGG_ILSVRC_16_layers_fc_reduced.caffemodel
Are there empty models which I can train from scratch, and does that give better results?
Service Create:
{
"description":"generic image detection service",
"model":{
"repository":"/images/models/packages_detc",
"templates":"../templates/caffe/",
"weight": "VGG_ILSVRC_16_layers_fc_reduced.caffemodel"
},
"mllib":"caffe",
"type":"supervised",
"parameters":{
"input":{
"connector":"image",
"width":300,
"height":300,
"db": true,
"bbox": true
},
"mllib":{
"template":"ssd_300",
"nclasses":2,
"finetuning":true
}
}
}
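For reference, this is how such a create call can be sent to the DeepDetect API (a minimal sketch; host and port are assumptions, adjust them to your setup):

# assuming the DD server listens on localhost:8080 and the JSON above is saved as service_create.json
curl -X PUT "http://localhost:8080/services/packages_detc" -d @service_create.json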
Training:
{
"service":"packages_detc",
"async":true,
"parameters":{
"input":{
"connector":"image",
"test_split":0.1,
"shuffle":true,
"height":300,
"width":300,
"db":true,
"rgb":true,
},
"mllib":{
"gpu":false,
"mirror":true,
"net":{
"batch_size":5,
"test_batch_size":5
},
"solver":{
"test_interval":250,
"iterations":500,
"base_lr":0.01,
"solver_type": "RMSPROP"
},
"noise":{"all_effects":true, "prob":0.001},
"distort":{"all_effects":true, "prob":0.01},
"bbox": true
},
"output":{
"measure":["map"],
}
},
"data":["/images/train/bboxes/train.txt", "/images/test/bboxes/test.txt"]
}
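The training call itself goes to the /train resource, and since "async" is true the job can be polled afterwards (again a sketch, assuming localhost:8080 and the JSON above saved as train_call.json):

curl -X POST "http://localhost:8080/train" -d @train_call.json
# the POST response contains the job id; 1 below is just an assumption
curl -X GET "http://localhost:8080/train?service=packages_detc&job=1"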
- Drop test_split, since you are passing train.txt and test.txt already.
- Take the weights from this model instead of your current .caffemodel: https://www.deepdetect.com/models/detection_600/
- Set the prob of the distort object to 0.5.
- Set rgb to false.
I can train now with detection_600. I'm going to post my calls so people might find them in the future:
Create Service JSON:
{
"description":"generic image detection service",
"model":{
"repository":"/images/models/packages_detc",
"create_repository": true,
"templates":"../templates/caffe/",
"weight": "VGG_openimage_pretrain_ilsvrc_res_pred_openimage_detect_v2_SSD_openimage_pretrain_ilsvrc_res_pred_300x300_iter_1200000.caffemodel"
},
"mllib":"caffe",
"type":"supervised",
"parameters":{
"input":{
"connector":"image",
"width":300,
"height":300,
"db": true,
"bbox": true
},
"mllib":{
"template":"vgg_16",
"nclasses":602,
"finetuning":true
}
}
}
Start training:
{
"service":"packages_detc",
"async":true,
"parameters":{
"input":{
"connector":"image",
"shuffle":true,
"height":300,
"width":300,
"db":true,
"rgb":false,
},
"mllib":{
"gpu":false,
"mirror":true,
"net":{
"batch_size":5,
"test_batch_size":5
},
"solver":{
"test_interval":250,
"iterations":1000,
"solver_type": "ADAM"
},
"noise":{"all_effects":true, "prob":0.001},
"distort":{"all_effects":true, "prob":0.5},
"bbox": true
},
"output":{
"measure":["map"],
}
},
"data":["/images/train/train.txt", "/images/train/train.txt"]
}
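Once training is done, predictions with bboxes can be fetched like this (a sketch; the image path and the confidence threshold are placeholders to adapt):

curl -X POST "http://localhost:8080/predict" -d '{
"service":"packages_detc",
"parameters":{
"input":{"width":300,"height":300},
"output":{"bbox":true,"confidence_threshold":0.1}
},
"data":["/images/test/image_1.jpg"]
}'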
- Use the refinedet_512 template and download https://www.deepdetect.com/models/pretrained/refinedet_512/VOC0712_refinedet_vgg16_512x512_iter_120000.caffemodel to use as weights.
- Adjust batch_size and iter_size depending on your GPU; you can set iter_size to 4 to compensate for a small batch size. You'll need many more than 1000 iterations also, not sure what your data are, but 25000 is reasonable for a first run/try.
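Applied to your training call, that might look like this (a sketch; the exact values depend on your GPU memory):

"net":{
"batch_size":4,
"test_batch_size":4
},
"solver":{
"test_interval":1000,
"iterations":25000,
"iter_size":4,
"solver_type":"ADAM"
}

With iter_size at 4, gradients are accumulated over 4 mini-batches before each update, so the effective batch size is batch_size * 4.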
My image_32.txt files are built like:
<label> <xmin> <ymin> <xmax> <ymax>
601 22 32 673 756
601 because detection_600 had 600 classes in it and I wanted a new class.
train.txt lines are:
[docker volume folder]/train/image_3.jpg- [docker volume folder]/train/image_3.txt
What do you mean wrong format?
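For what it's worth, here is how the pairing file could be sanity-checked (a sketch, assuming the expected format is two space-separated paths per line):

# flag any train.txt line whose image or bbox file does not exist
while read -r img bbox; do
  [ -f "$img" ] && [ -f "$bbox" ] || echo "bad line: $img $bbox"
done < /images/train/train.txt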
hey @beniz - happy new year! It's been a long time since I've looked through this stuff - tons of updates!! I was just wondering if you're still building Docker images with TF included. I'm trying to update our build process, and I'm running into quite a number of issues with changes in DD affecting the build, as well as changes in floopz/tensorflow_cc (and TensorFlow itself).
Anyway, I was just wondering if an automated build (even CPU-only, for now) is still happening with TensorFlow that I can compare my build process to. Thanks in advance!