Use the weights from this model instead of your current .caffemodel: https://www.deepdetect.com/models/detection_600/
Set the prob of the distort object to 0.5 and rgb to false.
Can train now with detection_600. Imma post my calls so people might find them in the future:
Create Service JSON:
{
"description":"generic image detection service",
"model":{
"repository":"/images/models/packages_detc",
"create_repository": true,
"templates":"../templates/caffe/",
"weight": "VGG_openimage_pretrain_ilsvrc_res_pred_openimage_detect_v2_SSD_openimage_pretrain_ilsvrc_res_pred_300x300_iter_1200000.caffemodel"
},
"mllib":"caffe",
"type":"supervised",
"parameters":{
"input":{
"connector":"image",
"width":300,
"height":300,
"db": true,
"bbox": true
},
"mllib":{
"template":"vgg_16",
"nclasses":602,
"finetuning":true
}
}
}
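For reference, a small Python sketch that builds this create-service payload and sanity-checks it before sending (the dict mirrors the call above; the key is "weights" per the DD API, and the repository paths are from my setup, adjust for yours):

```python
import json

# Create-service payload, mirroring the JSON call above.
service = {
    "description": "generic image detection service",
    "model": {
        "repository": "/images/models/packages_detc",
        "create_repository": True,
        "templates": "../templates/caffe/",
        "weights": "VGG_openimage_pretrain_ilsvrc_res_pred_openimage_detect_v2_SSD_openimage_pretrain_ilsvrc_res_pred_300x300_iter_1200000.caffemodel",
    },
    "mllib": "caffe",
    "type": "supervised",
    "parameters": {
        "input": {"connector": "image", "width": 300, "height": 300,
                  "db": True, "bbox": True},
        "mllib": {"template": "vgg_16", "nclasses": 602, "finetuning": True},
    },
}

# json.dumps raises if anything is not JSON-serializable, so this
# catches stray trailing commas / Python-isms before the PUT.
payload = json.dumps(service)
print(json.loads(payload)["parameters"]["mllib"]["nclasses"])
```

You can then PUT `payload` to `/services/packages_detc` on your dd server.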
Start training:
{
"service":"packages_detc",
"async":true,
"parameters":{
"input":{
"connector":"image",
"shuffle":true,
"height":300,
"width":300,
"db":true,
"rgb":false,
},
"mllib":{
"gpu":false,
"mirror":true,
"net":{
"batch_size":5,
"test_batch_size":5
},
"solver":{
"test_interval":250,
"iterations":1000,
"solver_type": "ADAM"
},
"noise":{"all_effects":true, "prob":0.001},
"distort":{"all_effects":true, "prob":0.5},
"bbox": true
},
"output":{
"measure":["map"],
}
},
"data":["/images/train/train.txt", "/images/train/train.txt"]
}
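And for completeness, how I launch and monitor it from the shell (assuming a local dd server on port 8080 and the training JSON above saved as train.json; these are the standard DD /train endpoints):

```shell
# Launch the async training job
curl -X POST "http://localhost:8080/train" -d @train.json

# Poll job status and measures (job ids start at 1)
curl "http://localhost:8080/train?service=packages_detc&job=1"

# Cancel the job if needed
curl -X DELETE "http://localhost:8080/train?service=packages_detc&job=1"
```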
You can also use the refinedet_512 template and download https://www.deepdetect.com/models/pretrained/refinedet_512/VOC0712_refinedet_vgg16_512x512_iter_120000.caffemodel to use them as weights.
You may need to adjust batch_size and iter_size depending on your GPU; you can set iter_size to 4 to compensate. You'll also need many more than 1000 iterations. Not sure what your data are, but 25000 is reasonable for a first run/try.
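The iter_size trick works because Caffe accumulates gradients over iter_size forward/backward passes before each solver update, so the update effectively sees batch_size * iter_size samples; quick arithmetic:

```python
# Caffe accumulates gradients for iter_size minibatches before each
# weight update, so the effective batch is batch_size * iter_size.
batch_size = 5   # what fits on a small GPU
iter_size = 4    # accumulation steps to compensate

effective_batch = batch_size * iter_size
print(effective_batch)  # 20, comparable to training with batch_size 20
```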
My image_32.txt files are built like:
<label> <xmin> <ymin> <xmax> <ymax>
601 22 32 673 756
601 because detection_600 had 600 classes in it and I wanted a new class.
train.txt lines are:
[docker volume folder]/train/image_3.jpg [docker volume folder]/train/image_3.txt
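In case it's useful, a small sketch of how I generate those train.txt lines by pairing each .jpg with its same-named bbox .txt file (the folder path is from my setup):

```python
import os

def make_train_list(folder):
    """One 'image_path label_path' line per image that has a bbox .txt file."""
    lines = []
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".jpg"):
            continue
        label_path = os.path.join(folder, os.path.splitext(name)[0] + ".txt")
        if os.path.exists(label_path):
            lines.append(f"{os.path.join(folder, name)} {label_path}")
    return lines

# Example usage:
# with open("train.txt", "w") as f:
#     f.write("\n".join(make_train_list("/images/train")))
```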
What do you mean wrong format?
hey @beniz - happy new year! long time since I've been looking through this stuff - tons of updates!! was just wondering if you're still building docker images with TF included. I'm trying to update our build process and I'm running into quite a number of issues with changes in DD affecting the build as well as changes in floopz/tensorflow_cc (and tensorflow itself).
Anyways, was just wondering if an automated build (even CPU-only, for now) is still happening with tensorflow that I can compare my build process to. Thanks in advance!