"IVF262144_HNSW32,PQ64"
maybe too brutal
@beniz going back to what's in the deepdetect docs... it's segfaulting.
I index with:
"output": { "index": true, "ondisk": true, "index_type": "IVF20,SQ8", "train_samples": 100, "nprobe": 64 }
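Roughly, the full indexing call then looks like this (same "test" service and pool5/7x7_s1 extract layer as in the search call further down; the data path is just a placeholder for my image folder):

# the data path below is a placeholder
curl -X POST "http://localhost:8080/predict" -d '{
  "service":"test",
  "parameters":{
    "input":{ "height": 224, "width": 224 },
    "mllib":{ "extract_layer":"pool5/7x7_s1" },
    "output":{ "index": true, "ondisk": true, "index_type": "IVF20,SQ8", "train_samples": 100, "nprobe": 64 }
  },
  "data":["/path/to/images/"]
}'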
Index looks good: 6000 images.
180K model/index.faiss
31M model/index_mmap.faiss
curl -X PUT "http://localhost:8080/services/test" -d '{
  "mllib":"caffe",
  "description":"similarity search service",
  "type":"unsupervised",
  "parameters":{
    "input":{
      "connector":"image",
      "height": 224,
      "width": 224
    },
    "mllib":{
      "nclasses":20
    }
  },
  "model":{
    "templates":"../templates/caffe/",
    "repository":"/var/www/xxx/web/files-tshirt/trainer/simsearch/model/",
    "weight": "model_iter_13500.caffemodel"
  }
}'
$ curl -X POST "http://localhost:8080/predict" -d '{
> "service":"test",
> "parameters":{
> "input":{ "height": 224, "width": 224 },
> "output":{ "search_nn": 10, "search": true },
> "mllib":{ "extract_layer":"pool5/7x7_s1" }
> },
> "data":["https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png"] }'
curl: (52) Empty reply from server
[2021-11-14 21:29:29.883] [test] [info] Using pre-trained weights from /var/www/xxx/web/files-tshirt/trainer/simsearch/model/model_iter_13500.caffemodel
[2021-11-14 21:29:30.211] [torchlib] [info] Attempting to upgrade batch norm layers using deprecated params: /var/www/xxx/web/files-tshirt/trainer/simsearch/model/model_iter_13500.caffemodel
[2021-11-14 21:29:30.211] [torchlib] [info] Successfully upgraded batch norm layers using deprecated params.
[2021-11-14 21:29:30.315] [test] [info] Net total flops=3858534272 / total params=26063936
[2021-11-14 21:29:30.315] [test] [info] detected network type is classification
[2021-11-14 21:29:30.315] [api] [info] HTTP/1.1 "PUT /services/test" <n/a> 201 551ms
open existing index db
[2021-11-14 21:30:08.347] [torchlib] [info] Opened lmdb /var/www/xxx/web/files-tshirt/trainer/simsearch/model//names.bin
bash: line 1: 7 Segmentation fault (core dumped) ./dede -host 0.0.0.0
jolibrain_cpu
GIT REF: heads/v0.19.0:1673a99ecc922e01dd7cc8845098291ef46a8902
COMPILE_FLAGS: USE_CAFFE2=OFF USE_TF=OFF USE_NCNN=ON USE_TORCH=OFF USE_HDF5=ON USE_CAFFE=ON USE_TENSORRT=OFF USE_TENSORRT_OSS=OFF USE_DLIB=OFF USE_CUDA_CV=OFF USE_SIMSEARCH=ON USE_ANNOY=OFF USE_FAISS=ON USE_COMMAND_LINE=ON USE_JSON_API=ON USE_HTTP_SERVER=OFF
DEPS_VERSION: OPENCV_VERSION=4.2.0 CUDA_VERSION= CUDNN_VERSION= TENSORRT_VERSION=
"train_samples": 10000,
should that be the TOTAL number of images you expect to add to the set, or just a smaller representative number that it trains/compares against?
Service create
"description":"generic image detection service",
"model":{
"repository":"/images/models/packages_detc",
"templates":"../templates/caffe/",
"weight": "SE-ResNet-50.caffemodel"
},
"mllib":"caffe",
"type":"supervised",
"parameters":{
"input":{
"connector":"image",
"width":224,
"height":224,
"db": true,
"bbox": true
},
"mllib":{
"template":"resnet_50",
"nclasses":3,
"finetuning":true
}
},
}
"service":"packages_detc",
"async":true,
"parameters":{
"input":{
"connector":"image",
"test_split":0.1,
"shuffle":true,
"height":224,
"width":224,
"db":true,
"rgb":true,
},
"mllib":{
"gpu":false,
"mirror":true,
"net":{
"batch_size":3,
"test_batch_size":3
},
"solver":{
"test_interval":500,
"iterations":1000,
"base_lr":0.001,
},
"noise":{"all_effects":true, "prob":0.001},
"distort":{"all_effects":true, "prob":0.01},
"bbox": true
},
"output":{
"measure":["acc","mcll","f1"],
}
},
"data":["/images/train/"]
}
Getting the error "auto batch size set to zero" but I don't get where it is set to zero.
Use the ssd templates and get a dataset in the proper format; see https://www.deepdetect.com/platform/docs/object-detection/, it has the format description.
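For the service creation, roughly something like this; it's only a sketch, the ssd_300 template name, the 300x300 input size and the nclasses = classes + 1 (background) convention are from memory, so double-check them on the docs page above:

# sketch only: ssd_300, 300x300 and nclasses=3+1 (background) are assumptions, check the docs
curl -X PUT "http://localhost:8080/services/packages_detc" -d '{
  "description":"generic image detection service",
  "mllib":"caffe",
  "type":"supervised",
  "parameters":{
    "input":{
      "connector":"image",
      "width":300,
      "height":300,
      "db":true,
      "bbox":true
    },
    "mllib":{
      "template":"ssd_300",
      "nclasses":4
    }
  },
  "model":{
    "templates":"../templates/caffe/",
    "repository":"/images/models/packages_detc"
  }
}'

The dataset itself, if I remember the format right, is a list file where each line is an image path followed by the path to its bbox file, and each bbox file line is a class id followed by xmin ymin xmax ymax; the docs page has the authoritative description.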