dgtlmoon
@dgtlmoon
max(ninvertedlist/50,2) what does invertedlist mean in this case?
tasibalint
@tasibalint
image.png
Anyone have an idea what this could mean?
Sorry, it's possible that my class 1 has 36 train images and the other has 44, and I use a batch size of 5; the test images are 6 for each class. I am going to fix that first.
Emmanuel Benazera
@beniz
@tasibalint this message means that the mean_value file is wrong somehow; not sure what you did exactly. Mind sharing the API calls / steps you are using?
Emmanuel Benazera
@beniz
or are your images b&w?
tasibalint
@tasibalint
I have done the cats_dogs tutorial, and now out of desperation I started the cats_dogs training with my images, and the training is running, so apparently the .caffemodel I was using wasn't compatible or something.
I was not using the model from:
"init": "https://deepdetect.com/models/init/desktop/images/classification/ilsvrc_googlenet.tar.gz",
But from:
https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
Same name, same size, but different D: anyway, I am training now :D
Emmanuel Benazera
@beniz
yeah, you need to use ours :)
tasibalint
@tasibalint
Hey people, how do you train a bbox detection model, and where do you get the model from?
Emmanuel Benazera
@beniz
@tasibalint maybe too broad of a question... you can train via API or platform. What are you trying to achieve ?
tasibalint
@tasibalint
I want to train via API. I have created XML files with bounding boxes for each picture with labelImg. But I don't understand how I tell the API to use those XML files.

Service create:

    {
        "description": "generic image detection service",
        "model": {
            "repository": "/images/models/packages_detc",
            "templates": "../templates/caffe/",
            "weight": "SE-ResNet-50.caffemodel"
        },
        "mllib": "caffe",
        "type": "supervised",
        "parameters": {
            "input": {
                "connector": "image",
                "width": 224,
                "height": 224,
                "db": true,
                "bbox": true
            },
            "mllib": {
                "template": "resnet_50",
                "nclasses": 3,
                "finetuning": true
            }
        }
    }

Training:

    {
        "service": "packages_detc",
        "async": true,
        "parameters": {
            "input": {
                "connector": "image",
                "test_split": 0.1,
                "shuffle": true,
                "height": 224,
                "width": 224,
                "db": true,
                "rgb": true
            },
            "mllib": {
                "gpu": false,
                "mirror": true,
                "net": {
                    "batch_size": 3,
                    "test_batch_size": 3
                },
                "solver": {
                    "test_interval": 500,
                    "iterations": 1000,
                    "base_lr": 0.001
                },
                "noise": {"all_effects": true, "prob": 0.001},
                "distort": {"all_effects": true, "prob": 0.01},
                "bbox": true
            },
            "output": {
                "measure": ["acc", "mcll", "f1"]
            }
        },
        "data": ["/images/train/"]
    }
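When hand-editing payloads like these, a stray trailing comma or missing brace is enough for the server's JSON parser to reject the call. A minimal pre-flight check (plain Python, no DeepDetect dependency) can report the exact position of the problem before you send the request:

```python
import json

def validate_payload(payload: str) -> dict:
    """Parse a JSON API payload, raising a readable error on malformed input.

    Strict JSON forbids trailing commas, a frequent copy-paste mistake
    in hand-written service/train calls.
    """
    try:
        return json.loads(payload)
    except json.JSONDecodeError as e:
        raise ValueError(
            f"bad payload at line {e.lineno}, column {e.colno}: {e.msg}"
        ) from e

# A well-formed fragment parses fine...
validate_payload('{"parameters": {"input": {"db": true}}}')

# ...while a trailing comma is reported with its position.
try:
    validate_payload('{"measure": ["map"],}')
except ValueError as e:
    print(e)
```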

Getting the error "auto batch size set to zero:", but I don't get where it is set to zero.

image.png
On the platform object detection guide I see this.
Is there a tool to convert the XML files into whatever format this txt is, without doing it manually?
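labelImg writes Pascal VOC XML, and the per-image txt format discussed later in this thread is one `<label> <xmin> <ymin> <xmax> <ymax>` line per box. A sketch of such a converter, assuming the standard VOC layout and a `class_to_id` name-to-integer mapping you maintain yourself:

```python
# Sketch only: assumes labelImg's Pascal VOC XML layout and the
# "<label> <xmin> <ymin> <xmax> <ymax>" per-line txt format from this
# thread; class_to_id is your own class-name -> integer mapping.
import xml.etree.ElementTree as ET

def voc_to_bbox_lines(xml_text: str, class_to_id: dict) -> list:
    """Convert one VOC annotation document to DeepDetect-style bbox lines."""
    root = ET.fromstring(xml_text)
    lines = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        coords = (int(float(box.findtext(tag)))
                  for tag in ("xmin", "ymin", "xmax", "ymax"))
        lines.append(" ".join(str(v) for v in (class_to_id[name], *coords)))
    return lines
```

Running it over every .xml next to your images and writing the result to a matching .txt would replace the manual step.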
Emmanuel Benazera
@beniz
this is not a template for object detection, you'd need to use one of the SSD templates and get a dataset in the proper format, see https://www.deepdetect.com/platform/docs/object-detection/ which has the format description
tasibalint
@tasibalint
thank you, I am still unsure what the difference between mllib, model and templates is.
Any idea how I can see what's in the:
"templates":"../templates/caffe/",
folder, or whether there are any other templates than caffe? Is this where docker is installed, and if yes, is there a generic path where docker is installed at?
tasibalint
@tasibalint
where do I define the classes for the ssd_300 model? I have <label> <xmin> <ymin> <xmax> <ymax>, where label is a number, but how do I define that the number is a class? For classification models there is the corresp.txt, but for detection models?
Emmanuel Benazera
@beniz
same corresp.txt: you can write it down and put it into the model directory. Only useful at inference though.
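Assuming it uses the same `<number> <name>` per-line layout as the classification corresp.txt, and given that class 0 is the background class for detection (as noted later in the thread), a file for a single "packages" class might look like:

    0 background
    1 packages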
dgtlmoon
@dgtlmoon
@tasibalint it should be in the tutorial there, I had no problems following it recently
but I'm just using an integer and keeping my own map; this way it's worked for me
@beniz ever done some t-SNE map visualisations of the FAISS index or similar?
Emmanuel Benazera
@beniz
Hi, no.
tasibalint
@tasibalint

I still don't know where to define classes. I made a training run where the image_index.txt files include 1 as the label; I want to call 1 "packages".
The training went through, although the prediction response transmits "cat": 1 back as the class all the time, and the bboxes are way off, always starting at ymin: 0.0 and with a box width of 30 pixels, while my whole image is 2560 pixels. So one package is never 30 px wide.

Any idea where these problems might arise?
I certainly used too few pictures, with 125 training images and 15 test images.
I checked the image_index.txt file and the bboxes are correct though.
I am using the SSD 300 reduced: VGG_ILSVRC_16_layers_fc_reduced.caffemodel
Are there empty models which I can train from scratch? Does that bring better results?

Emmanuel Benazera
@beniz
Hi, if you post your API calls, it should be easy to fix them here
tasibalint
@tasibalint

Service Create:

    {
        "description": "generic image detection service",
        "model": {
            "repository": "/images/models/packages_detc",
            "templates": "../templates/caffe/",
            "weight": "VGG_ILSVRC_16_layers_fc_reduced.caffemodel"
        },
        "mllib": "caffe",
        "type": "supervised",
        "parameters": {
            "input": {
                "connector": "image",
                "width": 300,
                "height": 300,
                "db": true,
                "bbox": true
            },
            "mllib": {
                "template": "ssd_300",
                "nclasses": 2,
                "finetuning": true
            }
        }
    }

Training:

    {
        "service": "packages_detc",
        "async": true,
        "parameters": {
            "input": {
                "connector": "image",
                "test_split": 0.1,
                "shuffle": true,
                "height": 300,
                "width": 300,
                "db": true,
                "rgb": true
            },
            "mllib": {
                "gpu": false,
                "mirror": true,
                "net": {
                    "batch_size": 5,
                    "test_batch_size": 5
                },
                "solver": {
                    "test_interval": 250,
                    "iterations": 500,
                    "base_lr": 0.01,
                    "solver_type": "RMSPROP"
                },
                "noise": {"all_effects": true, "prob": 0.001},
                "distort": {"all_effects": true, "prob": 0.01},
                "bbox": true
            },
            "output": {
                "measure": ["map"]
            }
        },
        "data": ["/images/train/bboxes/train.txt", "/images/test/bboxes/test.txt"]
    }
Emmanuel Benazera
@beniz
Thanks, hard to tell: are your bbox coordinates correct in the first place? (I recommend visualising them.) We usually do this via the platform, and it's automated.
Also, how many classes do you have? 2 means background + a single class.
Another thing: you don't need test_split since you are passing train.txt and test.txt already.
tasibalint
@tasibalint
image.png
thanks for the information, the coordinates should be fine, I tested some of them. But how do you test them on the platform? I couldn't find this UI from the guide
Emmanuel Benazera
@beniz
Hello @tasibalint, this requires installing the platform. If your bboxes are fine, stay with your scripts, and try using the weights from this model instead of your current .caffemodel: https://www.deepdetect.com/models/detection_600/
also, put the prob of the distort object to 0.5
and put rgb to false
tasibalint
@tasibalint

I can train now with detection_600. I'm going to post my calls so people might find them in the future:
Create Service JSON:

    {
        "description": "generic image detection service",
        "model": {
            "repository": "/images/models/packages_detc",
            "create_repository": true,
            "templates": "../templates/caffe/",
            "weight": "VGG_openimage_pretrain_ilsvrc_res_pred_openimage_detect_v2_SSD_openimage_pretrain_ilsvrc_res_pred_300x300_iter_1200000.caffemodel"
        },
        "mllib": "caffe",
        "type": "supervised",
        "parameters": {
            "input": {
                "connector": "image",
                "width": 300,
                "height": 300,
                "db": true,
                "bbox": true
            },
            "mllib": {
                "template": "vgg_16",
                "nclasses": 602,
                "finetuning": true
            }
        }
    }

Start training:

    {
        "service": "packages_detc",
        "async": true,
        "parameters": {
            "input": {
                "connector": "image",
                "shuffle": true,
                "height": 300,
                "width": 300,
                "db": true,
                "rgb": false
            },
            "mllib": {
                "gpu": false,
                "mirror": true,
                "net": {
                    "batch_size": 5,
                    "test_batch_size": 5
                },
                "solver": {
                    "test_interval": 250,
                    "iterations": 1000,
                    "solver_type": "ADAM"
                },
                "noise": {"all_effects": true, "prob": 0.001},
                "distort": {"all_effects": true, "prob": 0.5},
                "bbox": true
            },
            "output": {
                "measure": ["map"]
            }
        },
        "data": ["/images/train/train.txt", "/images/train/train.txt"]
    }
image.png
Ouch, something is still not working :/ The training started running but failed here.
tasibalint
@tasibalint
The problem is that I used the vgg_16 template because the weight file starts with vgg, but vgg_16 is for classification and I want object detection. There are no VGG object detection templates for DeepDetect, right?
tasibalint
@tasibalint
I'm going to use this for now, let's see: 'refinedet_512', "Images Convolutional network for object detection with VGG-16 base"
image.png
I don't know anymore. I am going to wait until I get a response from you guys on what template I could use in order to train the detection_600 weights.
Emmanuel Benazera
@beniz
@tasibalint yes this works too, use refinedet_512 and download https://www.deepdetect.com/models/pretrained/refinedet_512/VOC0712_refinedet_vgg16_512x512_iter_120000.caffemodel to use them as weights
you may need to lower batch_size and iter_size depending on your GPU. You can set iter_size to 4 to compensate. You'll need many more than 1000 iterations also, not sure what your data are, but 25000 is reasonable for a first run/try.
tasibalint
@tasibalint
I am using cpu only currently
Emmanuel Benazera
@beniz
try it... it will be up to 50x slower
dgtlmoon
@dgtlmoon
@tasibalint you should write some app that will confirm and visualize the bboxes for you, or use the DeepDetect platform UI. Also, life is too short to wait for a CPU, you can rent a GPU online for $1/hour :)
your bboxes are in the wrong format, follow the tutorial again, I think you have a mistake in the layout of your text files
tasibalint
@tasibalint

My image_32.txt files are built like:

<label> <xmin> <ymin> <xmax> <ymax>
601 22 32 673 756

601 because detection_600 had 600 classes in it and I wanted a new class.
train.txt lines are:
[docker volume folder]/train/image_3.jpg- [docker volume folder]/train/image_3.txt
What do you mean by wrong format?
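A small stdlib-only checker can catch layout mistakes in both files. It assumes the format described in this thread: train.txt lines pairing an image path with a bbox .txt path, separated by whitespace only, and bbox files with five integer fields per line:

```python
# Sketch under assumptions from this thread: each train.txt line is
# "<image path> <bbox file path>" and each bbox line is
# "<label> <xmin> <ymin> <xmax> <ymax>" with integer fields.
def check_train_line(line: str):
    """Return None if the train.txt line looks OK, else a problem description."""
    parts = line.split()
    if len(parts) != 2:
        return f"expected 2 whitespace-separated paths, got {len(parts)}"
    img, txt = parts
    if not img.lower().endswith((".jpg", ".jpeg", ".png")):
        return f"first field does not look like an image path: {img!r}"
    if not txt.endswith(".txt"):
        return f"second field does not look like a bbox .txt path: {txt!r}"
    return None

def check_bbox_line(line: str):
    """Return None if the bbox line looks OK, else a problem description."""
    fields = line.split()
    if len(fields) != 5:
        return f"expected 5 fields, got {len(fields)}"
    try:
        _label, xmin, ymin, xmax, ymax = (int(f) for f in fields)
    except ValueError:
        return f"all 5 fields must be integers: {line!r}"
    if xmin >= xmax or ymin >= ymax:
        return f"degenerate box: {line!r}"
    return None
```

Running check_train_line over every line of train.txt would flag stray characters glued to a path, such as a '-' appended to the image name.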