dgtlmoon
@dgtlmoon
yeah maybe try different version of faiss, hmm
dgtlmoon
@dgtlmoon
oh man simsearch GPU training is fast x)
question: should "train_samples": 10000 be the TOTAL size of all the images you expect to train on in the set? or just a nice localised number it will compare against?
say i have 150k images, maybe 20,000 might be a good choice?
dgtlmoon
@dgtlmoon
i guess depends on how much time VS accuracy you want
dgtlmoon
@dgtlmoon
ahhh yeahhhhhhhhh 0.070s query time for simsearch x) yesss
dgtlmoon
@dgtlmoon
I would <3 if https://www.deepdetect.com/server/docs/api/ was on github so I can add some improvements
dgtlmoon
@dgtlmoon
max(ninvertedlist/50,2) what does invertedlist mean in this case?
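For context on both questions above: DeepDetect's simsearch is backed by FAISS here, where an IVF index partitions the vectors into nlist inverted lists (cells), and "train_samples" is how many vectors are used to learn those cells via k-means. A sketch of a sizing heuristic under those assumptions — the function name and the sqrt(N) rule are my own illustration, not DeepDetect API; the 39-points-per-cell floor is the threshold below which FAISS emits a training warning:

```python
import math

def recommended_train_samples(n_images, points_per_list=39):
    """Heuristic sketch: use ~sqrt(N) inverted lists, and give k-means
    at least ~39 training vectors per list (FAISS warns below that)."""
    nlist = max(int(math.sqrt(n_images)), 1)
    return nlist * points_per_list

print(recommended_train_samples(150_000))
```

For a 150k-image collection this lands around 15k, so the 10,000-20,000 range discussed above is plausible; more training samples buys cluster quality at the cost of training time.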
tasibalint
@tasibalint
(screenshot attached)
Anyone an idea what this could mean?
sry, it's possible that class 1 has 36 train images and the other has 44, and i use a batch size of 5; the test images are 6 for each class. i am gonna fix that first
Emmanuel Benazera
@beniz
@tasibalint this message means that the mean_valuefile is wrong somehow, not sure what you did exactly, mind sharing the API calls / steps you are using ?
Emmanuel Benazera
@beniz
or are your images b&w?
tasibalint
@tasibalint
I have done the cats_dogs tutorial, and now out of desperation i started the cats_dogs training with my images and the training is running, so apparently the .caffemodel i was using wasn't compatible or something.
I was not using the model from:
"init": "https://deepdetect.com/models/init/desktop/images/classification/ilsvrc_googlenet.tar.gz",
But from :
https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
same name same size but different D: anyways i am training now :D
Emmanuel Benazera
@beniz
yeah, you need to use ours :)
tasibalint
@tasibalint
Hey people, how do u train a bbox detection model and where do u get the model from?
Emmanuel Benazera
@beniz
@tasibalint maybe too broad of a question... you can train via API or platform. What are you trying to achieve ?
tasibalint
@tasibalint
I want to train via the API. I have created XML files with bounding boxes for each picture using labelImg. But i don't understand how I tell the API to use those XML files

Service create:

    {
        "description": "generic image detection service",
        "model": {
            "repository": "/images/models/packages_detc",
            "templates": "../templates/caffe/",
            "weight": "SE-ResNet-50.caffemodel"
        },
        "mllib": "caffe",
        "type": "supervised",
        "parameters": {
            "input": {
                "connector": "image",
                "width": 224,
                "height": 224,
                "db": true,
                "bbox": true
            },
            "mllib": {
                "template": "resnet_50",
                "nclasses": 3,
                "finetuning": true
            }
        }
    }

Training:

    {
        "service": "packages_detc",
        "async": true,
        "parameters": {
            "input": {
                "connector": "image",
                "test_split": 0.1,
                "shuffle": true,
                "height": 224,
                "width": 224,
                "db": true,
                "rgb": true
            },
            "mllib": {
                "gpu": false,
                "mirror": true,
                "net": {
                    "batch_size": 3,
                    "test_batch_size": 3
                },
                "solver": {
                    "test_interval": 500,
                    "iterations": 1000,
                    "base_lr": 0.001
                },
                "noise": {"all_effects": true, "prob": 0.001},
                "distort": {"all_effects": true, "prob": 0.01},
                "bbox": true
            },
            "output": {
                "measure": ["acc", "mcll", "f1"]
            }
        },
        "data": ["/images/train/"]
    }
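For reference, JSON bodies like the two above are sent to a running DeepDetect server over plain HTTP: PUT /services/<name> to create the service, POST /train to launch training. A minimal sketch, assuming a server on the default localhost:8080 and an abbreviated payload:

```python
import json
import urllib.request

DD_URL = "http://localhost:8080"  # assumed default DeepDetect server address

def dd_call(method, path, payload):
    """Send one JSON API call to the DeepDetect server and return its reply."""
    req = urllib.request.Request(
        DD_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

service = {
    "description": "generic image detection service",
    "model": {"repository": "/images/models/packages_detc"},
    "mllib": "caffe",
    "type": "supervised",
}

# dd_call("PUT", "/services/packages_detc", service)
# dd_call("POST", "/train", {"service": "packages_detc", "async": True})
```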

Getting the error "auto batch size set to zero" but i don't see where it is set to zero

(screenshot attached)
On the platform object detection guide i see this
is there a tool to convert the xml files into whatever format this txt is, without doing it manually?
Emmanuel Benazera
@beniz
this is not a template for object detection, you'd need to use one of the ssd templates and get a dataset in proper format, see https://www.deepdetect.com/platform/docs/object-detection/ it has the format description
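As a sketch of the conversion asked about above: labelImg writes Pascal VOC XML, while the format mentioned later in this thread expects, per image, a bbox text file with one `<label> <xmin> <ymin> <xmax> <ymax>` line per object, with labels as integers. A minimal converter under those assumptions — the name-to-integer map is hypothetical and must match your own classes:

```python
import xml.etree.ElementTree as ET

LABEL_IDS = {"package": 1}  # hypothetical map: class name -> integer label

def voc_to_bbox_lines(xml_text, label_ids=LABEL_IDS):
    """Turn one labelImg (Pascal VOC) XML document into
    '<label> <xmin> <ymin> <xmax> <ymax>' lines."""
    root = ET.fromstring(xml_text)
    lines = []
    for obj in root.iter("object"):
        label = label_ids[obj.findtext("name")]
        box = obj.find("bndbox")
        coords = [box.findtext(k) for k in ("xmin", "ymin", "xmax", "ymax")]
        lines.append(" ".join([str(label)] + coords))
    return lines
```

Each image then gets its own bbox file, and the train.txt passed as data lists image-path / bbox-file pairs, per the format description in the linked guide.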
tasibalint
@tasibalint
thank you, i am still unsure what the difference between mllib, model and templates is
Any ideas how i can see what's in the
"templates":"../templates/caffe/",
folder, or whether there are any other templates than caffe? is this where the docker is installed, and if yes, is there a generic path where the docker is installed at?
tasibalint
@tasibalint
where do i define the classes for the ssd_300 model? i have <label> <xmin> <ymin> <xmax> <ymax> where label is a number. but how do i define that the number is a class, for classification models there is the corresp.txt, but for detection models?
Emmanuel Benazera
@beniz
same corresp.txt format; you can write it down and put it into the model directory. Only useful at inference though.
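For concreteness, a corresp.txt for the single-class case discussed here would look like the following, one `<integer> <name>` pair per line (the class names are just examples):

```text
0 background
1 packages
```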
dgtlmoon
@dgtlmoon
@tasibalint should be in the tutorial there, i had no problems following it recently
but i'm just using an integer and keep my own map - in this way it's worked for me
@beniz ever done some t-sne map visualisations of the FAISS index or similar?
Emmanuel Benazera
@beniz
Hi, no.
tasibalint
@tasibalint

I still don't know where to define classes. I made a training where the image_index.txt files include 1 as the label; I want to call 1 "packages".
The training went through, although the prediction response returns "cat": 1 as the class all the time, and the bboxes are way off, always starting at ymin: 0.0 and with a box width of 30 pixels, while my whole image is 2560 pixels. So one package is never 30px wide.

Any idea where these problems might arise?
I certainly used too few pictures, with 125 training images and 15 test images.
I checked the image_index.txt file and the bboxes are correct though.
I am using the SSD 300 reduced: VGG_ILSVRC_16_layers_fc_reduced.caffemodel
Are there empty models which i can train from scratch, and does that bring better results?

Emmanuel Benazera
@beniz
Hi, if you post your API calls, it should be easy to fix them here
tasibalint
@tasibalint

Service Create:

    {
        "description": "generic image detection service",
        "model": {
            "repository": "/images/models/packages_detc",
            "templates": "../templates/caffe/",
            "weight": "VGG_ILSVRC_16_layers_fc_reduced.caffemodel"
        },
        "mllib": "caffe",
        "type": "supervised",
        "parameters": {
            "input": {
                "connector": "image",
                "width": 300,
                "height": 300,
                "db": true,
                "bbox": true
            },
            "mllib": {
                "template": "ssd_300",
                "nclasses": 2,
                "finetuning": true
            }
        }
    }

Training:

        "service":"packages_detc",
        "async":true,
        "parameters":{
            "input":{
                "connector":"image",
                "test_split":0.1,
                "shuffle":true,
                "height":300,
                "width":300,
                "db":true,
                "rgb":true,
            },
            "mllib":{
                "gpu":false,
                "mirror":true,
                "net":{
                    "batch_size":5,
                    "test_batch_size":5
                },
                "solver":{
                    "test_interval":250,
                    "iterations":500,
                    "base_lr":0.01,
                    "solver_type": "RMSPROP"
                },
                "noise":{"all_effects":true, "prob":0.001},
                "distort":{"all_effects":true, "prob":0.01},
                "bbox": true
            },
            "output":{
                 "measure":["map"],
            }
        },
        "data":["/images/train/bboxes/train.txt", "/images/test/bboxes/test.txt"]
    }
Emmanuel Benazera
@beniz
Thanks, hard to tell: are your bbox coordinates correct in the first place? (I recommend visualising them). We do this via the platform usually, and it's automated
Also, how many classes do you have? 2 means background + a single class
Another thing: you don't need test_split since you are passing train.txt and test.txt already
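Short of visualising as suggested above, a programmatic sanity check catches the grossest coordinate errors. A sketch, assuming absolute pixel coordinates as in the bbox files discussed earlier:

```python
def bbox_ok(xmin, ymin, xmax, ymax, img_w, img_h):
    """True when the box is non-degenerate and lies inside the image."""
    return 0 <= xmin < xmax <= img_w and 0 <= ymin < ymax <= img_h

# A narrow box (e.g. 30px wide on a 2560px image) is still geometrically
# valid, so bad coordinates are not the only possible failure mode.
print(bbox_ok(0, 0, 30, 100, 2560, 1440))
```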
tasibalint
@tasibalint
(screenshot attached)
thanks for the information, the coordinates should be fine, i tested some of them. But how do u test them on the platform? I couldn't find this UI in the guide
Emmanuel Benazera
@beniz
Hello @tasibalint , this requires installing the platform. If your bbox are fine, stay with your scripts, and try using the weights from this model instead of your current .caffemodel : https://www.deepdetect.com/models/detection_600/
also, put the prob of the distort object to 0.5
and put rgb to false
tasibalint
@tasibalint

I can train now with detection_600. Amma post my calls so people might find them in the future:
Create Service JSON:

{ 
             "description":"generic image detection  service",
             "model":{
                 "repository":"/images/models/packages_detc",    
                 "create_repository": true,
                 "templates":"../templates/caffe/",
                 "weight": "VGG_openimage_pretrain_ilsvrc_res_pred_openimage_detect_v2_SSD_openimage_pretrain_ilsvrc_res_pred_300x300_iter_1200000.caffemodel"

            },
            "mllib":"caffe",
            "type":"supervised",
            "parameters":{
                "input":{
                    "connector":"image",
                    "width":300,
                    "height":300,
                    "db": true,
                    "bbox": true
                },
                "mllib":{
                    "template":"vgg_16",
                    "nclasses":602,
                    "finetuning":true
                }
            }
        }

Start training:

{ 
        "service":"packages_detc",
        "async":true,
        "parameters":{
            "input":{
                "connector":"image",
                "shuffle":true,
                "height":300,
                "width":300,
                "db":true,
                "rgb":false,
            },
            "mllib":{
                "gpu":false,
                "mirror":true,
                "net":{
                    "batch_size":5,
                    "test_batch_size":5

                },
                "solver":{
                    "test_interval":250,
                    "iterations":1000,
                    "solver_type": "ADAM"
                },
                "noise":{"all_effects":true, "prob":0.001},
                "distort":{"all_effects":true, "prob":0.5},
                "bbox": true
            },
            "output":{
                 "measure":["map"],
            }
        },
        "data":["/images/train/train.txt", "/images/train/train.txt"]
    }
(screenshot attached)
Ouch something is still not working :/ training started running but failed here
tasibalint
@tasibalint
The problem is that i used the vgg_16 template because the weight file starts with vgg, but vgg_16 is for classification and I want object detection. There are no vgg object detection templates for deepdetect, right?
tasibalint
@tasibalint
Amma use this for now lets see. 'refinedet_512' "Images Convolutional network for object detection with VGG-16 base"
(screenshot attached)