Emmanuel Benazera
@beniz
Same for the corresp.txt: you can write it down and put it into the model directory. It's only useful at inference though.
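For reference, a corresp.txt is a plain text file mapping class indices to human-readable labels, one per line; the labels below are illustrative for a hypothetical two-class detection setup, not from this conversation:

```
0 background
1 package
```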
dgtlmoon
@dgtlmoon
@tasibalint it should be in the tutorial there, I had no problems following it recently
but I'm just using an integer and keeping my own map - that way it's worked for me
@beniz ever done some t-SNE map visualisations of the FAISS index or similar?
Emmanuel Benazera
@beniz
Hi, no.
tasibalint
@tasibalint

I still don't know where to define classes. I made a training run where the image_index.txt files include 1 as the label; I want 1 to mean packages.
The training went through, although the prediction response transmits "cat": 1 back as the class all the time, and the bboxes are way off, always starting at ymin: 0.0 and with a box width of 30 pixels, while my whole image is 2560 pixels wide. So one package is never 30 px wide.

Any idea where these problems might arise?
I certainly used too few pictures, with 125 training images and 15 test images.
I checked the image_index.txt file and the bboxes are correct though.
I am using the reduced SSD 300: VGG_ILSVRC_16_layers_fc_reduced.caffemodel
Are there empty models which I can train from scratch? Does that bring better results?

Emmanuel Benazera
@beniz
Hi, if you post your API calls, it should be easy to fix them here
tasibalint
@tasibalint

Service Create:

        {
            "description": "generic image detection service",
            "model": {
                "repository": "/images/models/packages_detc",
                "templates": "../templates/caffe/",
                "weight": "VGG_ILSVRC_16_layers_fc_reduced.caffemodel"
            },
            "mllib": "caffe",
            "type": "supervised",
            "parameters": {
                "input": {
                    "connector": "image",
                    "width": 300,
                    "height": 300,
                    "db": true,
                    "bbox": true
                },
                "mllib": {
                    "template": "ssd_300",
                    "nclasses": 2,
                    "finetuning": true
                }
            }
        }

Training:

        {
            "service": "packages_detc",
            "async": true,
            "parameters": {
                "input": {
                    "connector": "image",
                    "test_split": 0.1,
                    "shuffle": true,
                    "height": 300,
                    "width": 300,
                    "db": true,
                    "rgb": true
                },
                "mllib": {
                    "gpu": false,
                    "mirror": true,
                    "net": {
                        "batch_size": 5,
                        "test_batch_size": 5
                    },
                    "solver": {
                        "test_interval": 250,
                        "iterations": 500,
                        "base_lr": 0.01,
                        "solver_type": "RMSPROP"
                    },
                    "noise": {"all_effects": true, "prob": 0.001},
                    "distort": {"all_effects": true, "prob": 0.01},
                    "bbox": true
                },
                "output": {
                    "measure": ["map"]
                }
            },
            "data": ["/images/train/bboxes/train.txt", "/images/test/bboxes/test.txt"]
        }
Emmanuel Benazera
@beniz
Thanks, hard to tell: are your bbox coordinates correct in the first place? (I recommend visualising them.) We usually do this via the platform, and it's automated
Also, how many classes do you have? 2 means background + a single class
Another thing: you don't need test_split since you are passing train.txt and test.txt already
tasibalint
@tasibalint
image.png
thanks for the information, the coordinates should be fine, I tested some of them. But how do you test them on the platform? I couldn't find this UI from the guide
Emmanuel Benazera
@beniz
Hello @tasibalint, this requires installing the platform. If your bboxes are fine, stay with your scripts, and try using the weights from this model instead of your current .caffemodel: https://www.deepdetect.com/models/detection_600/
also, put the prob of the distort object to 0.5
and put rgb to false
tasibalint
@tasibalint

I can train now with detection_600; I'll post my calls so people might find them in the future:
Create Service JSON:

        {
            "description": "generic image detection service",
            "model": {
                "repository": "/images/models/packages_detc",
                "create_repository": true,
                "templates": "../templates/caffe/",
                "weight": "VGG_openimage_pretrain_ilsvrc_res_pred_openimage_detect_v2_SSD_openimage_pretrain_ilsvrc_res_pred_300x300_iter_1200000.caffemodel"
            },
            "mllib": "caffe",
            "type": "supervised",
            "parameters": {
                "input": {
                    "connector": "image",
                    "width": 300,
                    "height": 300,
                    "db": true,
                    "bbox": true
                },
                "mllib": {
                    "template": "vgg_16",
                    "nclasses": 602,
                    "finetuning": true
                }
            }
        }

Start training:

        {
            "service": "packages_detc",
            "async": true,
            "parameters": {
                "input": {
                    "connector": "image",
                    "shuffle": true,
                    "height": 300,
                    "width": 300,
                    "db": true,
                    "rgb": false
                },
                "mllib": {
                    "gpu": false,
                    "mirror": true,
                    "net": {
                        "batch_size": 5,
                        "test_batch_size": 5
                    },
                    "solver": {
                        "test_interval": 250,
                        "iterations": 1000,
                        "solver_type": "ADAM"
                    },
                    "noise": {"all_effects": true, "prob": 0.001},
                    "distort": {"all_effects": true, "prob": 0.5},
                    "bbox": true
                },
                "output": {
                    "measure": ["map"]
                }
            },
            "data": ["/images/train/train.txt", "/images/train/train.txt"]
        }
image.png
Ouch, something is still not working :/ training started running but failed here
tasibalint
@tasibalint
The problem is that I used the vgg_16 template because the weight file starts with vgg, but vgg_16 is for classification and I want object detection. There are no VGG object detection templates for DeepDetect, right?
tasibalint
@tasibalint
I'll use this for now, let's see: 'refinedet_512' "Images Convolutional network for object detection with VGG-16 base"
image.png
I don't know anymore; I am going to wait until I get a response from you guys on what template I could use in order to train the detection_600 weights
Emmanuel Benazera
@beniz
@tasibalint yes this works too, use refinedet_512 and download https://www.deepdetect.com/models/pretrained/refinedet_512/VOC0712_refinedet_vgg16_512x512_iter_120000.caffemodel to use them as weights
you may need to lower batch_size and iter_size depending on your GPU. You can set iter_size to 4 to compensate. You'll need many more than 1000 iterations also, not sure what your data are, but 25000 is reasonable for a first run/try.
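The batch_size / iter_size trade-off mentioned above can be sketched in a couple of lines: Caffe accumulates gradients over iter_size forward/backward passes before updating, so the gradient step behaves like one pass over batch_size × iter_size samples (the numbers below are illustrative):

```python
def effective_batch(batch_size: int, iter_size: int) -> int:
    # Gradients are accumulated over iter_size passes before the solver
    # updates, so the step is equivalent to one larger batch.
    return batch_size * iter_size

# Dropping batch_size from 20 to 5 while setting iter_size to 4
# keeps the effective batch at 20.
print(effective_batch(5, 4))  # 20
```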
tasibalint
@tasibalint
I am using cpu only currently
Emmanuel Benazera
@beniz
try it... it will be up to 50x slower
dgtlmoon
@dgtlmoon
@tasibalint you should write some app that will confirm and visualize the bboxes for you, or use the DeepDetect platform UI; also, life is too short for waiting on CPU, you can rent a GPU online for $1/hour :)
your bboxes are in the wrong format, follow the tutorial again; I think you have a mistake in the layout of your text files
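One way to do the sanity check suggested above, without the platform UI, is a small stdlib-only script that parses each bbox line and flags impossible boxes; the function names and the image size used in the example are assumptions, not from the tutorial:

```python
def parse_bbox_line(line):
    """Parse one '<label> <xmin> <ymin> <xmax> <ymax>' annotation line."""
    label, xmin, ymin, xmax, ymax = (int(v) for v in line.split())
    return label, xmin, ymin, xmax, ymax

def check_bbox(box, img_w, img_h):
    """Return a list of problems with one parsed bbox (empty list = OK)."""
    label, xmin, ymin, xmax, ymax = box
    problems = []
    if xmin >= xmax or ymin >= ymax:
        problems.append("min >= max")
    if xmin < 0 or ymin < 0 or xmax > img_w or ymax > img_h:
        problems.append("outside image bounds")
    return problems

# Example: a single box checked against an assumed 2560x1440 image.
box = parse_bbox_line("1 22 32 673 756")
print(check_bbox(box, 2560, 1440))  # [] means the box is plausible
```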
tasibalint
@tasibalint

My image_32.txt files are built like:

<label> <xmin> <ymin> <xmax> <ymax>
601 22 32 673 756
601 because detection_600 had 600 classes in it and I wanted a new class
train.txt lines are:
[docker volume folder]/train/image_3.jpg- [docker volume folder]/train/image_3.txt
What do you mean by wrong format?

image.png
@beniz I'm getting this error since I used the last model you posted
Emmanuel Benazera
@beniz
@tasibalint you need to set height and width to 512
tasibalint
@tasibalint
done that
Emmanuel Benazera
@beniz
no, you can't add a class to detection_600; you need to specify the exact number of classes in your dataset. detection_600 only serves as a starting point, its 600 classes are then lost to yours.
tasibalint
@tasibalint
So I will use nclasses 2
txt format: 1 xmin ymin xmax ymax?
Emmanuel Benazera
@beniz
correct!
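Following that confirmation, the existing annotation files would need their labels rewritten from 601 to 1. A throwaway remap along these lines would do it; the function name and the old/new label values are assumptions for illustration:

```python
def remap_labels(lines, old_label, new_label):
    """Rewrite the leading class label on each bbox line, e.g. 601 -> 1."""
    out = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == str(old_label):
            parts[0] = str(new_label)
        out.append(" ".join(parts))
    return out

print(remap_labels(["601 22 32 673 756"], 601, 1))  # ['1 22 32 673 756']
```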
tasibalint
@tasibalint
great, I'll do that
tasibalint
@tasibalint

image.png

same error as in here

tasibalint
@tasibalint
I forgot to change 300 to 512 in the training call, and I had two train.txt entries in the train data; I changed the second to test.txt. Now I don't get the error at least :D
Emmanuel Benazera
@beniz
good :)
tasibalint
@tasibalint
how do you guys use a graphics card with Docker?
Emmanuel Benazera
@beniz
Hi, we have special Docker builds and they need to be used with the NVIDIA Docker runtime. They are public.
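As a rough sketch of what running such a build looks like (the image tag, port, and volume path here are assumptions; check the DeepDetect docs for the current image names):

```shell
# Requires the NVIDIA container toolkit installed on the host.
docker run -d --gpus all -p 8080:8080 \
  -v /path/to/models:/opt/models \
  jolibrain/deepdetect_gpu
```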
dgtlmoon
@dgtlmoon
@tasibalint I'm using DD with a very high-end graphics card over at paperspace.com, works great
Danai Brilli
@danaibrilli
@dgtlmoon happy new year! Could you provide info on how to do it? I'm new to these tools
Danai Brilli
@danaibrilli
Hello guys! I'm new to DeepDetect and I'm trying to use the platform and the pre-trained models to predict from a local file, but when I give the path as a data URL I get an error: "Service Input Error: vector::_M_range_check: __n (which is 0) >= this->size() (which is 0)"
Any idea as to what I should do?
On the other hand, when I use an online image (its URL), the API returns the predictions I wanted
Emmanuel Benazera
@beniz
Hi, the path to a file is relative to the filesystem of the docker platform, i.e. /opt/platform/data for instance
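Concretely, if the image sits under the platform's mounted data directory, the predict call would reference the in-container path; the service name and file name below are illustrative:

```json
{
  "service": "my_service",
  "parameters": {
    "output": { "best": 1 }
  },
  "data": ["/opt/platform/data/my_image.jpg"]
}
```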
cchadowitz-pf
@cchadowitz-pf

hey @beniz - happy new year! Long time since I've been looking through this stuff - tons of updates!! I was just wondering if you're still building Docker images with TF included. I'm trying to update our build process and I'm running into quite a number of issues with changes in DD affecting the build, as well as changes in floopz/tensorflow_cc (and TensorFlow itself).

Anyway, I was just wondering if an automated build (even CPU-only, for now) is still happening with TensorFlow that I can compare my build process to. Thanks in advance!
Anyways, was just wondering if a (even CPU-only, for now) automated build is still happening with tensorflow that I can compare my build process too. Thanks in advance!