Emmanuel Benazera
@beniz
and put rgb to false
tasibalint
@tasibalint

Can train now with detection_600, amma post my calls so people might find them in the future:
Create Service JSON:

{
    "description": "generic image detection service",
    "model": {
        "repository": "/images/models/packages_detc",
        "create_repository": true,
        "templates": "../templates/caffe/",
        "weight": "VGG_openimage_pretrain_ilsvrc_res_pred_openimage_detect_v2_SSD_openimage_pretrain_ilsvrc_res_pred_300x300_iter_1200000.caffemodel"
    },
    "mllib": "caffe",
    "type": "supervised",
    "parameters": {
        "input": {
            "connector": "image",
            "width": 300,
            "height": 300,
            "db": true,
            "bbox": true
        },
        "mllib": {
            "template": "vgg_16",
            "nclasses": 602,
            "finetuning": true
        }
    }
}
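For reference, the DeepDetect API creates a service with a PUT to `/services/<name>`. A minimal stdlib sketch of sending the payload above (host, port, and the commented-out call are assumptions; the long `weight` entry is omitted for brevity):

```python
import json
from urllib import request

# Service-creation payload mirroring the call above (the "weight" entry
# from the original call is omitted here for brevity).
service = {
    "description": "generic image detection service",
    "model": {
        "repository": "/images/models/packages_detc",
        "create_repository": True,
        "templates": "../templates/caffe/",
    },
    "mllib": "caffe",
    "type": "supervised",
    "parameters": {
        "input": {"connector": "image", "width": 300, "height": 300,
                  "db": True, "bbox": True},
        "mllib": {"template": "vgg_16", "nclasses": 602, "finetuning": True},
    },
}

def create_service(name, payload, host="http://localhost:8080"):
    """PUT the service definition to a running DeepDetect server."""
    req = request.Request(f"{host}/services/{name}",
                          data=json.dumps(payload).encode(),
                          method="PUT")
    return request.urlopen(req)  # raises URLError if no server is listening

# create_service("packages_detc", service)  # uncomment with a live server
```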

Start training:

{
    "service": "packages_detc",
    "async": true,
    "parameters": {
        "input": {
            "connector": "image",
            "shuffle": true,
            "height": 300,
            "width": 300,
            "db": true,
            "rgb": false
        },
        "mllib": {
            "gpu": false,
            "mirror": true,
            "net": {
                "batch_size": 5,
                "test_batch_size": 5
            },
            "solver": {
                "test_interval": 250,
                "iterations": 1000,
                "solver_type": "ADAM"
            },
            "noise": {"all_effects": true, "prob": 0.001},
            "distort": {"all_effects": true, "prob": 0.5},
            "bbox": true
        },
        "output": {
            "measure": ["map"]
        }
    },
    "data": ["/images/train/train.txt", "/images/train/train.txt"]
}
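Since the call sets "async": true, progress is read back by polling GET /train. A minimal stdlib sketch (host and job id are assumptions):

```python
import json
from urllib import request

def training_status(service, job=1, host="http://localhost:8080"):
    """Poll an async DeepDetect training job via GET /train."""
    url = f"{host}/train?service={service}&job={job}"
    with request.urlopen(url) as resp:  # raises URLError without a server
        return json.loads(resp.read())

# Example (only meaningful against a live server); the response's
# head.status field reports "running" until the job finishes:
# body = training_status("packages_detc")
# print(body["head"]["status"])
```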
image.png
Ouch something is still not working :/ training started running but failed here
tasibalint
@tasibalint
The problem is that I use the vgg_16 template because the weight file starts with vgg, but vgg_16 is for classification and I want object detection. There are no VGG object detection templates for DeepDetect, right?
tasibalint
@tasibalint
Amma use this for now, let's see: 'refinedet_512' ("Images Convolutional network for object detection with VGG-16 base")
image.png
I dunno anymore, I am going to wait until I get a response from you guys on what template I could use to train the detection_600 weights
Emmanuel Benazera
@beniz
@tasibalint yes this works too, use refinedet_512 and download https://www.deepdetect.com/models/pretrained/refinedet_512/VOC0712_refinedet_vgg16_512x512_iter_120000.caffemodel to use them as weights
you may need to lower batch_size and iter_size depending on your GPU. You can set iter_size to 4 to compensate. You'll need many more than 1000 iterations also, not sure what your data are, but 25000 is reasonable for a first run/try.
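As a rough illustration of that advice (the numbers are examples only, not a recommendation), the net/solver section of the training call might become:

```json
"mllib": {
    "net": { "batch_size": 2, "test_batch_size": 2 },
    "solver": {
        "iterations": 25000,
        "iter_size": 4,
        "test_interval": 500,
        "solver_type": "ADAM"
    }
}
```

With batch_size 2 and iter_size 4, gradients are accumulated over 4 mini-batches, approximating an effective batch of 8 while fitting in less memory.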
tasibalint
@tasibalint
I am using cpu only currently
Emmanuel Benazera
@beniz
try it... will be up to 50x slower
dgtlmoon
@dgtlmoon
@tasibalint you should write some app that will confirm and visualize the BBOXes for you, or use the DeepDetect platform UI. Also, life is too short to wait for CPU; you can rent a GPU online for $1/hour :)
your bboxes are in the wrong format; follow the tutorial again, I think you have a mistake in the layout of your text files
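A minimal sketch of such a sanity check in Python, assuming the one-box-per-line `<label> <xmin> <ymin> <xmax> <ymax>` annotation format discussed below (function name and thresholds are illustrative):

```python
def check_bbox_file(lines, img_w=None, img_h=None):
    """Validate '<label> <xmin> <ymin> <xmax> <ymax>' annotation lines.

    Returns a list of (line_number, problem) tuples; empty means OK.
    """
    problems = []
    for i, line in enumerate(lines, start=1):
        parts = line.split()
        if len(parts) != 5:
            problems.append((i, "expected 5 fields, got %d" % len(parts)))
            continue
        try:
            label, xmin, ymin, xmax, ymax = (int(p) for p in parts)
        except ValueError:
            problems.append((i, "non-integer field"))
            continue
        if xmin >= xmax or ymin >= ymax:
            problems.append((i, "degenerate box"))
        if img_w is not None and xmax > img_w:
            problems.append((i, "xmax beyond image width"))
        if img_h is not None and ymax > img_h:
            problems.append((i, "ymax beyond image height"))
    return problems

# Example: a well-formed line and a truncated one.
ok = check_bbox_file(["601 22 32 673 756"])
bad = check_bbox_file(["601 22 32"])
```

Running every annotation file through a check like this before training catches layout mistakes much faster than a failed training job does.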
tasibalint
@tasibalint

My image_32.txt files are built like:

<label> <xmin> <ymin> <xmax> <ymax>
601 22 32 673 756
601 because detection_600 had 600 classes in it and I wanted a new class
train.txt lines are:
[docker volume folder]/train/image_3.jpg- [docker volume folder]/train/image_3.txt
What do you mean wrong format?

image.png
@beniz getting this error since I used the last model you posted
Emmanuel Benazera
@beniz
@tasibalint you need to set height and width to 512
tasibalint
@tasibalint
done that
Emmanuel Benazera
@beniz
no, you can't add a class to detection_600; you need to specify the exact number of classes in your dataset. detection_600 only serves as a starting point, its 600 classes are then lost to yours.
tasibalint
@tasibalint
So I will use nclasses 2
txt format: 1 xmin ymin xmax ymax?
Emmanuel Benazera
@beniz
correct!
tasibalint
@tasibalint
great amma do that
tasibalint
@tasibalint

image.png

same error as in here

tasibalint
@tasibalint
I forgot to change 300 to 512 in the training call, and I had two train.txt entries in the train data; changed the second to test.txt. Now I don't get the error at least :D
Emmanuel Benazera
@beniz
good :)
tasibalint
@tasibalint
how do u guys use a graphics card with docker?
Emmanuel Benazera
@beniz
Hi, we have special docker builds and they need to be used with the nvidia docker runtime. They are public.
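For example, a GPU image is typically started like this (a sketch: image tag, port, and volume path are placeholders, and the host needs the NVIDIA container toolkit installed; check Docker Hub for the current jolibrain GPU tags):

```shell
# Run a GPU-enabled DeepDetect image with the NVIDIA runtime.
docker run -d --gpus all -p 8080:8080 \
  -v /path/to/models:/opt/models \
  jolibrain/deepdetect_gpu
```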
dgtlmoon
@dgtlmoon
@tasibalint i'm using DD with a very highend graphics card over at paperspace.com works great
Danai Brilli
@danaibrilli
@dgtlmoon happy new year! could you provide info on how to do it because im new to these tools?
Danai Brilli
@danaibrilli
Hello guys! I'm new to DeepDetect and I'm trying to use the platform and the pre-trained models to predict from a local file, but when I give the path as a data URL I get an error: "Service Input Error: vector::_M_range_check: __n (which is 0) >= this->size() (which is 0)"
Any idea as to what I should do?
On the other hand, when I use an online image (its url) the api returns the predictions I wanted
Emmanuel Benazera
@beniz
Hi, the path to a file is relative to the filesystem of the docker platform, i.e. /opt/platform/data for instance
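For instance, if the platform mounts your files under /opt/platform/data, the predict call's data field should reference the container-side path (service name and filename below are hypothetical):

```json
{
    "service": "my_service",
    "parameters": { "output": { "best": 3 } },
    "data": ["/opt/platform/data/my_image.jpg"]
}
```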
cchadowitz-pf
@cchadowitz-pf

hey @beniz - happy new year! long time since I've been looking through this stuff - tons of updates!! was just wondering if you're still building docker images with TF included. I'm trying to update our build process and I'm running into quite a number of issues with changes in DD affecting the build, as well as changes in FloopCZ/tensorflow_cc (and tensorflow itself).

Anyways, was just wondering if an (even CPU-only, for now) automated build is still happening with tensorflow that I can compare my build process to. Thanks in advance!

Emmanuel Benazera
@beniz
Hi @cchadowitz-pf thanks, happy solar roundabout to you. TF is completely unused and unmaintained on our side. Sorry about that, but we ended up having no use case for it. Libtorch is the production backend now.
cchadowitz-pf
@cchadowitz-pf
I see, that's kind of what I figured :+1: are you still using caffe for anything? I know it's the default backend for builds, but curious if you're actually using it.
Also, do you have a reliable pipeline for converting models? I'd love to convert some of our models but there seems to be quite a bit of manual effort involved (not to mention validating/testing afterwards). If you have any sort of pipeline I'd be very interested :)
Emmanuel Benazera
@beniz
@cchadowitz-pf we don't train new models with caffe; the 'legacy' models still running with our customers are in practice running the tensorrt backend (which got a nice upgrade recently in DD), thus abstracting away the initial backend. In your case it's a bit different since you are running on CPU, correct?
I believe the best pipeline for CPU would be to convert to ONNX and use onnxruntime. You could use onnxruntime directly, or we could add it to DD. It's actually on our optional todolist, but as we are driven by our customers and they never asked for CPU, we didn't get it into DD. Not difficult for inference though.
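A sketch of that route using the tf2onnx converter CLI (model paths are placeholders, the exact flags depend on the model format, and both tools are pip-installable):

```shell
# Convert a TensorFlow SavedModel to ONNX (pip install tf2onnx onnxruntime).
python -m tf2onnx.convert --saved-model ./tf_saved_model --output model.onnx

# Quick smoke test that onnxruntime can load the converted model:
python -c "import onnxruntime; onnxruntime.InferenceSession('model.onnx')"
```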
Let me know your thoughts.
cchadowitz-pf
@cchadowitz-pf
I see :+1: we definitely still have CPU in some cases unfortunately. Any reason you suggest using onnxruntime instead of libtorch? Is it just more efficient without a GPU?
In practice, it sounds like you're more often training models and then deploying, rather than converting models - is that accurate? It sounds like I'll need to put together a methodology to compare and validate models before/after converting them. I'm hoping to not have to do it manually but it will probably come down to that....
and yes! many new and awesome things lately in DD - that's part of why I'm trying to get our build up to date again hah. Have yet to find a good way to continue to use the TF openimages pretrained model outside of TF, however.
Emmanuel Benazera
@beniz
Hi @cchadowitz-pf, suggesting onnxruntime since you are asking about converting your TF models: converting to ONNX seems more appropriate than TF to PyTorch, but you may want to double-check. For simple models such as openimage this seems a no-brainer, but in reality... who knows?!
onnxruntime reports good performance as well.
You are correct, we are always training custom models; that's actually our business. If the openimage model is your sole TF dependency, you may be able to convert it or find a more recent and better one for torch.
cchadowitz-pf
@cchadowitz-pf
:+1: awesome thanks! I had actually found that same page, but saw this note: These are unpruned models with just the feature extractor weights, and may not be used without retraining to deploy in a classification application.
I'll keep looking around but may pop back in here for more thoughts :)
Emmanuel Benazera
@beniz
Sad indeed... You may have enough data to train your own if the results have been recorded.
cchadowitz-pf
@cchadowitz-pf
:+1:
dgtlmoon
@dgtlmoon

@beniz hey, happy new year 😊 simsearch question: I have the object detector working nicely for the t-shirt artwork, and you said that training the ResNet classifier model with more categories (say 200 or so different bands' t-shirt artwork) can improve the search results. However, I've found that training the classifier on only 10 or so band names seems to give the simsearch better results. What are your thoughts? Note: a single band might have many different designs, so maybe this is the problem.

so trained on 10 categories, the ResNet simsearch model is kinda OK, but not brilliant
on 200 categories, it's sometimes workable but not great

Do you have any other tips for tuning simsearch? Fortunately the 'domain' is all kinda similar, printed artwork on cloth. Maybe some image filter or tuning option?

maybe it's a FAISS index tuning issue too