dgtlmoon
@dgtlmoon
maybe it's not taking into consideration whether the object is zoomed in/out, small/big relative to the image... I saw something in the deepdetect API docs for an SSD scale option, hmmm?
Ahh huh! From the docs: ssd_neg_overlap (float, optional, default N/A): max overlap of negative samples with positive samples (bbox), between 0 and 1, e.g. 0.5
right !
Emmanuel Benazera
@beniz
don't touch that
what is wrong, the predictions ?
dgtlmoon
@dgtlmoon
yes
Emmanuel Benazera
@beniz
do you have examples ?
dgtlmoon
@dgtlmoon
yes, see above
Emmanuel Benazera
@beniz
screenshot is too small, I don't see anything
dgtlmoon
@dgtlmoon
ahh ok, sorry, two sec
image.png
dark area = original, colour/light = predicted
Emmanuel Benazera
@beniz
I'm not sure I understand exactly; I think you need to give more thorough detail, i.e. is that t-shirt not detected, and how many of these as a % of your test set, plus your API prediction call and info on your model.
dgtlmoon
@dgtlmoon
image.png
Emmanuel Benazera
@beniz
At a minimum, if mAP is 1.0, this means there's a predicted box that intersects the ground truth with more than 50% overlap, so if the images above are from your test set, you are doing something wrong. You can start by playing with the confidence_threshold value at predict time.
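To make the 50%-overlap criterion concrete, here is a minimal IoU sketch in Python; the boxes use the same xmin/ymin/xmax/ymax fields as the DeepDetect bbox output, and the ground-truth values are hypothetical (the predicted box echoes the too-large one reported further down):

def iou(a, b):
    # Intersection-over-union of two boxes given as dicts with
    # xmin/ymin/xmax/ymax keys (DeepDetect bbox format).
    ix = max(0.0, min(a["xmax"], b["xmax"]) - max(a["xmin"], b["xmin"]))
    iy = max(0.0, min(a["ymax"], b["ymax"]) - max(a["ymin"], b["ymin"]))
    inter = ix * iy
    union = ((a["xmax"] - a["xmin"]) * (a["ymax"] - a["ymin"])
             + (b["xmax"] - b["xmin"]) * (b["ymax"] - b["ymin"]) - inter)
    return inter / union if union > 0 else 0.0

# Hypothetical ground truth vs. an oversized predicted box; detection
# metrics like mAP@0.5 count a hit when IoU exceeds 0.5, so a box
# that looks far too big can still score as a correct detection.
gt = {"xmin": 214, "ymin": 0, "xmax": 600, "ymax": 460}
pred = {"xmin": 111, "ymin": 0, "xmax": 786, "ymax": 461}
print(iou(gt, pred))  # ~0.57, above the 0.5 threshold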
dgtlmoon
@dgtlmoon
confidence_threshold is already 0.95
that image is not in the test set
Emmanuel Benazera
@beniz
lower it to 0.3
dgtlmoon
@dgtlmoon
yeah, maybe the highest 'confidence' result isn't necessarily the right one? checking now
Prediction service request:

{
  "service": "tshirt_info_detection",
  "parameters": {
    "input": {"width": 300, "height": 300},
    "output": {"confidence_threshold": 0.3, "bbox": true},
    "mllib": {"gpu": false}
  },
  "data": "base64...."
}

Response:

{
  "status": {"code": 200, "msg": "OK"},
  "head": {"method": "/predict", "service": "tshirt_info_detection", "time": 1666.0},
  "body": {
    "predictions": [{
      "uri": "0",
      "classes": [
        {"cat": "1", "prob": 0.9653725028038025,
         "bbox": {"xmin": 111.97721099853516, "ymin": 0.0, "xmax": 786.7083129882813, "ymax": 461.73895263671877}},
        {"cat": "1", "prob": 0.3033854067325592, "last": true,
         "bbox": {"xmin": 229.73147583007813, "ymin": 9.2813138961792, "xmax": 585.5281372070313, "ymax": 303.8266296386719}}
      ]
    }]
  }
}
Emmanuel Benazera
@beniz
then you can pick the first one
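A minimal Python sketch of picking the top-probability box from a /predict response like the one above; it assumes the response JSON has already been parsed into a dict named response:

# The classes appear to come back sorted by prob (hence "pick the
# first one"), but taking the max makes the choice explicit.
classes = response["body"]["predictions"][0]["classes"]
best = max(classes, key=lambda c: c["prob"])
print(best["cat"], best["prob"], best["bbox"])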
dgtlmoon
@dgtlmoon
Yes, the first one is what I'm already using
that's the BBOX that's far too large
Emmanuel Benazera
@beniz
ah ok
try the second one out of curiosity
and what model is this
dgtlmoon
@dgtlmoon
"xmax":786,"xmin":111 butI would expect like xmin=214, xmax=600
ssd_300 is the model i'm using
Emmanuel Benazera
@beniz
mmm, try refinedet; maybe one day, if your dataset is public, just put it somewhere.
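For reference, a sketch of recreating the detection service with the refinedet_512 template tried below instead of ssd_300; the server URL, service name, input size, and model repository path are assumptions, while template/nclasses/finetuning come from this conversation:

import requests

# Hypothetical service creation via the DeepDetect REST API
# (PUT /services/<name>).
service = {
    "description": "t-shirt detector, refinedet",
    "mllib": "caffe",
    "type": "supervised",
    "parameters": {
        "input": {"connector": "image", "width": 512, "height": 512},
        "mllib": {"template": "refinedet_512", "nclasses": 2, "finetuning": True},
    },
    "model": {"repository": "/models/tshirt_refinedet"},
}
r = requests.put("http://localhost:8080/services/tshirt_info_detection", json=service)
print(r.json())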
dgtlmoon
@dgtlmoon
second one = the xmax/xmin is perfect, but the ymax is totally off (ymin is OK)
Yeah, it's sort-of public data from the website anyway
Check out this debug mode, you can see the prob bboxes
image.png
so... unsure...
yeah... ok, trying refinedet_512, will paste in the predictions; perhaps the images have too much geography going on for SSD
Emmanuel Benazera
@beniz
or you are overfitted to the bone
dgtlmoon
@dgtlmoon
I think that's more likely... not sure how to solve that though
I checked all train+test images with MD5 to verify I'm not accidentally reusing a train image in test - all fine. I tried lowering the learning rate... hmm, added a LOT more test data too
I think the test set is too similar to the train set, even though they are not exactly the same == overfitting
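A minimal sketch of the MD5 duplicate check described above, assuming the train and test images sit in two local directories (the paths are hypothetical):

import hashlib
from pathlib import Path

def md5_set(folder):
    # Collect the MD5 digest of every file under `folder`.
    return {hashlib.md5(p.read_bytes()).hexdigest()
            for p in Path(folder).rglob("*") if p.is_file()}

# Any digest present in both sets means a training image leaked
# into the test set.
dupes = md5_set("data/train") & md5_set("data/test")
print(f"{len(dupes)} duplicate images across train/test")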
         "mllib":{
           "template":"ssd_300",
           "nclasses": 2,
           "finetuning":true,
          "rotate": true,
          "mirror": true,
          "noise": {
            "all_effects": true,
            "prob": 0.01
          },
          "distort": {
            "all_effects": true,
            "prob": 0.01
          },
          "gpu": true
        }
rotate, mirror, some noise and distort are all enabled...
not sure what else to try
I'll try prob at 0.35 on the training effects :) :) :)
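A sketch of what raising the augmentation probability to 0.35 could look like in a /train call; everything outside the mllib block (service name, data path, async flag) is an assumption, while the mllib values mirror the config pasted above:

import requests

# Hypothetical /train request; only the noise/distort prob is changed.
train = {
    "service": "tshirt_info_detection",
    "async": True,
    "parameters": {
        "mllib": {
            "template": "ssd_300",
            "nclasses": 2,
            "finetuning": True,
            "rotate": True,
            "mirror": True,
            "noise": {"all_effects": True, "prob": 0.35},
            "distort": {"all_effects": True, "prob": 0.35},
            "gpu": True,
        }
        # input/output parameters omitted; they depend on the dataset setup
    },
    "data": ["/data/tshirts"],
}
r = requests.post("http://localhost:8080/train", json=train)
print(r.json())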
dgtlmoon
@dgtlmoon
no difference... "map": 1.000000011175871 at iteration 500
dgtlmoon
@dgtlmoon
....time to verify my setup, using the same training calls, but with a different dataset.....
@beniz what about using dropout to remove some features in the network?
i'll try
dgtlmoon
@dgtlmoon
I can put my data on Kaggle if it's useful
I think my data's bbox usually covers about 80% of the actual image... maybe that's the problem