These are chat archives for beniz/deepdetect

23rd
May 2017
sunatthegilddotcom
@sunatthegilddotcom
May 23 2017 04:17
@beniz 0.01 is helping the convergence speed. Here is the final result:
{"status":{"code":200,"msg":"OK"},"head":{"method":"/train","job":3,"status":"finished","time":261581.0},"body":{"model":{"repository":"/opt/models/ggnet"},"measure":{"train_loss":3.7744271755218508,"mcll":2.0745445063130498,"accp":0.584966832751539,"acc":0.584966832751539,"f1":0.613829906780978,"recall":0.6681118670119148,"iteration":119999.0,"precision":0.5677056331065806}}}
@beniz is this a reasonable precision?
sunatthegilddotcom
@sunatthegilddotcom
May 23 2017 04:28
@beniz, from your chart, the precision is about the same as AlexNet (57.1%), versus 67.9% for Inception v1 / GoogleNet (BVLC / Google).
alkollo
@alkollo
May 23 2017 05:12
@sunatthegilddotcom hello, do you have your measures at 10000 iterations please? Also, which model did you use for your image classifier? I don't see it in the mllib policy. A precision of 0.56 seems pretty decent. You can also add acc-1 and acc-5 to your measures to get your top-1 and top-5 accuracy.
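
For reference, a minimal sketch of what a /train call could look like with the top-k measure added to the output parameters; the service name, data path and solver values below are illustrative placeholders, not values taken from this conversation:

# note: "imageserv", the data path and the solver values are placeholders
curl -X POST "http://localhost:8080/train" -d '{
  "service": "imageserv",
  "async": true,
  "parameters": {
    "input": { "connector": "image", "width": 224, "height": 224, "test_split": 0.1, "shuffle": true },
    "mllib": { "gpu": true, "solver": { "iterations": 120000, "test_interval": 1000, "base_lr": 0.01 } },
    "output": { "measure": ["acc", "acc-5", "mcll", "f1"] }
  },
  "data": ["/path/to/train/images"]
}'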
alkollo
@alkollo
May 23 2017 05:35

@beniz
replacing constant with xavier in the GoogleNet model weight initialization makes the loss decrease over the iterations:

I0523 05:27:09.102052 4579 caffelib.cc:728] batch size=64
I0523 05:27:09.102087 4579 caffelib.cc:734] iteration=10500
I0523 05:27:09.102111 4579 caffelib.cc:734] train_loss=0.207247
I0523 05:27:09.102119 4579 caffelib.cc:734] mcll=4.14877
I0523 05:27:09.102124 4579 caffelib.cc:734] acc=0.238132
I0523 05:27:09.102131 4579 caffelib.cc:734] acc-5=0.71284
I0523 05:27:09.102138 4579 caffelib.cc:734] f1=0.189909
I0523 05:27:09.102143 4579 caffelib.cc:734] accp=0.238132
I0523 05:27:09.102149 4579 caffelib.cc:734] precision=0.173164
I0523 05:27:09.102154 4579 caffelib.cc:734] recall=0.210239
I0523 05:27:10.192970 4579 caffelib.cc:794] smoothed_loss=0.203191

Seems to be on the right track, could you confirm?
Thanks
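
For reference, a minimal sketch of the change being described above: in the GoogleNet train prototxt, the weight_filler type of the learned layers is switched from "constant" to "xavier". Only one convolution layer is shown, with some of its parameters omitted; the same edit applies to the other convolution and inner-product layers.

layer {
  name: "conv1/7x7_s2"   # one example layer from the GoogleNet prototxt
  type: "Convolution"
  convolution_param {
    num_output: 64
    pad: 3
    kernel_size: 7
    stride: 2
    weight_filler {
      type: "xavier"     # previously: type: "constant"
    }
    bias_filler {
      type: "constant"
      value: 0.2
    }
  }
}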

sunatthegilddotcom
@sunatthegilddotcom
May 23 2017 06:46
@alkollo Thank you very much. I do not have results at 10000 iterations, but I can test it again. I understand 0.56 is fairly decent; just curious what it should be.
alkollo
@alkollo
May 23 2017 07:02
@sunatthegilddotcom It would be useful for comparison with my measures. Anyway, which model do you use?
sunatthegilddotcom
@sunatthegilddotcom
May 23 2017 07:24
@alkollo GoogleNet. Thanks
alkollo
@alkollo
May 23 2017 11:07
ok thanks, how many images per class, on average, are in your imagenet set?