This effect occurs from time to time, but it's not always visible in such an obviously semantic way. We could argue that it's not a "real" adversarial example, because the human might be confused as well!
I had a lot of RETRIES_EXCEEDED errors during the submission, even though things worked well locally. I'd still like to see what the problem was. Can I get the logs? @ffledgling
We will release an updated leaderboard with the placement of all submissions in the next week.
If you are coming to NIPS and would like to present a poster at our workshop, please send your team ID together with a short abstract to email@example.com.
Thanks to everyone for making this competition an exciting adventure!
Congratulations to all the winners, I look forward to reading about your defence and attack strategies. Also, a huge thank you to the organizers, it is amazing how you kept the competition running with this many participants and submissions. Hopefully, see you next year!
Congratulations to everyone, and thanks for hosting this cool competition! Thanks also to the crowdAI team - after this stress test you're probably well prepared to hold the next one ;)
Looking forward to NIPS!
Looking forward to reading the top solutions of each track. At the same time, I hope the organizers announce the final results of all teams soon.
Just a reminder: I am looking forward to the updated leaderboard;) @wielandbrendel
When will the organizers update the leaderboard :(
@walegahaha : Hopefully tomorrow :angel: ! @MasterScrat is the point person on that one ;)
The final results in the leaderboard are not showing up correctly for me.
@LarsHoldijk : it's not updated yet
Any progress on the leaderboard update? ;)
Leaderboard updated ;)
@MasterScrat : Can you confirm that the data on the leaderboard is indeed the latest version, and then change the round name from tentative to final? :D
Thanks for your hard work!
By the way, is the leaderboard in the targeted attack track not updated yet?
Wow, these numbers are low. I guess it's super hard to be robust against untargeted attacks. This dataset was quite unforgiving!
@csy530216 : It is now.
@spMohanty Thank you for your hard work!
Some of the Robust Model Track submissions were missing
The leaderboards are now final!
Thanks! Looking forward to the presentations.
@MasterScrat Could you check the score?
@kyungyul.kim88_gitlab are you noticing any problem?
@MasterScrat please check and remove the qualified participant's score (zhou-liang) from the round 1 leaderboard to clear it.
Hi~ Could anyone on the defense side share their code (make the GitLab repo public)? I want to visualize the difference between the gradients of standard models and adversarially robust models, but the result for the resnet50_alp baseline seems a bit strange ;)
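For anyone trying the same kind of gradient visualization, here is a minimal, framework-free sketch of the idea: take the gradient of the predicted logit with respect to the input pixels and normalize its magnitude into a saliency map. A toy linear model stands in for a real network here (every name below is hypothetical; with an actual checkpoint like the resnet50_alp baseline you would compute the gradient via autograd instead):

```python
import numpy as np

def input_gradient_saliency(x, weights, target):
    """Saliency map: |d logit_target / d input|, scaled to ~[0, 1].

    Toy stand-in for the real experiment: for a linear model with
    logits z = weights @ x, the gradient of z[target] w.r.t. x is
    simply weights[target]. A real robust model would need autograd.
    """
    grad = weights[target]            # d z_target / d x for a linear model
    sal = np.abs(grad)
    return sal / (sal.max() + 1e-12)  # strongest "pixel" maps to ~1

rng = np.random.default_rng(0)
side = 8                              # tiny 8x8 "image"
x = rng.random(side * side)
weights = rng.standard_normal((10, side * side))
target = int(np.argmax(weights @ x))  # predicted class
sal = input_gradient_saliency(x, weights, target).reshape(side, side)
print(sal.shape)
```

The intuition being probed in the thread is that for adversarially robust models this saliency map tends to look perceptually aligned with the object, while for standard models it looks noisy.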