The deadline for submitting your models, packaged as Docker containers, for Round 2 has been extended to April 8th, 2018.
The grading server will expect the code to be a Binder compatible repository.
Predictions will be made on an arbitrary number of
mp3 files of at most
During the execution of the container, all the
mp3 files will be mounted at the location:
Execution of your container will be initiated by executing
For more details on how to package your code as a Binder-compatible repository, please read the documentation:
At runtime, the container will not have access to the external internet, but it will have access to:
and a timeout of
Note that the results from your submitted containers will be announced at the end of Round 2.
`repo2docker` sets up cuDNN as well. If you checked that it works, is it compatible with the Theano backend too? Also, for GPU usage, I think we should run
`nvidia-docker` during the execution of the container. I forgot to add this to the submission and packaging guidelines. Do you want to add a section about that? Happy to merge in a pull request.
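For reference, a minimal sketch of what a GPU run might look like, assuming the image was built with `repo2docker` and that the (v1-style) `nvidia-docker` wrapper is installed; the image name `my-submission` and the entry script are placeholders, not the official grader's values:

```shell
# Build the image from the current repository (same flags as a CPU build).
repo2docker --no-run --image-name my-submission .

# Run with nvidia-docker instead of plain docker, so the NVIDIA driver
# and GPU devices are exposed inside the container.
nvidia-docker run -it my-submission /home/crowdai/run.sh
```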
`repo2docker` will prioritise a
`Dockerfile` if it sees one.
Hi, thanks for the detailed instruction page! I was able to produce a binder-compatible repository, I think ;)
I still have 2 small questions, mainly because I don't have much experience with docker:
The input directory is mounted with the `-v` flag, but not the output file. Does this mean that the output file is only stored in the container's filesystem?
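For what it's worth, here are the two common ways to get a file out of a container; the image name, container name, and paths below are only illustrative, not the challenge's official ones:

```shell
# Option 1: mount a host directory for the output too, and have the
# container write its result there.
docker run -v $(pwd)/output:/output -e OUTPUT_PATH='/output/output.csv' \
  my-image /home/crowdai/run.sh

# Option 2: let the container write to its own filesystem, then copy the
# file to the host after the run with `docker cp`.
docker run --name my-run -e OUTPUT_PATH='/tmp/output.csv' \
  my-image /home/crowdai/run.sh
docker cp my-run:/tmp/output.csv ./output.csv
```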
Thank you for your efforts! :)
Assuming the test mp3 files are in the `data/round_1/` folder, this is what I do:
```shell
export IMAGE_NAME="submission_image_mimbres"
export CONTAINER_NAME="container_name_mimbres"

repo2docker --no-run \
  --user-id 1001 \
  --user-name crowdai \
  --image-name $IMAGE_NAME \
  --debug .

CUDA_VISIBLE_DEVICES=1 docker run \
  -v `pwd`/data/round_1/:/crowdai-payload \
  -e TEST_DIRECTORY='/crowdai-payload/' \
  -e OUTPUT_PATH='/tmp/output.csv' \
  --name $CONTAINER_NAME \
  -it $IMAGE_NAME \
  /home/crowdai/run.sh

docker cp $CONTAINER_NAME:/tmp/output.csv output_round_1.csv
```
`os.path.join`, and the lack of an
`ffmpeg` dependency in the conda environment.
It's our great pleasure to announce that the winners of the second round are the team formed by Jaehun Kim (@jaehun) and Minz Won (@minzwon)! As promised, you are invited to present your results at the Applied Machine Learning Days at EPFL in January 2019. Congrats! :)
You'll find all the scores on the leaderboard of the second round. The 6 systems that have been submitted are now open-source, so you can inspect how others did it! You'll find links to their repositories on our starter-kit repository, along with a summary table of the results. Finally, you can find more results and a discussion on the slides used to announce them.
We apologize for the delay. The main reason was that some of the submitted systems were not able to run properly, and we had to debug them with the authors. We didn't want any of you who made the effort to submit a self-contained system to be left out.
Finally, thanks again to all the participants! It was a pleasure to organize this challenge for you, and we hope you had fun and learned while participating.
Michaël, on behalf of the organizers, Mohanty, Sean, and Marcel.