    brianbrost
    @brianbrost
    cb1a443f9613f11388c1c1aac703f7f6 training_set_1.tar.gz
    c0508e75ea300fd0e04b385d83a4ff04 training_set_2.tar.gz
    66773b8a1f6d7a3034414afa223fe617 training_set_3.tar.gz
    brianbrost
    @brianbrost
    99a88fa87ffadc40d1777d002e830805 training_set_4.tar.gz
    a7193e27165ab849fb8e70156d9aa265 training_set_5.tar.gz
    brianbrost
    @brianbrost
    65d3b5731f1f735ccb8f7de1128c3354 training_set_6.tar.gz
    b2e6e6c0989b9995672cc92219ac4bd8 training_set_7.tar.gz
    1d716c77bcc64ca197372a89c7963d3d training_set_8.tar.gz
    58f1e0b1e3d2c91edef3903199f15e9a training_set_9.tar.gz
    @rstudent_gitlab please let me know if your checksums are different, and if you can extract any of the split files?
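
[Aside: a minimal sketch for checking local copies against the checksums posted above, assuming the archives sit in the current directory. hashlib reads the files in chunks, so the ~6 GB archives never need to fit in memory; fill in the remaining expected values from the list above.]

import hashlib

# Expected MD5s copied from the list above; add the remaining archives as needed.
EXPECTED_MD5 = {
    "training_set_1.tar.gz": "cb1a443f9613f11388c1c1aac703f7f6",
    "training_set_9.tar.gz": "58f1e0b1e3d2c91edef3903199f15e9a",
}

def md5sum(path, chunk_size=1 << 20):
    """Compute an MD5 digest one chunk at a time (the archives are ~6 GB each)."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED_MD5.items():
    actual = md5sum(name)
    print("OK" if actual == expected else "MISMATCH", name, actual)
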
    RStudent
    @rstudent_gitlab
    I know I am being a bother, but could someone else who has downloaded the files successfully please confirm the md5sums? At least then I will know the corruption error is valid.
    brianbrost
    @brianbrost
    I just downloaded those from the competition website, so the md5sums are the ones I would expect anyone else to get too. Are some of your checksums different, or are all of them different?
    @rstudent_gitlab
    RStudent
    @rstudent_gitlab
    Thank you so much for helping me out on this @brianbrost , @spMohanty . Well, my md5sums are different for both the main archive and the splits. For example, ~/workspace/music/train$ md5sum training_set_1.tar.gz
    9bdea6a8c4a9e47b47bace227bad252f training_set_1.tar.gz
    md5sum training_set_9.tar.gz
    b1b4393806a9d90711e619bfd08dd2f7 training_set_9.tar.gz
    So far I have tried the on-click download, wget, and curl; no luck so far. Could you please give me the actual sizes of these files? I will try clean downloads again and monitor them. Maybe this time I will get them to work :) Thanks a lot for looking into it.
    brianbrost
    @brianbrost

    @rstudent_gitlab

    with ls -l, I get the following sizes:

    6044161773 Dec 12 15:09 training_set_0.tar.gz
    6044989394 Dec 12 15:15 training_set_1.tar.gz
    6042349689 Dec 12 15:21 training_set_2.tar.gz
    6043674073 Dec 12 15:27 training_set_3.tar.gz
    6042105510 Dec 12 15:34 training_set_4.tar.gz
    6043173901 Dec 12 15:40 training_set_5.tar.gz
    6042906018 Dec 12 15:46 training_set_6.tar.gz
    6046086003 Dec 12 15:52 training_set_7.tar.gz
    6043512819 Dec 12 15:58 training_set_8.tar.gz
    6045573656 Dec 12 16:04 training_set_9.tar.gz
    60438525791 Dec 23 00:14 20181113_training_set.tar.gz

    Hope you manage to figure out the problem, and let me know if there's any other way I can help. Unfortunately I don't have any useful suggestions, except to try to re-download, which I guess is what you're trying right now. Please let us know if you sort out the problem!
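
[Aside: before re-running md5sum on ~6 GB archives, a quick size comparison against the ls -l listing above catches truncated downloads cheaply. Same assumption as before that the files sit in the current directory; extend the dictionary with the remaining sizes from the listing.]

import os

# Expected sizes in bytes, copied from the ls -l listing above.
EXPECTED_BYTES = {
    "training_set_1.tar.gz": 6044989394,
    "training_set_9.tar.gz": 6045573656,
}

for name, expected in EXPECTED_BYTES.items():
    actual = os.path.getsize(name)
    status = "OK" if actual == expected else "MISMATCH (got %d)" % actual
    print(status, name, "expected", expected)
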

    RStudent
    @rstudent_gitlab
    @brianbrost @spMohanty I cannot thank you enough. Back in business using the wonder of modern technology: Aria2 https://aria2.github.io/
    Now time to use the other wonder of modern technology: deep nets. Thanks again!
    Joey
    @joychengzhaoyue_twitter
    Hi
    I'm wondering which time zone the Jan 4th deadline is in? Thanks!
    brianbrost
    @brianbrost
    @spMohanty What timezone is the competition deadline currently set for?
    SP Mohanty
    @spMohanty
    @brianbrost : It's in UTC. Looking at the deadline, it's on Jan 4th, 12:00 UTC.
    Sainath Adapa
    @sainathadapa
    Just want to get a confirmation: The objective is to predict if a track was played briefly (skip_2 being true). It is not to predict if the track was skipped (that is not_skipped being false). Am I correct?
    brianbrost
    @brianbrost
    @sainathadapa that's correct. Apologies for the delay in replying; I don't get a notification unless you @ my username.
    Sainath Adapa
    @sainathadapa
    thanks
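
[Aside: a minimal illustration of the target definition confirmed above, assuming a pandas DataFrame loaded from one of the session-log CSVs with the dataset's skip_2 and not_skipped columns; the file name here is a hypothetical placeholder.]

import pandas as pd

# Hypothetical path to one of the training session-log CSVs.
logs = pd.read_csv("session_logs.csv")

# The label to predict is skip_2 (track played only briefly),
# not the negation of not_skipped, which is a broader "not played in full" signal.
target = logs["skip_2"].astype(int)
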
    andres ferraro
    @andrebola__twitter
    @brianbrost @spMohanty I get the following error when submitting a new solution: Error : An error occurred (404) when calling the HeadObject operation: Not Found
    The previous submission got the same error, so I think there might be some problem with the service.
    SP Mohanty
    @spMohanty
    Dear @/all ,
    Some of the updates from our end did break a few things with the evaluator.
    I am trying to get those fixed now, and will respond back in an hour or so.
    SP Mohanty
    @spMohanty
    Okay, submissions are working again! Sorry for the delay in the fix!
    Best of luck!
    Joey
    @joychengzhaoyue_twitter
    @spMohanty @brianbrost Hi, I'm wondering what portion of the entire test set the public leaderboard uses? Thanks a lot!
    SP Mohanty
    @spMohanty
    @joychengzhaoyue_twitter : The current leaderboard uses only 50% of the whole test set. The final leaderboard will either use the other 50% of the test set, or the whole test set. This is to be determined after a discussion with @brianbrost . But we have computed both anyway for all the submissions.
    Joey
    @joychengzhaoyue_twitter
    Cool, Thanks a lot for the reply!
    olivierjeunen
    @olivierjeunen

    @spMohanty @brianbrost
    Hi, I have two small questions.
    First: when will the leaderboard be final (showing scores including the other 50% of the test set)?
    Second: the paper deadline is Friday January 11th 23:59 AoE, but when can we expect the paper reviews along with the notification?

    Thanks already for your time!

    brianbrost
    @brianbrost
    Hi, we expect to have the paper notifications ready within 7 days, but we're meeting Monday and Tuesday to confirm the notification deadline.
    @olivierjeunen Also, the leaderboard will hopefully be finalized tomorrow, but this is also pending discussions tomorrow.
    olivierjeunen
    @olivierjeunen
    Perfect, thank you for the clarification.
    brianbrost
    @brianbrost
    @olivierjeunen The results on the private leaderboard have now been published. Congratulations on your placement! We actually expect to have the paper reviews and notifications completed within 4 days of the submission deadline (i.e. end of day the following Tuesday)
    olivierjeunen
    @olivierjeunen
    @brianbrost, great. Thanks!
    Just to check, will post-deadline submissions still be evaluated on the test set as well?
    brianbrost
    @brianbrost

    @olivierjeunen I don't know if crowdAI will still evaluate submissions afterwards, but they won't count towards the final competition leaderboard position.

    @spMohanty Can you answer this question?

    Sainath Adapa
    @sainathadapa
    @brianbrost I'm using the following LaTeX template for the paper: \documentclass[sigconf]{acmart}. Can you tell me if this is the right one?
    brianbrost
    @brianbrost
    @sainathadapa yeah that's correct!
    Sainath Adapa
    @sainathadapa
    thanks
    Sainath Adapa
    @sainathadapa
    @brianbrost Sorry about these last-minute questions, but can you confirm whether the baseline model you submitted, which scored 0.537 MAA / 0.742 FPA, is the following: the skipping behavior of the last track in the first half of the session is used as the prediction for all the tracks in the second half?
    olivierjeunen
    @olivierjeunen
    Dear @brianbrost , do you have any update regarding the workshop review and notification process?
    brianbrost
    @brianbrost
    @olivierjeunen Notifications will be sent out today!
    @sainathadapa Yes I can confirm that what you wrote for the baseline model is correct.
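
[Aside: a sketch of the baseline rule confirmed above (the one reported at 0.537 MAA / 0.742 FPA), reconstructed from the description in this thread rather than taken from the organizers' code. It assumes pandas session logs with session_id, session_position, session_length and skip_2 columns, and treats the observed first half as the first floor(length / 2) positions.]

import pandas as pd

def baseline_predictions(logs: pd.DataFrame) -> pd.Series:
    """Repeat the skip_2 value of the last observed first-half track
    as the prediction for every track in the second half of the session."""
    preds = []
    for _, session in logs.groupby("session_id", sort=False):
        session = session.sort_values("session_position")
        half = session["session_length"].iloc[0] // 2
        first_half = session[session["session_position"] <= half]
        second_half = session[session["session_position"] > half]
        last_observed = int(first_half["skip_2"].iloc[-1])
        preds.append(pd.Series(last_observed, index=second_half.index))
    return pd.concat(preds)
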
    zwarshavsky
    @zwarshavsky
    Hello folks! I am using the full track_features CSV listing 1.8 million songs and their features; however, I am unable to use the track ID to run a Spotify API query. The track ID format should look like e.g. 6rqhFgbbKwnb9MLmUQDhG6, but the format I am seeing in the CSV is e.g. t_cf0164dd-1531-4399-bfa6-dec19cd1fedc. Can anyone assist? Thank you!
    brianbrost
    @brianbrost
    Hi @zwarshavsky, unfortunately for licensing reasons we had to remove the public track IDs from the dataset. For this reason the IDs are internal to the dataset and can't be used in conjunction with the Spotify API. Sorry for the inconvenience!