    SP Mohanty
    @spMohanty
    @alejgomez : It's a local problem at your end. As I mentioned, you will need to ensure that no file in the git history is greater than the per-file limit of 10MB.
    I could help you debug this if you zip your whole repository folder (including the .git folder) and send it to me somehow at mohanty@aicrowd.com
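
    For anyone hitting the same limit: a minimal sketch of how one could scan the git history for blobs over 10MB (assuming git is on the PATH; this script is illustrative, not part of the starter kit):

    import subprocess

    LIMIT = 10 * 1024 * 1024  # the per-file limit of 10MB mentioned above

    # Every object reachable from any ref; blobs and trees are listed as "<sha> <path>".
    listing = subprocess.run(
        ["git", "rev-list", "--objects", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for line in listing:
        parts = line.split(maxsplit=1)
        if len(parts) != 2:
            continue  # commit objects carry no path
        sha, path = parts
        # Only blobs (file contents) have a size worth checking.
        otype = subprocess.run(["git", "cat-file", "-t", sha],
                               capture_output=True, text=True).stdout.strip()
        if otype != "blob":
            continue
        size = int(subprocess.run(["git", "cat-file", "-s", sha],
                                  capture_output=True, text=True).stdout.strip())
        if size > LIMIT:
            print(f"{size / 1e6:.1f} MB\t{path}")

    Any file it flags has to be rewritten out of the history (for example with git filter-branch or the BFG repo-cleaner); deleting it in a new commit is not enough, since the blob remains in earlier commits.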
    DougM
    @mengdong
    just curious, how frequently is the leaderboard updated?
    SP Mohanty
    @spMohanty
    leaderboard is updated on every new submission
    DougM
    @mengdong
    I have 2 successful evaluations but am not seeing my name there
    SP Mohanty
    @spMohanty
    @mengdong : I do see your name on the leaderboard. You will have to scroll down after clicking on the Load More button
    DougM
    @mengdong
    Ah, I see it now. Thanks. Wondering why floor and reward are 0; my evaluation result says otherwise
    DougM
    @mengdong
    @spMohanty do you have access to my previous submissions?
    DougM
    @mengdong
    Ah, I misspelled the author field in aicrowd.json, so it is different from my AIcrowd ID. I have changed that and will try again
    SP Mohanty
    @spMohanty
    @mengdong : Are you running the evaluations in debug mode ?
    DougM
    @mengdong
    yes
    debug mode won’t count?
    SP Mohanty
    @spMohanty
    Debug mode evaluations will always score 0
    you will have to remove the debug key
    and then submit
    DougM
    @mengdong
    I see. Thanks
    I have a separate thread in the discussion forum too, but is it possible to delay the launch of the environment a bit?
    for example, add a wait of x seconds in env.sh
    SP Mohanty
    @spMohanty
    Is it because of the TimeoutException ?
    DougM
    @mengdong
    yes
    SP Mohanty
    @spMohanty
    in which case you can pass in a timeout param in your env instantiation
    I thought someone had added it to the starter kit
    DougM
    @mengdong
    which file should I edit?
    DougM
    @mengdong
    I don’t know how to proceed, since the submission I uploaded only controls the agent; as a user, I have no control over the evaluation environment @spMohanty
    SP Mohanty
    @spMohanty
    @mengdong : Please edit your aicrowd.json file, and remove the "debug" key from it
    and make a new submission
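
    To illustrate, a hypothetical aicrowd.json after the fix, with the "debug" key removed entirely and the author field matching the AIcrowd username exactly. The surrounding keys here are assumptions; the starter kit's own file is the authoritative reference:

    {
        "challenge_id": "unity-obstacle-tower-challenge",
        "grader_id": "unity-obstacle-tower-challenge",
        "author": "mengdong"
    }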
    DougM
    @mengdong
    What about the TimeoutException?
    SP Mohanty
    @spMohanty
    Ahh
    You can change this : https://github.com/Unity-Technologies/obstacle-tower-challenge/blob/master/run.py#L28
    to
    env = ObstacleTowerEnv(args.environment_filename, docker_training=args.docker_training, timeout_wait=600)
    Note: I have not tested the code, but it should work.
    If it does, please do send in a pull request to the starter kit
    DougM
    @mengdong
    but this change is on the agent side?
    SP Mohanty
    @spMohanty
    Yeah
    DougM
    @mengdong
    I have submitted with the change you recommended; if it works, I will create a pull request
    DougM
    @mengdong
    @spMohanty no, it doesn't work
    timeout_wait won't help when the evaluation pod is launched before the agent environment
    tduval
    @tduval
    Hi @spMohanty , can I have an update on this issue : https://gitlab.aicrowd.com/tduval/obstacle-tower-challenge/issues/6
    SP Mohanty
    @spMohanty
    @tduval : Done
    @mengdong : We are looking into this with @harperj and @awjuliani
    Arthur Juliani
    @awjuliani

    Dear Obstacle Tower Challenge participant(s),

    We have received an amazing response to the challenge these past two months, with over 200 teams signed up and the top participants reaching an average of over nine floors solved by their agents!

    Due to some initial technical challenges related to automatic evaluation we have decided to extend the contest dates to ensure that all participants can make the best submissions possible. The new deadline for submission to Round 1 is April 30th. We have also extended the dates for Round 2, which will now be open for submissions from May 15th to July 15th. This will give everyone an additional month in each round to experiment with new ideas for training their agents. Please see the official rules here for more details: https://gitlab.aicrowd.com/unity/obstacle-tower-challenge-resources/blob/master/Rules.md

    In summary, the new challenge dates will be:

    Round 1: Feb 11 - Apr 30

    Round 2: May 15 - Jul 15

    Important: Please also note that you’ll need to agree to the updated terms with the new dates via the AICrowd webpage by clicking the challenge “participate” button again before making any new submissions to the challenge.

    DougM
    @mengdong
    @spMohanty @awjuliani I found that the frames per second I get on a single node with multiple ObstacleTowerEnv instances is just too low compared with other environments like Gym Atari or DMLab
    with 48 envs in parallel running on VirtualGL, I am only getting ~1.5k FPS
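
    For context, a minimal sketch of the kind of multi-env setup being described, assuming the obstacle_tower_env package from the starter kit; the binary path and instance count are placeholders, and each instance needs a distinct worker_id so the Unity processes bind to different communication ports:

    from obstacle_tower_env import ObstacleTowerEnv

    NUM_ENVS = 4  # 48 in the setup described above

    # A distinct worker_id per instance keeps the Unity processes'
    # ports from colliding on a single node.
    envs = [
        ObstacleTowerEnv("./ObstacleTower/obstacletower",  # placeholder binary path
                         worker_id=i,
                         docker_training=False,
                         timeout_wait=600)
        for i in range(NUM_ENVS)
    ]

    observations = [env.reset() for env in envs]

    Since each instance renders full 3D frames, per-env throughput is expected to be well below pixel-simple environments like Atari, which is consistent with the numbers reported above.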
    SP Mohanty
    @spMohanty
    The platform is undergoing some maintenance at the moment, and some services might be unavailable.
    We will post an update here when that is resolved.
    SP Mohanty
    @spMohanty
    The platform is back online now
    The submissions which were affected will be requeued automatically
    Arthur Juliani
    @awjuliani

    Hi Obstacle Tower Challenge contestants,

    Per our contest rules, only the final submission made by each team will be accepted for evaluation at the end of each round. In the past, the leaderboard had incorrectly reflected the best submission. As of this morning, April 11, 2019, we have addressed this, and the leaderboard rankings now reflect the last submission (not the best). You may have noticed your ranking go up or down.

    Please be sure that your final submission before the end of each round reflects your best work, and be sure to re-submit that work if necessary.

    Thank you,
    Obstacle Tower Team

    KarolisRam
    @KarolisRam
    I guess the evaluation server is down - "Pod Scheduling Time 🕐 : 10005 secs"
    what was the main reason for this change to leaderboard?
    Arthur Juliani
    @awjuliani
    @KarolisRam It was changed to align with the written rules for the contest.
    Alex Gomez
    @alejgomez
    Hello, is it normal to wait over one hour in evaluation?
    State : Evaluation Pending ⌛
    Total Execution Time ⌛ : 4148s
    Pod State : Pending
    Pod Scheduling Time 🕐 : 3950 secs