    Łukasz Kidziński
    @kidzik
    @Scitator we will have 1 invite (free conference ticket) for each of the top three teams.
    Sergey Kolesnikov
    @Scitator
    is anyone already there?
    @spMohanty btw, what's the plan for this year's crowdAI session?
    SP Mohanty
    @spMohanty
    @Scitator : I will be at AMLD from Sunday evening.
    And I will send out an email shortly. But I do have some slots still left in the crowdAI session, where you can briefly present your solution.
    Anyone else from the AI4Prosthetics challenge at AMLD this year?
    yobobobo
    @yobobobo3_twitter
    Hi guys, any news about the challenge this year? Are we going to hold the 3rd edition of the musculoskeletal model competition?
    SP Mohanty
    @spMohanty
    Yes we are!
    Official announcements coming up soon :angel:
    yobobobo
    @yobobobo3_twitter
    Great news! Another interesting competition related to academic research!
    SP Mohanty
    @spMohanty
    ======
    Announcing the Learning2Move challenge for NeurIPS 2019 : https://www.aicrowd.com/challenges/neurips-2019-learn-to-move-walk-around
    brokenBrain
    @brokenBrain
    I've been trying to understand what the target velocity map means in (http://osim-rl.stanford.edu/docs/nips2019/environment/), but I'm having a difficult time. I'm not able to understand the terse description given in the linked webpage. Can anyone point me to more information about the velocity map? It looks like it indicates the target velocity, but I don't understand why the target velocity has to be a map (instead of a single vector value).
    Łukasz Kidziński
    @kidzik
    Hi @brokenBrain, the velocity map is there to help the agent plan ahead. Using the vectors ahead of the agent, you can prepare for a turn, etc.
    brokenBrain
    @brokenBrain
    Can we assume that the distribution of the velocity vectors in the global velocity map is constant? I.e., the velocity fields look like a 2D Gaussian with constant variance but varying mean (the final location). If so, can we assume the velocity field will always have the same Gaussian "shape"? If that's the case, then I think the velocity map might not be necessary.
    Seungmoon Song
    @SeungmoonS_twitter
    Hi, @brokenBrain. A new simulation creates a new velocity map where the final location is generated by a 2D Gaussian, but the velocities toward that location vary independently of the final location. In addition, reaching the final location may not be the end of the task ;)
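    A rough way to picture what is being discussed: a grid of velocity vectors that all point toward a sampled goal, with speed tapering off as the goal gets closer. This is only a toy sketch; the function name, grid size, and taper are made up here and are not the actual osim-rl generator, which (per the reply above) also varies the velocities independently of the goal.

    ```python
    import numpy as np

    def make_toy_vtgt_field(goal, grid=11, extent=5.0, v_max=1.4):
        """Toy 2D target-velocity field: every cell points toward `goal`,
        with speed tapering near the goal. A sketch only, not the official
        osim-rl map generator."""
        xs = np.linspace(-extent, extent, grid)
        gx, gy = np.meshgrid(xs, xs, indexing="ij")
        dx, dy = goal[0] - gx, goal[1] - gy
        dist = np.sqrt(dx**2 + dy**2)
        speed = v_max * np.tanh(dist)  # slow down as we approach the goal
        with np.errstate(invalid="ignore", divide="ignore"):
            vx = np.where(dist > 0, speed * dx / dist, 0.0)
            vy = np.where(dist > 0, speed * dy / dist, 0.0)
        return np.stack([vx, vy])      # shape (2, grid, grid)

    rng = np.random.default_rng(0)
    goal = rng.normal(loc=0.0, scale=2.0, size=2)  # goal drawn from a 2D Gaussian
    field = make_toy_vtgt_field(goal)
    ```

    Each new "simulation" would redraw `goal`, so the field's shape around the goal stays similar but its placement changes.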
    HAMIDULLAH Yasser
    @yhamidullah

    Hi,
    Can someone help me with creating a team for this challenge? I have some collaborators, but we don't really know how to form one.

    thanks in advance!

    Seungmoon Song
    @SeungmoonS_twitter
    Hi, @yhamidullah. You are free to form your team however you like. The only restriction is that only one account from your team can proceed to Round 2 (if you pass Round 1).
    HAMIDULLAH Yasser
    @yhamidullah
    thank you lots @SeungmoonS_twitter
    SP Mohanty
    @spMohanty
    I noticed that many submissions were failing because msgpack was not installed. I have created an issue (stanfordnmbl/osim-rl#191) and will include it as a dependency in the next release. In the meantime, please add msgpack and msgpack_numpy to your conda environment (pip install msgpack msgpack-numpy) before exporting the environment.yml.
    Another note is about the debug mode.
    If you add "debug": true to your aicrowd.json, then the evaluator will give you access to your evaluation logs if your evaluation fails. This can make debugging cycles much easier.
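    A minimal sketch of what that `aicrowd.json` might look like. Only the `"debug"` key is confirmed by the message above; the `"challenge_id"` field and its value are assumptions based on typical AIcrowd starter kits, so check your own starter kit's file for the exact fields it expects.

    ```json
    {
      "challenge_id": "neurips-2019-learn-to-move-walk-around",
      "debug": true
    }
    ```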
    HAMIDULLAH Yasser
    @yhamidullah
    Hi, I have an issue when setting project=True and obs_as_dict=True.
    When I try project=True and obs_as_dict=True locally, after flattening I get an array of shape (339,).
    Sending this for evaluation, I surprisingly get an array of shape (710,). Is there an explanation for this, or a workaround?
    thank you in advance!
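    For reference, "flattening" a dict observation usually means something like the sketch below: recursively concatenating every leaf array into one vector. The helper and the mock observation keys here are illustrative only (they are not the real osim-rl layout); a shape mismatch like 339 vs. 710 typically means the local and remote environments returned differently structured observations before flattening.

    ```python
    import numpy as np

    def flatten_obs(obs):
        """Recursively flatten a nested observation dict (as returned with
        obs_as_dict=True) into a 1-D array. Keys are sorted so the layout
        is deterministic. Sketch only; not the official osim-rl projection."""
        if isinstance(obs, dict):
            return np.concatenate([flatten_obs(obs[k]) for k in sorted(obs)])
        return np.asarray(obs, dtype=float).ravel()

    # Tiny mock observation, just to show the mechanics (keys are made up):
    mock = {"pelvis": {"height": 0.94, "vel": [0.1, 0.0, 0.0]},
            "v_tgt_field": np.zeros((2, 11, 11))}
    flat = flatten_obs(mock)
    print(flat.shape)  # (246,) = 1 + 3 + 2*11*11
    ```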
    Seungmoon Song
    @SeungmoonS_twitter
    Hi, @yhamidullah . Thanks for your feedback. We are working on a new submission/evaluation process to resolve the issues and complexity of the current one. We will announce it through the competition page when ready.
    HAMIDULLAH Yasser
    @yhamidullah
    @SeungmoonS_twitter Thank you lots, I tried so many things and finally figured it out by activating debug mode.
    thanks!
    HAMIDULLAH Yasser
    @yhamidullah
    Hi,
    Is the simulator used for evaluation on crowdAI set to project=True and obs_as_dict=True, as mentioned in the clarification (https://www.aicrowd.com/challenges/neurips-2019-learn-to-move-walk-around)?
    Because I get the opposite output during evaluation, as if it were set to project=False and obs_as_dict=False
    @spMohanty @SeungmoonS_twitter
    asdspal
    @asdspal
    Hi,
    Just submitted the random test controller. Didn't receive any notification of a successful submission. Is that normal?
    HAMIDULLAH Yasser
    @yhamidullah
    Hi, @asdspal sometimes it takes time to evaluate!
    asdspal
    @asdspal
    Thanks, submission is showing on the leaderboard.
    Ryan Amaral
    @Ryan-Amaral
    I decided to try out the new submission method, as I have some difficulties with nvidia-docker, but I get this error: "ServerErrorPrint Wrong client version. Please update to the new version. Read more on https://github.com/stanfordnmbl/osim-rl/docs 400". I find it unclear where to go in the docs, and it was run from a fresh clone from GitHub, so I'm not sure why it would be out of date.
    Ryan Amaral
    @Ryan-Amaral
    Oops, never mind; it turns out I just had to update my opensim-rl environment.
    Hmd
    @eghbalz
    Hi, it seems that ddpg.keras-rl.py is not compatible with the new env (walk-around sim). Is any change to the env required?
    I am guessing the problem comes from the observation dictionary. Is there an obs_as_dict=True option for the L2M2019 env?
    Hmd
    @eghbalz
    @eghbalz Alright, I figured out I had to change the default obs_as_dict value to False in osim.py, as it apparently does not update if you pass it in reset.
    brokenBrain
    @brokenBrain
    In difficulty=0, is the velocity also constant between episodes?
    Sergey Kolesnikov
    @Scitator

    Hi,

    What?
    I just want to share with you my starter kit for this year's competition: a distributed DDPG agent based on the Catalyst.RL framework.
    https://github.com/Scitator/learning-to-move-starter-kit

    Why am I doing this?

    Based on all this, I want to boost community activity in this competition and in RL overall.
    For more RL news and insights on this competition, you can find me on Twitter.
    Thanks!

    Sergey Kolesnikov
    @Scitator
    btw, @spMohanty @kidzik are there any docs for each parameter in the 97D observation dict?
    like, what do "f"/"l"/"v" mean, etc.?
    SP Mohanty
    @spMohanty
    @SeungmoonS_twitter : Was there a doc describing the observations somewhere?
    Seungmoon Song
    @SeungmoonS_twitter
    @Scitator Hi, Sergey. f, l and v are muscle force, length and velocity.
    Seungmoon Song
    @SeungmoonS_twitter
    Hopefully, these pages/notes give some explanation.
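    To make the f/l/v convention concrete, here is a tiny sketch of unpacking muscle state from a dict shaped like the one discussed above. The muscle names and nesting are assumptions for illustration, not the official 97D osim-rl layout.

    ```python
    # Hypothetical slice of the observation dict; key names are made up.
    leg_obs = {
        "HAB": {"f": 0.12, "l": 1.01, "v": -0.05},  # hip abductor
        "GLU": {"f": 0.40, "l": 0.98, "v": 0.02},   # gluteus
    }

    def muscle_summary(muscles):
        """Unpack force (f), length (l), and velocity (v) per muscle."""
        return {name: (m["f"], m["l"], m["v"]) for name, m in muscles.items()}

    print(muscle_summary(leg_obs)["GLU"])  # (0.4, 0.98, 0.02)
    ```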
    Ung Hee Lee
    @unghee_lee_twitter
    I tried to use the ddpg train_arm.py example to run the walking task; however, the reward is not increasing (it stays at ~6). Has anyone tried this approach and achieved better results?
    Ung Hee Lee
    @unghee_lee_twitter
    I’m new to OpenSim and trying to model an active prosthesis with two joints. @SeungmoonS_twitter could you give me some pointers on where to start?
    Seungmoon Song
    @SeungmoonS_twitter
    @unghee_lee_twitter You can check out our last year's osim-rl repository (https://github.com/stanfordnmbl/osim-rl/tree/ver2.1). It is a human model with a passive below-knee prosthesis.
    Ung Hee Lee
    @unghee_lee_twitter
    @SeungmoonS_twitter thank you! I’ll look into it.
    Sergey Kolesnikov
    @Scitator
    Hi, @SeungmoonS_twitter @spMohanty @kidzik
    Is it possible to use the osim dict observation, rather than the proposed projection?
    Seungmoon Song
    @SeungmoonS_twitter
    Hi @Scitator. The evaluations will be done with the dict observation you get with the project=True and obs_as_dict=True setting, which I think is what you are calling the proposed projection. Please let us know if you have any concerns with it.
    Sergey Kolesnikov
    @Scitator
    @SeungmoonS_twitter I am just wondering whether it is possible not to use project=True during evaluation.
    I have an intuition that the full raw observation could be better for agent training.
    Seungmoon Song
    @SeungmoonS_twitter
    @Scitator Thanks for sharing your thoughts. You are probably right that it would be better for training. However, we designed the current observation dict because it seems closer to the biological sensory data used by humans, and previous studies (e.g. https://youtu.be/ZkOrRcc4dWg) show that it is possible to control locomotion with those data. We will most likely keep the current setting unless there is a biology-based concern or a fundamental limitation in training.
    brokenBrain
    @brokenBrain
    I'm having trouble understanding the second submission option (the one without the Docker container). Can we submit as many times as we want, and will our score be the max of the submissions? Or will it be an average over a certain number of episodes?
    Seungmoon Song
    @SeungmoonS_twitter
    @brokenBrain The final score is the maximum of all submissions.