    Wah Loon Keng
    @kengz
    Ok just run train for now to see if you can get an agent solving the problem
    Tom Brander
    @dartdog
    That seems to have worked:
    ```
    2019_03_13_111448/dqn_cartpole_t0_s2_target_net_optim.pth
    [2019-03-13 11:20:28,381 INFO logger.py info] Session done and closed.
    [2019-03-13 11:20:28,849 INFO analysis.py analyze_trial] Analyzing trial
    [2019-03-13 11:20:28,896 INFO analysis.py calc_trial_fitness_df] Trial mean fitness: 2.435129776597023
    strength speed stability consistency
    0 0.732499 7.449687 0.808333 0.75
    [2019-03-13 11:20:29,238 INFO analysis.py save_trial_data] Saving trial data to data/dqn_cartpole_2019_03_13_111448/dqn_cartpole_t0
    [2019-03-13 11:20:31,027 INFO viz.py save_image] Graph saved to /home/tom/Documents/Pytorchdemo/SLM-Lab/data/dqn_cartpole_2019_03_13_111448/dqn_cartpole_t0_trial_graph.png
    [2019-03-13 11:20:31,059 INFO analysis.py save_trial_data] All trial data zipped to data/dqn_cartpole_2019_03_13_111448.zip
    [2019-03-13 11:20:31,059 INFO logger.py info] Trial done and closed.
    (lab) tom@pop-os:~/Documents/Pytorchdemo/SLM-Lab$
    ```
    Is there anything I can do with the trained agent?
    Wah Loon Keng
    @kengz
    Sounds like something is wrong with your installation
    The errors earlier were actually reporting a problem with network training
    That'll need a closer look to see what's actually wrong...
    A simpler check is to make sure you're on the latest master branch
    Tom Brander
    @dartdog
    even with the apparent completion of the train function?
    I am on master (last change about three weeks ago?)
    Tom Brander
    @dartdog
    So I gather you are coming to SLO? I'll be bringing this machine. :-) (heavy as it is)
    can meet earlier if you care to?
    Wah Loon Keng
    @kengz
    The issue is specific to neural network updates it seems. Somehow weights are not being updated, but we've tested the master branch and it works as expected. So a closer look would help.
    Yep! Sure thing, we'll have some time before it starts. Or if you have another feel free to try it there too
    Tom Brander
    @dartdog
    Using the time to start reading your fine docs! The interactions between the pieces of the code are so complex that I would have no idea how to fix this without an initial working example and/or an error message to deal with. So I'll try to arrive 1/2 hr early.
    Wah Loon Keng
    @kengz

    You're right. Having many dependencies may eventually give a problem like this, and unfortunately that happened. We did a more thorough dependency rebuild and found some issues, although we couldn't exactly replicate the one you encountered.

    But thanks so much for bringing this up! And sorry for the trouble. I'll try to fix it for you before the talk.

    But if you'd like to try the installation again: could you remove the conda environment, then delete the line with "unityagents" in environment.yml? Then reinstall the environment and retry the demo?
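A rough sketch of those steps in shell, in case it helps. Assumptions: the env is named `lab` (per the shell prompt earlier) and "unityagents" sits on its own line in environment.yml; the conda commands are shown as comments, and the sed edit is demonstrated on a miniature stand-in file.

```shell
# Reinstall sketch. The conda steps themselves:
#   conda env remove -n lab                      # 1) remove the existing env
#   sed -i.bak '/unityagents/d' environment.yml  # 2) drop the unityagents line
#   conda env create -f environment.yml          # 3) recreate the env, then retry the demo
#
# Demonstrating step 2 on a miniature stand-in environment.yml:
cd "$(mktemp -d)"
cat > environment.yml <<'EOF'
dependencies:
  - python=3.6
  - pip:
    - torch
    - unityagents
EOF
sed -i.bak '/unityagents/d' environment.yml
cat environment.yml  # the unityagents line is gone; other deps are untouched
```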
    Wah Loon Keng
    @kengz
    the changes above have been merged into master just FYI
    Tom Brander
    @dartdog
    Thanks so much I'll try it out..(hopefully) reading the other material..
    FYI, my current environment did not have yarn installed within it. I did a conda install yarn in the activated lab env and that got more going. It seems to me that the script does not activate the env correctly to install all components in the lab env.
    and that there are more components perhaps that are conda installable within the conda env?
    Tom Brander
    @dartdog
    FWIW I'm using the setup_ubuntu
    Wah Loon Keng
    @kengz
    Yarn is completely optional for our purpose so no worries
    As long as you install the conda environment, that should be good
    Tom Brander
    @dartdog
    looks like the setup_ubuntu_extra should be executed within the activated lab env?
    Wah Loon Keng
    @kengz
    Oh, those are the optional things I pulled out. We don't need that
    Tom Brander
    @dartdog
    ok
    Tom Brander
    @dartdog
    The installation "looked" like it was more successful. If I wanted to run the extra for yarn, can I do so from within the env? The initial demo yielded:
    ```
    [2019-03-15 10:47:42,580 INFO analysis.py calc_trial_fitness_df] Trial mean fitness: 1.1490101704353928
    strength speed stability consistency
    0 0.986015 1.643359 0.966667 1.0
    [2019-03-15 10:47:42,666 INFO analysis.py save_trial_data] Saving trial data to data/dqn_cartpole_2019_03_15_104139/dqn_cartpole_t0
    [2019-03-15 10:47:42,696 INFO viz.py save_image] Graph saved to /home/tom/Documents/SLM-Lab/data/dqn_cartpole_2019_03_15_104139/dqn_cartpole_t0_trial_graph.png
    [2019-03-15 10:47:42,697 INFO logger.py info] Trial done and closed.
    ```
    is there another step to show game play?
    Wah Loon Keng
    @kengz
    I wouldn't recommend running the extra install, since that script might introduce some broken dependencies, especially the unityagents installation
    Yep demo step 4 shows the example command for "enjoy" mode
    Tom Brander
    @dartdog
    so that is not automatic?
    Wah Loon Keng
    @kengz
    Seems like it's working now the agent strength is close to 1
    Tom Brander
    @dartdog
    also got this at start: `[2019-03-15 10:41:39,137 WARNING logger.py <module>] Couldn't import TensorFlow - disabling TensorBoard logging.`
    Wah Loon Keng
    @kengz
    It's not. Training jobs are usually scheduled together so we'd pick and choose which one to replay/enjoy
    Tom Brander
    @dartdog
    got it
    Wah Loon Keng
    @kengz
    Ahah might wanna see if u have tensorboard in your env. Just uninstall it if u see it. But that warning is harmless (from some internal dependencies)
    Tom Brander
    @dartdog
    trying enjoy..
    `python run_lab.py data/dqn_cartpole_2019_03_15_104139/dqn_cartpole_spec.json dqn_cartpole enjoy@dqn_cartpole_t1_s0`
    gets me `FileNotFoundError: /home/tom/Documents/SLM-Lab/data/dqn_cartpole_2019_03_15_104139/dqn_cartpole_t1_spec.json`
    not sure how to format correctly?
    no tb/tf in env.
    Tom Brander
    @dartdog
    so this:
    `python run_lab.py data/dqn_cartpole_2019_03_15_104139/dqn_cartpole_spec.json dqn_cartpole enjoy@dqn_cartpole_t0`
    went further but terminated:
    ```
    File "/home/tom/anaconda3/envs/lab/lib/python3.6/site-packages/torch/serialization.py", line 366, in load
        f = open(f, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: '/home/tom/Documents/SLM-Lab/data/dqn_cartpole_2019_03_15_104139/dqn_cartpole_t0_net_model.pth'
    ```
    Wah Loon Keng
    @kengz
    Try adding _s0 after t0 in your command. It uses that name to find the model file
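Judging from the filenames in the logs above (e.g. `dqn_cartpole_t0_s2_target_net_optim.pth`), the checkpoints appear to be keyed by spec name, trial index, and session index. This is an inferred pattern, not SLM-Lab's documented API, but it would explain why the `_s0` suffix matters:

```python
def model_path(data_dir, spec_name, trial, session, net="net"):
    """Build the .pth path that enjoy mode appears to look for (inferred pattern)."""
    return f"{data_dir}/{spec_name}_t{trial}_s{session}_{net}_model.pth"

# enjoy@dqn_cartpole_t0 omits the session, so the loader looked for
# ..._t0_net_model.pth and failed; enjoy@dqn_cartpole_t0_s0 would resolve to:
print(model_path("data/dqn_cartpole_2019_03_15_104139", "dqn_cartpole", 0, 0))
# → data/dqn_cartpole_2019_03_15_104139/dqn_cartpole_t0_s0_net_model.pth
```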
    Zehan Song
    @qazwsx74269
    [warning viz.py]: failed to generate graph. This info appeared when I ran a demo. Did it mean that I should check orca?
    Sauhaarda Chowdhuri
    @sauhaardac
    How can I go about adding my custom environment for use with SLM lab?
    Ken Otwell
    @OtwellResearch
    Any chance there's a Windows install for the Strange Loop Machine? Or any advice/gotchas if I try it myself?
    Wah Loon Keng
    @kengz
    @qazwsx74269 yep, assuming you're running the program with a GUI and not on a headless server. Otherwise prepend your command like so: `xvfb-run -a python run_lab.py ...`
    @sauhaardac the best way is to extend the gym interface, then register it under gym; then you can call the environment by just providing its name. Example from Vizdoom: https://github.com/kengz/SLM-Lab/blob/master/slm_lab/env/vizdoom/vizdoom_env.py
    @Jumonji no, but some people have tried installing the dependencies directly on Windows but failed to get rendering to work. Another safe option is to just use a VM that runs linux.
    Chris Joubert
    @ChrisJoubert2501
    Hello, I have been exploring the SLM lab for the last 2 weeks, and it's really a great tool.
    I found a small inconsistency in the documentation.
    The "action_policy" setting in the spec file is specified as different things on the following two pages:
    https://slm-lab.gitbook.io/slm-lab/development/algorithms/reinforce - "action_policy string specifying which policy to use to act. For example, "Categorical"..."
    https://slm-lab.gitbook.io/slm-lab/development/algorithms - "specifies how the agent should act. e.g. "epsilon_greedy"..."
    I assume the latter is out of date.
    Is it better to post this kind of thing here, or make an issue on git?
    HenryKautz
    @HenryKautz

    Hi, thanks for SLM-Lab! I found a bug in the setup_macOS script due to changes to Homebrew: the following lines cause an error. The command `brew cask --version` always fails because `brew cask` no longer accepts a --version option, and then `brew tap caskroom/cask` fails because cask is installed by default and no longer uses caskroom. The error message printed is: 'caskroom/cask was moved. Tap homebrew/cask-cask instead'

    ```sh
    if brew cask --version | grep --quiet "Cask" >/dev/null; then
      true
    else
      brew tap caskroom/cask
    fi
    ```
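A possible patch for that check, sketched as a function. This is an untested assumption: caskroom/cask was retired in favor of the homebrew/cask tap, so the fallback should tap that instead.

```shell
# Hypothetical replacement for the setup_macOS snippet above: same guard,
# but fall back to the tap that replaced caskroom/cask.
ensure_cask() {
  if brew cask --version >/dev/null 2>&1; then
    true  # cask subcommand already works
  else
    brew tap homebrew/cask  # caskroom/cask was moved; use the new tap
  fi
}
```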

    Azer-ai
    @Azer-ai
    Hi, I am new to RL and I found the content very well planned. I also ran into the same bug as @HenryKautz. Any advice on dealing with this error is appreciated.