Sam Stites
@stites
The same goes for parameters; those, however, have some rough guidelines you can follow (e.g. "discount if you are dealing with an infinite-horizon problem")
Also, start simple (it looks like you are already doing that with VPG) and move towards things like actor-critic once you feel comfortable
Maksim Kretov
@dd210
I tried to launch rllab with Docker: I built the image from docker/dockerfile, but when I try to run Python files in containers built from that image, I get an error that the module rllab was not found. Am I missing something?
Maksim Kretov
@dd210
I am. It is a feature :) Thanks
Maksim Kretov
@dd210
Can anyone share an example of how to open a .pkl file saved by rllab's lite experiment?
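For reference, a minimal sketch of one way to inspect such a snapshot, assuming it was written by rllab's default logger (which saves via joblib); the path below is just a placeholder:

import joblib

# Placeholder path; rllab typically writes params.pkl under data/local/<experiment>/.
data = joblib.load("data/local/experiment/params.pkl")
print(data.keys())       # often includes 'algo', 'policy', 'env', 'itr'
policy = data["policy"]  # the trained policy object, usable for rollouts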
neverdie88
@neverdie88
One important question I have wanted to ask for quite a long time: in deep RL, should we normalize the returns/advantages?
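For context, a common recipe is to standardize the advantages within each batch rather than the raw returns (rllab's batch algorithms expose a center_adv flag for this, if I recall correctly); a minimal NumPy sketch with placeholder values:

import numpy as np

advantages = np.array([2.0, -1.0, 0.5, 3.0])  # placeholder per-sample advantages
normalized = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
# Standardizing per batch keeps the gradient scale stable across iterations
# without changing which actions are favored relative to each other.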
James Arambam
@jamesarambam

Has anybody had this issue:

unable to create an Atari environment in the rllab3 environment (on macOS):
env = GymEnv('Pong-v0')
or
env = gym.make('Pong-v0')

Error:
Referenced from: /Users/james/anaconda/envs/rllab3/lib/python3.5/site-packages/atari_py/ale_interface/build/libale_c.so
Expected in: /usr/lib/libstdc++.6.dylib
in /Users/james/anaconda/envs/rllab3/lib/python3.5/site-packages/atari_py/ale_interface/build/libale_c.so

joistick11
@joistick11
Hi guys
Could you please explain to me what the batch_size parameter means?
neverdie88
@neverdie88
batch_size is the total number of (state, action) samples collected per batch
for example, if your max_path_length is 50 and batch_size = 500, you could get 10 trajectories, each of length 50
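To make that concrete, here is a minimal sketch along the lines of rllab's trpo_cartpole example; the parameter values are illustrative, not recommendations:

from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

env = normalize(CartpoleEnv())
policy = GaussianMLPPolicy(env_spec=env.spec)
baseline = LinearFeatureBaseline(env_spec=env.spec)
algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=500,      # ~500 (state, action) samples collected per iteration
    max_path_length=50,  # so roughly 500 / 50 = 10 trajectories per batch
    n_itr=40,
)
algo.train()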
neverdie88
@neverdie88
Does anyone know where REINFORCE is implemented in rllab?
joistick11
@joistick11
Do you use python?
neverdie88
@neverdie88
yes
neverdie88
@neverdie88
Thanks. One more question: is there any implementation of actor-critic with a compatible value function in rllab?
neverdie88
@neverdie88
hi, has anyone tried to use batch normalization or weight normalization in rllab?
neverdie88
@neverdie88
Does anyone here know if there is any implementation of adaptive normalization with Pop-Art?
neverdie88
@neverdie88
I got the error [Errno 7] Argument list too long when trying to run with cloudpickle in rllab.
Can someone here help me?
Thanks
BartKeulen
@BartKeulen
Hey,
I have been having some trouble installing rllab; I get the same error as @yusenzhan. I have put the full log in this gist.
I installed Theano, Lasagne, and Plotly manually using pip install git+<line from environment.yml>. A lot of other packages were not installed because of atari-py; running line 354 from the error log without atari-py, using pip install <all packages without atari-py>, did the job for those packages.
Now most examples do run, but a lot of errors still appear when I try to run an example, mainly from cuda_ndarray and other Theano modules, and for instance plot=True does not work.
Maybe it has something to do with my system (added it to the top of the gist), but it looks like the express installation is broken. Help with installing it correctly would be great.
BartKeulen
@BartKeulen
See bottom of gist for error when running with plot=True
Mircea Mironenco
@mirceamironenco
Hi, any idea why https://gym.openai.com/docs/rl#policy-gradients was removed?
Everett Palmer
@everett780
I am trying to get rllab to work on Ubuntu. I have Anaconda2, ... installed. When I run python trpo_cartpole.py I get the following error:
from theano.tensor.signal import downsample
ImportError: cannot import name 'downsample'
I cannot find downsample.py in theano.tensor.signal. Sorry about the extra posts.
John Liu
@johncliu
@everett780 What version of Theano do you have installed? The downsample module has been moved to theano.tensor.signal.pool. Dependencies are a pain to maintain; you might be better off reinstalling the latest versions.
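If reinstalling does not help, one workaround is an import shim in whatever module still references downsample; a rough sketch, assuming only the 2-D max-pooling op is needed (the two functions' signatures differ slightly, so treat this as a starting point):

# Newer Theano moved pooling from theano.tensor.signal.downsample to
# theano.tensor.signal.pool; fall back to the old location if it is missing.
try:
    from theano.tensor.signal.pool import pool_2d as max_pool_2d
except ImportError:
    from theano.tensor.signal.downsample import max_pool_2d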
Everett Palmer
@everett780
Hello John, thanks for the help. My version of Theano is 0.9.0rc4 (March 13, 2017). I downloaded it from github.com/Theano.
The command 'locate downsample' returned nothing.
John Liu
@johncliu
@everett780 I suspect you have an older and incompatible version of Lasagne. Try upgrading with this command:
pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip
Everett Palmer
@everett780

Hello John,
I have upgraded theano and Lasagne.
Now if I execute 'locate downsample', it returns: /usr/local/lib/python2.7/dist-packages/theano/tensor/signal/downsample.py
This is better, but it seems odd that it is under python2.7 and not python3.5.

When I execute 'python examples/trpo_cartpole.py', I get an import error:
File "/home/epalmer/rllab-master/rllab/misc/special.py", line 4, in <module>
import theano.tensor.nnet
ImportError: No module named 'theano'

I put some terminal output at this Gist. https://gist.github.com/everett780/44c021d8ef20483875631efcefe429be
Thanks for any help sorting this out,
Everett

Liyue Shen (Shirley)
@liyues
Hello all!
This is my first time here, and I am not sure if this is the right place to ask a question about the rllab codebase. (I am sorry if I did anything wrong.)
I want to ask for help with using the MuJoCo environments within the rllab codebase. I think the codebase is compatible with MuJoCo Pro 1.31, but when I install MuJoCo Pro 1.31 it fails with "ERROR: Could not open disk". I finally found that this is because "The license manager in MuJoCo Pro 1.40 and earlier does not work with newer Macs that have NVMe disks. This was fixed in MuJoCo 1.50." (from the MuJoCo community). I think this means that I cannot work with MuJoCo Pro 1.31 on newer Macs. But after I installed MuJoCo 1.50, I found the rllab codebase does not seem compatible with MuJoCo 1.50.
So I wonder: is there any way to make MuJoCo 1.50 work well with the rllab codebase?
Everett Palmer
@everett780
@johncliu I have been unable to get rllab running on Ubuntu. Is it correct to use Anaconda2 and then Python 3? This seems odd, but it is what the instructions say to do.
neverdie88
@neverdie88
TensorFlow is getting more support for parallelization (as in the A3C algorithm). The TensorFlow implementation in rllab is still in the sandbox. Does anyone know how reliable the sandbox TensorFlow code in rllab is, and whether it is flexible enough to implement something like A3C there?
rabenimmermehr
@rabenimmermehr
Hey @everett780, are you still stuck on the installation? I've just gone through the process of installing it, so I might be able to help.
Everett Palmer
@everett780
Hi @rabenimmermehr, With some help from @johncliu I did finally get rllab to run. I am running Ubuntu 16.04.1 under Oracle VM VirtualBox on Dell Windows 7 and 10 machines.
This is what I did to get the rllab code running. I installed Anaconda3 instead of Anaconda2. I generally followed the manual install instructions, but in the file "environment.yml" I removed the line containing "mujoco_py". In the Ubuntu "System Settings" panel "Software & Updates", on the "Ubuntu Software" tab, I checked the box to enable "Source Code" to be downloadable. Thanks for your offer to help. Everett
Guofei Xiang
@SJTUGuofei

Hi, I'm trying to get rllab to work on my Ubuntu 16.04.2, with Anaconda2 installed. I am following the manual installation instructions, but every time I run the "environment.yml" file, when it gets to
"Building wheels for collected packages: chainer"
it always fails with the error:
" Running setup.py bdist_wheel for chainer ... error
Complete output from command /home/figo/anaconda2/envs/rllab3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-nfrv0o8e/chainer/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpntkbj9mmpip-wheel- --python-tag cp35:"

and then the installation fails with the error:
" ^
error: command 'gcc' failed with exit status 1


Failed building wheel for chainer
Running setup.py clean for chainer
Failed to build chainer"

And finally it outputs:
" b"Compiling /tmp/pip-build-nfrv0o8e/chainer/cupy/core/core.pyx\n\nError compiling Cython file:\n------------------------------------------------------------\n...\n void* data\n int size\n int shape_and_strides[MAX_NDIM * 2]\n\n\ncdef class CArray(cupy.cuda.function.CPointer):\n ^\n------------------------------------------------------------\n\ncupy/core/carray.pxi:14:5: 'cupy' is not declared\n"

----------------------------------------

Command "/home/figo/anaconda2/envs/rllab3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-nfrv0o8e/chainer/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-nejoplu3-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-nfrv0o8e/chainer/

CondaValueError: pip returned an error."

I've commented out the line "- mujoco_py".

I put some terminal output at this Gist.
https://gist.github.com/SJTUGuofei/02165b1d292289f70c639c375bc43779

@everett780 @rabenimmermehr @johncliu Could you help me? Thank you so much!
Everett Palmer
@everett780
Hello @SJTUGuofei, I don't think I can help you, but when I got it to work I did a few things differently. I used Anaconda3 and Python 3, and in the Ubuntu "System Settings" panel "Software & Updates", on the "Ubuntu Software" tab, I checked the box to enable "Source Code" to be downloadable. Until I did this I was getting 'gcc' errors.
Guofei Xiang
@SJTUGuofei
Thank you all the same and best wishes! @everett780
tgangwani
@tgangwani

Hi, I have a question about rllab MuJoCo environments. Do they differ from the Gym MuJoCo environments? In other words, how are the following lines different in rllab?

  1. env = Walker2DEnv()
  2. env = GymEnv("Walker2d-v1")

Looking briefly at the code, the step reward seems to be different. Is one environment more difficult to learn than the other?
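One rough way to check the reward difference empirically is to roll both environments with random actions and compare per-step rewards; a sketch assuming working rllab and gym installs (mean_random_step_reward is a throwaway helper, not part of either API):

from rllab.envs.gym_env import GymEnv
from rllab.envs.mujoco.walker2d_env import Walker2DEnv

def mean_random_step_reward(env, horizon=200):
    # Average per-step reward under uniformly random actions.
    env.reset()
    total = 0.0
    for _ in range(horizon):
        step = env.step(env.action_space.sample())
        total += step.reward
        if step.done:
            env.reset()
    return total / horizon

print(mean_random_step_reward(Walker2DEnv()))          # rllab's native env
print(mean_random_step_reward(GymEnv("Walker2d-v1")))  # Gym env via wrapper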

Ali Nehrani
@schliffen

Hello all,
I installed rllab with Python 2.7, defined the PYTHONPATH, and successfully created the virtual env.
(rllab) bayes@b-13:~$
I can import rllab and some other modules like rllab.algos, but the problem is that, unlike what is said in the installation instructions, there is no "TRPO" in the algos module. In more detail:
In [5]: dir(rllab.algos)
Out[5]:
['__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__path__',
'base']
As can be seen, there is no TRPO.

I have the same problem with some other modules too.
Can anyone help me solve the problem?

Ali Nehrani
@schliffen
Also, I checked the folder "/rllab/algos" and found the "trpo.py" file there!
Ali Nehrani
@schliffen

Let me add that during installation the following error appeared:

Error [Errno 2] No such file or directory while executing command git clone -q https://github.com/openai/gym.git /tmp/pip-5dnToO-build
Cannot find command 'git'

Your comments on solving this error would also be helpful.

Ali Nehrani
@schliffen
I solved this error, and now the following error appears when trying to run the first program in rllab:
(rllab) bayes@bayes-13:~/Academic$ python rllab-1.py
Traceback (most recent call last):
File "rllab-1.py", line 1, in <module>
from rllab.algos.trpo import TRPO
ImportError: No module named rllab.algos.trpo
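That traceback usually means the rllab checkout is not on the interpreter's module search path at run time; a quick way to test that hypothesis from within the script (the path below is a placeholder for wherever rllab was cloned):

import sys
sys.path.insert(0, "/path/to/rllab")  # placeholder clone location
from rllab.algos.trpo import TRPO     # should succeed once the path is right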
Sandeep R Venkatesh
@rvsandeep
(rllabenv) $ python trpo_swimmer.py
ERROR: Could not open disk Press Enter to exit ...
When running rllab on a Mac, MuJoCo (MjPro version 1.31) seems to have this issue. Although the original issue within MuJoCo is fixed in 1.50, I'm not sure how to move forward with my current setup of rllab. Any directions would be appreciated.
kirk86
@kirk86

Hi folks, I'm fairly new to RL and I'm trying to wrap my head around some things, so bear with me if my questions sound stupid. I have stumbled upon some ideas and I'm trying to implement them programmatically. For instance, given an environment like the OpenAI Gym, let's say for the sake of argument we pick the cart-pole. First, is there any way I can get the goal state? Basically, I want the distance between the end effector of the cart-pole and the goal. Second, how can I get the edge of the workspace? What I mean is that I'm trying to determine the distance between the goal and the edge of the workspace.

Again, apologies for my stupid questions, and thank you in advance for any possible help.
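For reference: the classic Gym cart-pole has no explicit goal state (the reward is simply +1 per surviving step), but the workspace bounds are exposed on the observation space; a small sketch assuming CartPole-v0, treating the 2.4 cart-position termination threshold as the "edge":

import gym

env = gym.make("CartPole-v0")
print(env.observation_space.high)  # bounds for [x, x_dot, theta, theta_dot]
obs = env.reset()
x = obs[0]                         # current cart position
x_threshold = 2.4                  # CartPole-v0 terminates when |x| > 2.4
print(x_threshold - abs(x))        # distance from the cart to the workspace edge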

Haitao XU
@xht033
Hi all. Can I ask a question here? I installed the CUDA toolkit and cuDNN when I installed tensorflow-gpu, but rllab's Theano cannot find the cudnn.h file, even when I set the LD_.._path to point to it. Do I have to install CUDA and cuDNN from the official NVIDIA packages? Thanks!!
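One thing worth checking before reinstalling: Theano has its own settings for locating cuDNN, separate from the loader path. A sketch of one way to set them, with placeholder paths (assumes a Theano version that supports the dnn.* flags); the same two settings can also go under a [dnn] section in ~/.theanorc:

import os

# Placeholder paths; point these at wherever cudnn.h and libcudnn live.
os.environ["THEANO_FLAGS"] = (
    "dnn.include_path=/usr/local/cuda/include,"
    "dnn.library_path=/usr/local/cuda/lib64"
)
import theano  # must be imported after the flags are set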
Chelsea Sidrane
@chelseas
@dementrock Can you explain to me what the intention behind the ParamLayer class is? I'm confused about what's happening and why.