Sam Stites
Same goes for parameters; however, those have some rough guidelines you can follow (e.g. "discount if you are dealing with an infinite-horizon problem").
Also, start simple, but it looks like you are already doing that with VPG; start moving towards things like actor-critic after you feel comfortable.
Maksim Kretov
I tried to launch rllab with Docker: I built the image from docker/dockerfile. But when I try to run Python files in containers built from that image, I get an error that the module rllab was not found. Am I missing something?
Maksim Kretov
I am. It is a feature :) Thanks
Maksim Kretov
Can anyone share an example of how to open a .pkl file saved by rllab's lite experiment?
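[Editor's note: a minimal sketch, not rllab-specific. It assumes the file is a standard pickle; some rllab versions save snapshots via joblib, in which case joblib.load works similarly. The snapshot contents below are hypothetical.]

```python
import pickle

# Hypothetical snapshot: rllab experiments typically write a file such as
# params.pkl containing a dict of training state (exact contents vary).
snapshot = {"itr": 0, "policy_params": [0.1, 0.2]}
with open("params.pkl", "wb") as f:
    pickle.dump(snapshot, f)

# Loading it back is just a standard pickle load:
with open("params.pkl", "rb") as f:
    data = pickle.load(f)

print(sorted(data.keys()))  # ['itr', 'policy_params']
```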
One important question I have wanted to ask for quite a long time: in deep RL, should we normalize the returns/advantages?
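[Editor's note: a common practice, not specific to rllab, is to standardize the advantages within each batch to reduce gradient variance, while leaving returns unnormalized when they serve as value-function targets. A minimal numpy sketch:]

```python
import numpy as np

def standardize(advantages, eps=1e-8):
    """Zero-mean, unit-variance advantages within a batch.

    A common variance-reduction trick in policy-gradient methods;
    eps guards against division by zero for constant advantages.
    """
    adv = np.asarray(advantages, dtype=np.float64)
    return (adv - adv.mean()) / (adv.std() + eps)

a = standardize([1.0, 2.0, 3.0, 4.0])
print(a.mean(), a.std())  # ~0.0 and ~1.0
```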
James Arambam

Has anybody had this issue?

Unable to create an Atari environment in the rllab3 environment (on macOS):
env = GymEnv('Pong-v0')
env = gym.make('Pong-v0')

Error:
Referenced from: /Users/james/anaconda/envs/rllab3/lib/python3.5/site-packages/atari_py/ale_interface/build/
Expected in: /usr/lib/libstdc++.6.dylib
in /Users/james/anaconda/envs/rllab3/lib/python3.5/site-packages/atari_py/ale_interface/build/

Hi guys
Could you please explain to me what the batch_size parameter means?
batch_size is the total number of (state-action) samples in a batch
for example, if your max_path_length is 50 and batch_size = 500, you could get 10 trajectories, each of length 50
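[Editor's note: the relationship above can be illustrated in plain Python (not rllab API); the numbers match the example given.]

```python
# Illustration of the batch_size / max_path_length relationship
# described above (plain Python, not rllab API).
batch_size = 500       # total (state, action) samples per iteration
max_path_length = 50   # cap on a single trajectory's length

# If every trajectory runs to the cap, you collect:
full_trajectories = batch_size // max_path_length
print(full_trajectories)  # 10

# Trajectories may terminate early, in which case more (shorter)
# trajectories are collected until batch_size samples are reached.
```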
Does anyone know where REINFORCE is implemented in rllab?
Do you use python?
Thanks. One more question: is there any implementation of actor-critic with a compatible value function in rllab?
Hi, has anyone tried to use batch normalization or weight normalization in rllab?
Does anyone here know if there is an implementation of adaptive normalization with Pop-Art?
I got the error "[Errno 7] Argument list too long" when trying to run with cloudpickle in rllab.
Can someone here help me?
I have been having some trouble installing rllab; I get the same error as @yusenzhan. I have put the full log in this gist.
I installed Theano, Lasagne and Plotly manually using pip install git+<line from environment.yml>. A lot of other packages were not installed because of atari-py; running line 354 from the error log without atari-py, using pip install <all packages without atari-py>, did the job for those packages.
Now most examples do run, but a lot of errors still appear when I try to run an example, mainly from theano's cuda_ndarray and other theano modules, and for instance plot=True does not work.
Maybe it has something to do with my system (added to the top of the gist), but it looks like the express installation is broken. Help with installing it correctly would be great.
See the bottom of the gist for the error when running with plot=True.
Mircea Mironenco
Hi, any idea why was removed?
Everett Palmer
I am trying to get rllab to work on Ubuntu. I have anaconda2, ... installed. When I run python I get the following error:
from theano.tensor.signal import downsample
ImportError: cannot import name 'downsample'
I cannot find downsample in theano.tensor.signal. Sorry about the extra posts.
John Liu
@everett780 what version of theano do you have installed? The downsample module has been moved to theano.tensor.signal.pool. Dependencies are a pain to maintain; you might be better off reinstalling the latest versions.
Everett Palmer
Hello John, thanks for the help. My version of Theano is 0.9.0rc4 (March 13, 2017). I downloaded it from
The command 'locate downsample' returned nothing.
John Liu
@everett780 I suspect you have an older and incompatible version of Lasagne. Try upgrading with this command:
pip install --upgrade
Everett Palmer

Hello John,
I have upgraded theano and Lasagne.
Now if I execute 'locate downsample', it returns: /usr/local/lib/python2.7/dist-packages/theano/tensor/signal/
This is better but it seems odd that it is under python2.7 and not python3.5.

When I execute 'python examples/', I get an import error:
File "/home/epalmer/rllab-master/rllab/misc/", line 4, in <module>
import theano.tensor.nnet
ImportError: No module named 'theano'

I put some terminal output at this Gist.
Thanks for any help sorting this out,

Liyue Shen (Shirley)
Hello all!
This is my first time here, and I am not sure if this is the right place to ask the question about rllab codebase. (I am sorry if I did anything wrong)
So I want to ask for help with using the MuJoCo environment within the rllab codebase. I think the codebase is compatible with MuJoCo Pro 1.31, but when I install MuJoCo Pro 1.31, it fails with "ERROR: Could not open disk". I finally found that this is because "The license manager in MuJoCo Pro 1.40 and earlier does not work with newer Macs that have NVMe disks. This was fixed in MuJoCo 1.50." (from the MuJoCo community). I think this means that I cannot work with MuJoCo Pro 1.31 on newer Macs. But after installing MuJoCo 1.50, the rllab codebase does not seem compatible with MuJoCo 1.50.
So I wonder if there is any solution to make MuJoCo 1.50 work well with the rllab codebase?
Everett Palmer
@johnliu I have been unable to get rllab running on Ubuntu. Is it correct to use anaconda2 and then Python 3? This seems odd, but it is what the instructions say to do.
TensorFlow is getting more support for parallelization (like the A3C algorithm). The TensorFlow implementation of rllab is still in the sandbox. Does anyone know how reliable this sandbox TensorFlow code in rllab is, and whether it is flexible enough to implement something like A3C there?
Hey @everett780 , are you still stuck with installation? Since I've just gone through the process of installing it, I might be able to help?
Everett Palmer
Hi @rabenimmermehr, With some help from @johncliu I did finally get rllab to run. I am running Ubuntu 16.04.1 under Oracle VM VirtualBox on Dell Windows 7 and 10 machines.
This is what I did to get the rllab code running. I installed Anaconda3 instead of Anaconda2. I generally followed the manual install instructions, but in the file "environment.yml" I removed the line containing "mujoco_py". In the Ubuntu "System Settings" panel "Software & Updates", on the "Ubuntu Software" tab, I checked the box to enable "Source Code" to be downloadable. Thanks for your offer to help. Everett
Guofei Xiang

Hi, I'm trying to get rllab to work on my Ubuntu 16.04.2, with Anaconda2 installed. I followed the manual installation instructions. But every time I run the installation with "environment.yml", when it reaches
"Building wheels for collected packages: chainer"
it always fails with the error:
" Running bdist_wheel for chainer ... error
Complete output from command /home/figo/anaconda2/envs/rllab3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-nfrv0o8e/chainer/';f=getattr(tokenize, 'open', open)(__file__);'\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpntkbj9mmpip-wheel- --python-tag cp35:"

and then the installation fails with the error:
" ^
error: command 'gcc' failed with exit status 1

Failed building wheel for chainer
Running clean for chainer
Failed to build chainer"

And finally it prints:
" b"Compiling /tmp/pip-build-nfrv0o8e/chainer/cupy/core/core.pyx\n\nError compiling Cython file:\n------------------------------------------------------------\n...\n void* data\n int size\n int shape_and_strides[MAX_NDIM * 2]\n\n\ncdef class CArray(cupy.cuda.function.CPointer):\n ^\n------------------------------------------------------------\n\ncupy/core/carray.pxi:14:5: 'cupy' is not declared\n"


Command "/home/figo/anaconda2/envs/rllab3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-nfrv0o8e/chainer/';f=getattr(tokenize, 'open', open)(__file__);'\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-nejoplu3-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-nfrv0o8e/chainer/

CondaValueError: pip returned an error."

I've commented out the line "- mujoco_py".

I put some terminal output at this Gist.

@everett780 @rabenimmermehr @johncliu Could you help me? Thank you so much!
Everett Palmer
Hello @SJTUGuofei I don't think I can help you, but when I got it to work I did a few things differently. I used anaconda3 and python 3, and in the Ubuntu "System Settings" panel "Software & Updates", on the "Ubuntu Software" tab, I checked the box to enable "Source Code" to be downloadable. Until I did this I was getting 'gcc' errors.
Guofei Xiang
Thank you all the same and best wishes! @everett780

Hi, I have a question about rllab Mujoco environments. Do these differ from the Gym Mujoco environments? In other words, how are the following lines different in rllab?

  1. env = Walker2DEnv()
  2. env = GymEnv("Walker2d-v1")

Looking briefly at the code, the step reward seems to be different. Is one environment more difficult to learn than the other?

Ali Nehrani

Hello all,
I installed rllab in python 2.7, defined the PYTHONPATH, and successfully created the virtual env.
(rllab) bayes@b-13:~$
I can import rllab and some other modules like rllab.algos, but the problem is that, unlike what is said in the installation instructions, there is no "TRPO" in the algos module. In more detail:
In [5]: dir(rllab.algos)
As can be seen, there is no TRPO.

I have same problem with some other modules too.
Can anyone help me to solve the problem?

Ali Nehrani
Also I checked the folder "/trpo" in" /rllab/algos" and I found the "" file there!
Ali Nehrani

Let me add that during installation the following error appeared:

Error [Errno 2] No such file or directory while executing command git clone -q /tmp/pip-5dnToO-build
Cannot find command 'git'

Your comments on solving this error would also be helpful.

Ali Nehrani
I solved this error, and now the following error appears when trying to run the first program in rllab:
(rllab) bayes@bayes-13:~/Academic$ python
Traceback (most recent call last):
File "", line 1, in <module>
from rllab.algos.trpo import TRPO
ImportError: No module named rllab.algos.trpo
Sandeep R Venkatesh
(rllabenv) $ python
ERROR: Could not open disk Press Enter to exit ...
When running rllab on a Mac, MuJoCo (MjPro version 1.31) seems to have this issue. Although the original issue within MuJoCo is fixed in 1.50, I'm not sure how to move forward with my current setup of rllab. Any directions would be appreciated.

Hi folks, I'm fairly new to the RL thing and I'm trying to wrap my head around some stuff, so bear with me if my questions sound stupid. I have stumbled upon some stuff and I'm trying to implement it programmatically. For instance, given an environment like openai/gym, let's say for the sake of argument we pick the cart pole. First, is there any possibility I can get the goal state? Basically, I want the distance between the end effector of the cart pole and the goal. Second, how can I get the edge of the workspace? What I mean is that I'm trying to determine the distance between the goal and the edge of the workspace.

Again apologies for my stupid questions and thank you in advance for any possible help.

Haitao XU
Hi all. Can I ask a question here? I installed the cuda-toolkit and cudnn when I installed tensorflow-gpu, but rllab's theano cannot find the cudnn.h file, even when I set the LD_.._path to this file. Do I have to install cuda and cudnn from the official NVIDIA packages? Thanks!!
Chelsea Sidrane
@dementrock Can you explain to me what the intention behind the ParamLayer class is? I'm confused about what's happening and why.