Has anybody had this issue?
Unable to create an Atari environment in the rllab3 environment (on macOS):
env = GymEnv('Pong-v0')
env = gym.make('Pong-v0')
Referenced from: /Users/james/anaconda/envs/rllab3/lib/python3.5/site-packages/atari_py/ale_interface/build/libale_c.so
Expected in: /usr/lib/libstdc++.6.dylib
pip install git+<line from environment.yml>
A lot of other packages were not installed because of atari-py; re-running line 354 from the error log without atari-py, i.e.
pip install <all packages without atari-py>
did the job for those packages.
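As a quick sanity check (just a sketch, assuming the install eventually completes), importing atari_py directly reproduces the same linking problem without going through rllab:

import atari_py            # this import fails if libale_c.so cannot link against libstdc++
import gym
env = gym.make('Pong-v0')  # should succeed once atari_py imports cleanly
print(env.action_space)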
plot=True does not work.
I have upgraded theano and Lasagne.
Now if I execute 'locate downsample', it returns:
/usr/local/lib/python2.7/dist-packages/theano/tensor/signal/downsample.py
This is better, but it seems odd that it is under python2.7 and not python3.5.
When I execute 'python examples/trpo_cartpole.py', I get an import error:
File "/home/epalmer/rllab-master/rllab/misc/special.py", line 4, in <module>
ImportError: No module named 'theano'
I put some terminal output at this Gist. https://gist.github.com/everett780/44c021d8ef20483875631efcefe429be
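One thing that may help narrow this down (a minimal sketch, assuming the rllab3 conda env is supposed to be active) is checking which interpreter and which Theano the script actually picks up:

import sys
print(sys.executable)                   # should point inside .../envs/rllab3/bin/, not the system Python
import theano
print(theano.__version__, theano.__file__)
from theano.tensor.signal import pool   # recent Theano replaced 'downsample' with 'pool'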
Thanks for any help sorting this out,
Hi, I'm trying to get rllab to work on Ubuntu 16.04.2 with Anaconda2 installed. I am following the manual installation instructions, but every time I run the "environment.yml" file, when it reaches
"Building wheels for collected packages: chainer"
it fails with the error:
" Running setup.py bdist_wheel for chainer ... error
Complete output from command /home/figo/anaconda2/envs/rllab3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-nfrv0o8e/chainer/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpntkbj9mmpip-wheel- --python-tag cp35:"
and then the installation fails with the error:
error: command 'gcc' failed with exit status 1
Failed building wheel for chainer
Running setup.py clean for chainer
Failed to build chainer"
And finally it ends with:
" b"Compiling /tmp/pip-build-nfrv0o8e/chainer/cupy/core/core.pyx\n\nError compiling Cython file:\n------------------------------------------------------------\n...\n void data\n int size\n int shape_and_strides[MAX_NDIM 2]\n\n\ncdef class CArray(cupy.cuda.function.CPointer):\n ^\n------------------------------------------------------------\n\ncupy/core/carray.pxi:14:5: 'cupy' is not declared\n"
Command "/home/figo/anaconda2/envs/rllab3/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-build-nfrv0o8e/chainer/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-nejoplu3-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-nfrv0o8e/chainer/
CondaValueError: pip returned an error."
I've commented out the line "- mujoco_py".
I put some terminal output at this Gist.
Hi, I have a question about the rllab Mujoco environments. Do these differ from the Gym Mujoco environments? In other words, how does constructing a task through rllab's own environment differ from constructing it through the Gym wrapper?
Looking briefly at the code, the step reward seems to be different. Is one environment more difficult to learn than the other?
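For concreteness, here is an illustrative pair for the Swimmer task (a sketch only; the import paths follow the rllab examples, and the choice of task is just an example):

from rllab.envs.normalized_env import normalize
from rllab.envs.mujoco.swimmer_env import SwimmerEnv  # rllab's own Mujoco implementation
from rllab.envs.gym_env import GymEnv                 # rllab's wrapper around the Gym version

rllab_env = normalize(SwimmerEnv())         # reward/termination defined in rllab's code
gym_env = normalize(GymEnv("Swimmer-v1"))   # reward/termination defined by Gym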
I installed rllab with Python 2.7, defined the PYTHONPATH, and successfully created the virtual env.
I can import rllab and some other modules like rllab.algos, but the problem is that, unlike what is said in the installation instructions, there is no "TRPO" in the algos module. In more detail:
In : dir(rllab.algos)
As can be seen, there is no TRPO.
I have the same problem with some other modules too.
Can anyone help me to solve the problem?
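For what it's worth, TRPO normally won't show up in dir(rllab.algos), because Python does not import a package's submodules automatically; the rllab examples import it from its own module. A minimal check, assuming the package itself installed correctly:

from rllab.algos.trpo import TRPO   # TRPO lives in the rllab.algos.trpo submodule
print(TRPO)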
Let me add that during installation the following error appeared:
Error [Errno 2] No such file or directory while executing command git clone -q https://github.com/openai/gym.git /tmp/pip-5dnToO-build
Cannot find command 'git'
Your comments on solving this error would also be helpful.
(rllabenv) $ python trpo_swimmer.py
ERROR: Could not open disk
Press Enter to exit ...
Hi folks, I'm fairly new to RL and I'm trying to wrap my head around some things, so bear with me if my questions sound stupid. I have stumbled upon some ideas that I'm trying to implement programmatically. For instance, given an environment like the ones in openai/gym, let's say for the sake of argument we pick the cart pole. First, is there any possibility I can get the goal state? Basically I want the distance between the end effector of the cart pole and the goal. Second, how can I get the edge of the workspace? What I mean is that I'm trying to determine the distance between the goal and the edge of the workspace.
Again, apologies for my stupid questions and thank you in advance for any help.
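As a rough sketch for CartPole (note it is not a goal-based environment, so there is no explicit goal position to measure against, but the cart's position and the edge of the track are accessible through the Gym API):

import gym

env = gym.make('CartPole-v0')
obs = env.reset()
cart_position = obs[0]                   # observation = [cart position, cart velocity, pole angle, pole angular velocity]
x_threshold = env.unwrapped.x_threshold  # episode terminates once |cart position| exceeds this limit (the "edge")
distance_to_edge = x_threshold - abs(cart_position)
print(cart_position, x_threshold, distance_to_edge)

Environments with an explicit goal and end effector (e.g. the Reacher task) usually expose the goal through the observation itself, but the details vary per environment.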