<Sameroom> Your Portal URL is https://sameroom.io/DnTuzTqM -- you can send the URL to someone on a different team to share this room. Note: you can connect more than two teams this way.
<Sameroom> I've connected 1 new room: #rl (chainer) on Slack. See map.
[not, chainer] @peisuke This channel is for English, so if you want to ask in Japanese, the chainer-jp Slack is a better place.
[peisuke, chainer] Thanks, I posted my question in Japanese. My question is: I want to pass two or more state variables to the act function, for example image and sensor data. Is that possible?
[Yasuhiro Fujita, chainer] You mean the act method of ChainerRL agents?
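One common workaround for multi-modal state, sketched here with plain NumPy rather than ChainerRL's actual API: pack the image and the sensor reading into a single observation array before handing it to the agent's act method, and have the model unpack it on the other side. The `pack_observation` helper and the shapes below are hypothetical.

```python
import numpy as np

def pack_observation(image, sensors):
    """Flatten an image and a sensor vector and concatenate them
    into one 1-D float32 observation array."""
    return np.concatenate([
        np.asarray(image, dtype=np.float32).ravel(),
        np.asarray(sensors, dtype=np.float32).ravel(),
    ])

image = np.zeros((8, 8))          # hypothetical 8x8 grayscale frame
sensors = np.array([0.5, -1.0])   # hypothetical 2-D sensor reading
obs = pack_observation(image, sensors)
print(obs.shape)  # (66,)
```

The model then slices the first 64 entries back into the image and the last 2 into the sensor vector; alternatively, a custom model can accept a tuple observation directly if the agent passes observations through unchanged.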
[Bernard Lee, chainer] Hello, just wondering if anyone here has experience using ChainerRL's DDQN agent?
[Yasuhiro Fujita, chainer] you mean DoubleDQN?
[Bernard Lee, chainer] Yes, the Double DQN.
[Bernard Lee, chainer] I'm using a CNN as my Q-function, and I keep hitting the same problem: the Q-values explode to very high values once experience replay starts, then slowly decrease back down.
[Yasuhiro Fujita, chainer] The DQN family of algorithms is not stable and needs careful tuning. At minimum, I recommend the same settings as DeepMind's Nature paper: scale input values into [0, 1], clip rewards to [-1, 1], and use the exact same hyperparameter settings.
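The two preprocessing steps in that advice can be sketched with NumPy alone; these helper names are hypothetical and would be applied to each frame and reward before they reach the agent.

```python
import numpy as np

def preprocess_frame(frame):
    """Scale uint8 pixel values from [0, 255] into [0, 1] as float32."""
    return np.asarray(frame, dtype=np.float32) / 255.0

def clip_reward(r):
    """Clip the environment reward into [-1, 1], as in the Nature DQN paper."""
    return float(np.clip(r, -1.0, 1.0))

frame = np.full((4, 4), 255, dtype=np.uint8)
print(preprocess_frame(frame).max())  # 1.0
print(clip_reward(3.5))               # 1.0
print(clip_reward(-0.2))              # -0.2
```

Keeping inputs and rewards in these narrow ranges bounds the scale of the TD targets, which is one reason runaway Q-values often shrink after this change.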