DQN-Atari-Agents: Modularized & parallel PyTorch implementation of several DQN agents, including DDQN, Dueling DQN, Noisy DQN, C51, Rainbow, and DRQN
Modularized training of different DQN algorithms.
This repository contains several add-ons to the base DQN algorithm. All versions can be trained from a single script and include the option to train from raw pixel or RAM data. Multiprocessing was recently added to run several environments in parallel for faster training.
The following DQN versions are included:
Both can be enhanced with a Noisy layer, PER (Prioritized Experience Replay), and Multistep Targets, and can be trained in a Categorical version (C51). Combining all of these add-ons yields the state-of-the-art value-based algorithm called Rainbow.
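The Noisy add-on replaces standard linear layers with layers whose weights carry learnable noise scales, so exploration is driven by parameter noise instead of epsilon-greedy. Below is a minimal factorized-Gaussian sketch in the style of Fortunato et al. (2017); it is an illustration under modern PyTorch, not this repository's exact implementation:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer with factorized Gaussian noise (Fortunato et al., 2017)."""

    def __init__(self, in_features, out_features, sigma_init=0.5):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.sigma_init = sigma_init
        # learnable means and noise scales for weights and biases
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        # non-learnable noise buffers, resampled via reset_noise()
        self.register_buffer("weight_eps", torch.empty(out_features, in_features))
        self.register_buffer("bias_eps", torch.empty(out_features))
        self.reset_parameters()
        self.reset_noise()

    def reset_parameters(self):
        bound = 1 / math.sqrt(self.in_features)
        self.weight_mu.data.uniform_(-bound, bound)
        self.bias_mu.data.uniform_(-bound, bound)
        self.weight_sigma.data.fill_(self.sigma_init / math.sqrt(self.in_features))
        self.bias_sigma.data.fill_(self.sigma_init / math.sqrt(self.in_features))

    @staticmethod
    def _scaled_noise(size):
        # f(x) = sign(x) * sqrt(|x|), applied to standard Gaussian noise
        x = torch.randn(size)
        return x.sign() * x.abs().sqrt()

    def reset_noise(self):
        eps_in = self._scaled_noise(self.in_features)
        eps_out = self._scaled_noise(self.out_features)
        # factorized noise: outer product instead of a full noise matrix
        self.weight_eps.copy_(torch.outer(eps_out, eps_in))
        self.bias_eps.copy_(eps_out)

    def forward(self, x):
        if self.training:
            weight = self.weight_mu + self.weight_sigma * self.weight_eps
            bias = self.bias_mu + self.bias_sigma * self.bias_eps
        else:
            # deterministic (mean) weights at evaluation time
            weight, bias = self.weight_mu, self.bias_mu
        return F.linear(x, weight, bias)
```

During training, `reset_noise()` is typically called once per forward pass so each batch sees fresh noise.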
Trained and tested on:
- Python 3.6
- PyTorch 1.4.0
- NumPy 1.15.2
- gym 0.10.11
To train the base DDQN, simply run `python run_atari_dqn.py`.
To train and modify your own Atari agent, the following inputs are optional:
example: `python run_atari_dqn.py -env BreakoutNoFrameskip-v4 -agent dueling -u 1 -eps_frames 100000 -seed 42 -info Breakout_run1`
- dqn
- dqn+per
- noisy_dqn
- noisy_dqn+per
- dueling
- dueling+per
- noisy_dueling
- noisy_dueling+per
- c51
- c51+per
- noisy_c51
- noisy_c51+per
- duelingc51
- duelingc51+per
- noisy_duelingc51
- noisy_duelingc51+per
- rainbow
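The Multistep add-on changes how the learning target is computed: instead of bootstrapping after one step, rewards are accumulated over n steps before the bootstrap term is added. A minimal illustration of the discounted n-step return (not the repository's code, and the bootstrapped Q-value term is omitted):

```python
def n_step_return(rewards, gamma=0.99):
    """Discounted sum of an n-step reward sequence: R = sum_k gamma^k * r_k.

    In a full multistep target, the bootstrap term gamma^n * Q(s_{t+n}, a*)
    would be added on top of this value.
    """
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# example: three-step return for rewards [1, 0, 1]
# 1 + 0.99**2 ≈ 1.9801
print(n_step_return([1, 0, 1]))
```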
Just run `tensorboard --logdir=runs/`
Hyperparameters:
Since training the algorithms on Atari takes a lot of time, I added a quick convergence proof on the CartPole-v0 environment. You can clearly see that Rainbow outperforms the other two methods, Dueling DQN and DDQN.
To reproduce the results, the following hyperparameters were used:
It is interesting to see that the add-ons have a negative impact on the very simple CartPole environment. Still, the Dueling DDQN version clearly performs better than the standard DDQN version.
To reduce wall-clock time during training, parallel environments are implemented. The following diagrams show the speed improvement for the two environments CartPole-v0 and LunarLander-v2, tested with 1, 2, 4, 6, 8, 10, and 16 workers. Each worker count was tested over 3 seeds.
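Parallel environments are commonly run as worker processes that step their own environment copy and communicate over pipes. Below is a minimal, self-contained sketch of that pattern with a toy stand-in environment; `ToyEnv`, `worker`, and `ParallelEnvs` are illustrative names, not the repository's actual classes:

```python
import multiprocessing as mp

class ToyEnv:
    """Hypothetical stand-in for a gym environment, for illustration only."""

    def __init__(self, seed):
        self.state = seed

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action
        done = self.state >= 5
        return self.state, 1.0, done, {}

def worker(conn, seed):
    """Runs in a child process: owns one env and serves commands from a pipe."""
    env = ToyEnv(seed)
    env.reset()
    while True:
        cmd, data = conn.recv()
        if cmd == "step":
            conn.send(env.step(data))
        elif cmd == "reset":
            conn.send(env.reset())
        elif cmd == "close":
            conn.close()
            break

class ParallelEnvs:
    """Steps several environments concurrently, one process per environment."""

    def __init__(self, n_workers):
        self.conns, self.procs = [], []
        for i in range(n_workers):
            parent, child = mp.Pipe()
            p = mp.Process(target=worker, args=(child, i), daemon=True)
            p.start()
            self.conns.append(parent)
            self.procs.append(p)

    def step(self, actions):
        # send all actions first, then collect: workers step in parallel
        for conn, a in zip(self.conns, actions):
            conn.send(("step", a))
        return [conn.recv() for conn in self.conns]

    def close(self):
        for conn in self.conns:
            conn.send(("close", None))
        for p in self.procs:
            p.join()
```

Sending all actions before collecting any results is what makes the environment steps overlap in time rather than run sequentially.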
Convergence behavior for each worker count can be found here: CartPole-v0 and LunarLander
I am open to feedback, bug reports, improvements, or anything else. Just leave me a message or contact me.
Feel free to use this code for your own projects or research. For citation:
@misc{DQN-Atari-Agents,
author = {Dittert, Sebastian},
title = {DQN-Atari-Agents: Modularized PyTorch implementation of several DQN Agents, i.a. DDQN, Dueling DQN, Noisy DQN, C51, Rainbow and DRQN},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/BY571/DQN-Atari-Agents}},
}