A high-performance Atari A3C agent in 180 lines of PyTorch
Sam Greydanus | October 2017 | MIT License
Results after training on 40M frames:
If you're working on OpenAI's Breakout-v4 environment:

```
python baby-a3c.py --env Breakout-v4                 # train
python baby-a3c.py --env Breakout-v4 --test True     # evaluate a trained model
python baby-a3c.py --env Breakout-v4 --render True   # watch it play
```
Make things as simple as possible, but not simpler.
Frustrated by the number of deep RL implementations that are clunky and opaque? In this repo, I've stripped a high-performance A3C model down to its bare essentials. Everything you'll need is contained in 180 lines...
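At its core, A3C optimizes two objectives at once: a policy loss that pushes up the log-probability of actions with positive advantage, and a value loss that regresses the critic toward the discounted return. A minimal sketch of that loss for a single rollout (function and variable names here are illustrative, not the repo's own, and this omits the entropy bonus and GAE refinements a full implementation would use):

```python
import torch
import torch.nn.functional as F

def a3c_loss(logps, values, rewards, gamma=0.99):
    """Actor-critic loss for one T-step rollout.
    logps:   log-probs of the actions taken, shape (T,)
    values:  critic estimates V(s_t), shape (T,)
    rewards: list of T scalar rewards
    """
    R, returns = 0.0, []
    for r in reversed(rewards):            # discounted returns, computed backwards
        R = r + gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns)
    advantages = returns - values.detach()            # advantage = return - baseline
    policy_loss = -(logps * advantages).sum()         # favor high-advantage actions
    value_loss = F.mse_loss(values, returns, reduction='sum')
    return policy_loss + 0.5 * value_loss
```

In the asynchronous setting, each worker computes this loss on its own rollout and applies the gradients to a shared model.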
| | Breakout-v4 | Pong-v4 | SpaceInvaders-v4 |
|---|---|---|---|
| \*Mean episode rewards @ 40M frames | 140 ± 20 | 18.2 ± 1 | 470 ± 30 |
| \*Mean episode rewards @ 80M frames | 190 ± 20 | 17.9 ± 1 | 550 ± 30 |
*same (default) hyperparameters across all environments
```python
import torch.nn as nn

class NNPolicy(nn.Module):  # the actor-critic network
    def __init__(self, channels, memsize, num_actions):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, 32, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
        self.conv4 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
        self.gru = nn.GRUCell(32 * 5 * 5, memsize)  # *see below
        self.critic_linear = nn.Linear(memsize, 1)            # value head
        self.actor_linear = nn.Linear(memsize, num_actions)   # policy head
```
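The `32 * 5 * 5` GRU input size comes from the conv stack's output shape: with kernel 3, stride 2, and padding 1, each conv roughly halves the spatial resolution, so an 80×80 preprocessed frame shrinks to 5×5 after four layers. A quick sketch to verify (the 80×80 input size is an assumption about the preprocessing):

```python
import torch
import torch.nn as nn

x = torch.zeros(1, 1, 80, 80)  # one 80x80 grayscale frame
convs = nn.Sequential(*[nn.Conv2d(c_in, 32, 3, stride=2, padding=1)
                        for c_in in (1, 32, 32, 32)])
print(convs(x).shape)  # torch.Size([1, 32, 5, 5]) -> flattens to 32 * 5 * 5 = 800
```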
\*We use a GRU cell because it has fewer parameters, keeps one memory vector instead of two, and attains the same performance as an LSTM cell here.
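The parameter savings are easy to check: a GRU cell has three gates to an LSTM cell's four, so at the same sizes it carries exactly 3/4 as many parameters. A small sketch (the hidden size 256 is illustrative):

```python
import torch.nn as nn

gru = nn.GRUCell(32 * 5 * 5, 256)
lstm = nn.LSTMCell(32 * 5 * 5, 256)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(gru), count(lstm))  # 812544 1083392 -- the GRU has 3/4 the params
```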
(Use `pip freeze` to check your environment settings.)