
DQN-Atari

Deep Q-network implementation for Pong-v0. The implementation follows the papers Playing Atari with Deep Reinforcement Learning (Mnih et al., 2013) and Human-level control through deep reinforcement learning (Mnih et al., 2015).

Results

Video of Gameplay - DQN Nature Paper

(gameplay video embedded in the original repository)

Reward per Episode

(reward-per-episode training plot embedded in the original repository)

Summary of Implementation

DQN Nature Architecture Implementation

  • Input: 84 × 84 × 4 image (the last 4 frames of the observation history)
  • Conv Layer 1: 32 8 × 8 filters with stride 4
  • Conv Layer 2: 64 4 × 4 filters with stride 2
  • Conv Layer 3: 64 3 × 3 filters with stride 1
  • Fully Connected 1: 256 rectifier units
  • Output: fully connected linear layer with a single output for each valid action
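
A minimal PyTorch sketch of this architecture, for reference; the class name, the pixel scaling, and the layer grouping are illustrative assumptions, not code from the repository:

```python
import torch
import torch.nn as nn

class DQNNature(nn.Module):
    """Nature-architecture Q-network for 84 x 84 x 4 stacked-frame input."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),   # 84x84 -> 20x20
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),  # 20x20 -> 9x9
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),  # 9x9 -> 7x7
            nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256),  # 256 rectifier units, as listed above
            nn.ReLU(),
            nn.Linear(256, n_actions),   # one linear output per valid action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x / 255.0))  # scale uint8 pixels to [0, 1]
```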

DQN Neurips Architecture Implementation

  • Input: 84 × 84 × 4 image (the last 4 frames of the observation history)
  • Conv Layer 1: 16 8 × 8 filters with stride 4
  • Conv Layer 2: 32 4 × 4 filters with stride 2
  • Fully Connected 1: 256 rectifier units
  • Output: fully connected linear layer with a single output for each valid action
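
The same kind of sketch for the shallower NeurIPS variant (again illustrative, not the repository's code):

```python
import torch
import torch.nn as nn

class DQNNeurips(nn.Module):
    """NeurIPS-architecture Q-network: two conv layers instead of three."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4),   # 84x84 -> 20x20
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),  # 20x20 -> 9x9
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256),  # 256 rectifier units
            nn.ReLU(),
            nn.Linear(256, n_actions),   # one linear output per valid action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x / 255.0)  # scale uint8 pixels to [0, 1]
```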

Other Params

  • Optimizer: RMSProp
  • Batch size: 32
  • ε-greedy exploration: ε = 0.1
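
As a rough sketch of how these parameters fit together, reusing the DQNNature class sketched above; the learning rate and action count here are assumptions, not values confirmed by the repository:

```python
import random
import torch

n_actions = 6                      # Pong-v0 exposes 6 discrete actions
q_network = DQNNature(n_actions)   # the Nature sketch above
optimizer = torch.optim.RMSprop(q_network.parameters(), lr=2.5e-4)  # lr assumed

def select_action(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy: explore with probability epsilon, else act greedily."""
    if random.random() < epsilon:
        return random.randrange(n_actions)         # uniform random action
    with torch.no_grad():
        q_values = q_network(state.unsqueeze(0))   # add batch dimension
        return int(q_values.argmax(dim=1).item())  # greedy action
```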

How to run

Create a new environment

Example:

conda create -n dqn_pong

Install Dependencies

pip install -r requirements.txt

To use gym.wrappers.Monitor to record the last episode, install ffmpeg:

sudo apt-get install ffmpeg
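
For reference, this is roughly how Monitor is wrapped around an environment in older gym versions; the exact arguments used by train_atari.py are not shown here and may differ:

```python
import gym
from gym.wrappers import Monitor

env = gym.make("Pong-v0")
# Record every 50th episode into ./video/, matching the cadence noted
# under "View Progress" below.
env = Monitor(env, "./video/",
              video_callable=lambda episode_id: episode_id % 50 == 0,
              force=True)  # overwrite any existing recordings in the folder
```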

Run Training from Scratch

python train_atari.py

Use a trained agent

python train_atari.py --load-checkpoint-file results/checkpoint_dqn_nature.pth
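
Outside the training script, loading such a checkpoint typically looks like the sketch below; whether the .pth file stores a plain state_dict or a larger dictionary is an assumption:

```python
import torch

q_network = DQNNature(n_actions=6)  # the Nature sketch above
state = torch.load("results/checkpoint_dqn_nature.pth", map_location="cpu")
q_network.load_state_dict(state)    # assumes the file holds a plain state_dict
q_network.eval()                    # inference mode for evaluation/gameplay
```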

View Progress

A video is recorded every 50 episodes; see the recordings in the /video/ folder.
