Structural implementation of key RL algorithms
This repository contains reinforcement learning algorithms that we use for research activities at Medipixel. The source code will be updated frequently. We warmly welcome external contributors! :)
Demo clips: BC agent on LunarLanderContinuous-v2 | RainbowIQN agent on PongNoFrameskip-v4 | SAC agent on Reacher-v2
Thanks goes to these wonderful people (emoji key):
Jinwoo Park (Curt) 💻 |
Kyunghwan Kim 💻 |
darthegg 💻 |
Mincheol Kim 💻 |
Minseop Kim 💻 |
Leejin Jung 💻 |
Chris Yoon 💻 |
Jiseong Han 💻 |
Sehyun Hwang 🚧 |
eunjin 💻 |
This project follows the all-contributors specification.
We have tested each algorithm on some of the following environments.
⚠️ Please note that this section won't be frequently updated.
RainbowIQN learns the game incredibly fast! It achieves the perfect score (21) within 100 episodes! The idea of RainbowIQN is roughly based on the work of W. Dabney et al.
See W&B Log for more details. (The performance is measured on the commit 4248057)
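For intuition, the core of IQN-style agents is quantile regression over sampled quantile fractions. The PyTorch sketch below of the quantile Huber loss from Dabney et al. is illustrative only; it is not this repository's implementation, and the tensor shapes are assumptions.

```python
# Illustrative sketch of the IQN quantile Huber loss (not the repo's code).
import torch

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    # pred:   (B, N) quantile values predicted for the sampled fractions
    # target: (B, M) target quantile values (treated as constants)
    # taus:   (B, N) sampled quantile fractions in (0, 1)
    td = target.unsqueeze(2) - pred.unsqueeze(1)         # (B, M, N) TD errors
    abs_td = td.abs()
    huber = torch.where(abs_td <= kappa,
                        0.5 * td.pow(2),
                        kappa * (abs_td - 0.5 * kappa))  # elementwise Huber
    # Asymmetric weight |tau - 1{td < 0}|: over- and under-estimation are
    # penalized differently depending on the quantile fraction.
    weight = (taus.unsqueeze(1) - (td.detach() < 0).float()).abs()
    loss = (weight * huber / kappa).mean(dim=1).sum(dim=1)  # per-sample loss
    return loss.mean()
```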
The performance and learning speed of RainbowIQN with ResNet were similar to those of RainbowIQN. We also confirmed that R2D1 (w/ Dueling, PER) converges well in the Pong environment, though not as fast as RainbowIQN (in terms of update steps).
Although we were only able to test Ape-X DQN (w/ Dueling) with 4 workers due to limited computing power, we observed a significant speed-up in carrying out update steps (with batch size 512). Ape-X DQN learns the Pong game in about 2 hours, compared to 4 hours for serial Dueling DQN.
See W&B Log for more details. (The performance is measured on the commit 9e897ad)
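To give a feel for the Ape-X pattern, here is a toy multiprocessing sketch, not the repository's implementation: several actor processes generate transitions and push them to a central learner through a shared queue. All names here are hypothetical.

```python
# Toy sketch of the Ape-X actor/learner split (hypothetical; not the repo's code).
import multiprocessing as mp
import random

def actor(worker_id, queue):
    for step in range(5):                   # a few fake transitions
        queue.put((worker_id, step, random.random()))
    queue.put(None)                         # sentinel: this actor is done

def learner(queue, num_workers):
    done = 0
    while done < num_workers:
        item = queue.get()
        if item is None:
            done += 1
        # in real Ape-X: store in a prioritized replay buffer and
        # update the network with large batches (e.g. 512)

if __name__ == "__main__":
    num_workers = 4
    q = mp.Queue()
    actors = [mp.Process(target=actor, args=(i, q)) for i in range(num_workers)]
    for p in actors:
        p.start()
    learner(q, num_workers)
    for p in actors:
        p.join()
```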
We used these environments just for quick verification of each algorithm, so some of the experiments may not show the best possible performance.
See W&B log for more details. (The performance is measured on the commit 9e897ad)
See W&B log for more details. (The performance is measured on the commit 82fae77)
See W&B log for more details. (The performance is measured on the commit 9e897ad)
See W&B log for more details. (The performance is measured on the commit 9e897ad)
See W&B log for more details. (The performance is measured on the commit 9e897ad)
See W&B log for more details. (The performance is measured on the commit 9e897ad)
We reproduced the performance of DDPG, TD3, and SAC on Reacher-v2 (Mujoco). They reach scores of around -3.5 to -4.5.
$ conda create -n rl_algorithms python=3.7.9
$ conda activate rl_algorithms
If you want to use Mujoco environments (e.g. Reacher-v2), you need to acquire a Mujoco license.
First, clone the repository.
git clone https://github.com/medipixel/rl_algorithms.git
cd rl_algorithms
Install the packages required to execute the code; this includes running `python setup.py install`. Just type:
make dep
If you want to modify the code, you should also configure the formatting and linting settings, which then run automatically whenever you commit code. Unlike the `make dep` command, this includes `python setup.py develop`. Just type:
make dev
After running `make dev`, you can validate the code with the following commands.
make format # for formatting
make test # for linting
You can train or test an algorithm on `env_name` if `configs/env_name/algorithm.yaml` exists (the YAML file contains the hyperparameters):
python run_env_name.py --cfg-path <config-path>
e.g. running Soft Actor-Critic on LunarLanderContinuous-v2:
python run_lunarlander_continuous_v2.py --cfg-path ./configs/lunarlander_continuous_v2/sac.yaml <other-options>
e.g. running a custom agent, if you have written your own config such as configs/env_name/ddpg-custom.yaml:
python run_env_name.py --cfg-path ./configs/lunarlander_continuous_v2/ddpg-custom.yaml
You will see the agent run with the hyperparameters and model settings you configured.
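If you want to peek at what a config defines before running, you can parse it yourself. The snippet below is a hedged sketch using PyYAML (it assumes `pyyaml` is installed); the actual keys depend entirely on the chosen agent.

```python
# Hedged sketch: inspect a config file with PyYAML (keys depend on the agent).
import yaml

with open("./configs/lunarlander_continuous_v2/sac.yaml") as f:
    cfg = yaml.safe_load(f)  # nested dicts/lists of hyperparameters

print(cfg)
```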
In addition, there are various command-line options for running the algorithms; an example combining several of them is shown after the list below. To check the options of a run file, type:
python <run-file> -h
- `--test`: start the agent in test mode (no training).
- `--off-render`: turn off rendering.
- `--log`: turn on logging to W&B.
- `--seed <int>`: set the random seed.
- `--save-period <int>`: set how often the model parameters are saved.
- `--max-episode-steps <int>`: set the maximum number of steps per episode.
- `--episode-num <int>`: set the number of episodes to run.
- `--render-after <int>`: start rendering after the given number of episodes.
- `--load-from <save-file-path>`: load saved models and optimizers before running.
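For example, the following command (with a placeholder save-file path) combines several of these options to watch a trained SAC agent in test mode:
python run_lunarlander_continuous_v2.py --cfg-path ./configs/lunarlander_continuous_v2/sac.yaml --load-from <save-file-path> --test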
You can visualize the feature maps that a trained agent extracts, using Grad-CAM (Gradient-weighted Class Activation Mapping) and saliency maps.
Grad-CAM combines feature maps using the gradient signal and produces a coarse localization map of the important regions in the image. You can use it by adding a Grad-CAM config and the `--grad-cam` flag when you run. For example:
python run_env_name.py --cfg-path <config-path> --test --grad-cam
The results will be rendered during the run.
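For intuition, a minimal Grad-CAM computation looks roughly like the PyTorch sketch below. It is an assumption-laden illustration (hook-based, for any CNN), not the repository's implementation.

```python
# Minimal Grad-CAM sketch (illustrative; not the repo's implementation).
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, x, target_index):
    feats, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))
    out = model(x)                    # x: (1, C, H, W); out: (1, num_outputs)
    out[0, target_index].backward()   # gradient of the chosen output
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted feature maps
    return cam / (cam.max() + 1e-8)   # coarse, normalized localization map
```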
You can also use saliency maps in a similar way to Grad-CAM, just by adding the `--saliency-map` flag. Saliency maps need trained weights, provided via the `--load-from` flag.
python run_env_name.py --cfg-path <config-path> --load-from <save-file-path> --test --saliency-map
The saliency maps will be stored in `data/saliency_map`.
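Under the hood, a vanilla saliency map is just the gradient of a chosen output with respect to the input pixels. A minimal hedged sketch (again, not the repository's implementation):

```python
# Hedged saliency-map sketch (illustrative; not the repo's implementation).
import torch

def saliency_map(model, x, target_index):
    x = x.clone().requires_grad_(True)   # x: (1, C, H, W) input observation
    out = model(x)                       # e.g. per-action values for a DQN
    out[0, target_index].backward()      # gradient of the chosen output
    return x.grad.abs().amax(dim=1)      # max |grad| over channels -> (1, H, W)
```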
Both Grad-CAM and saliency maps can only be used with agents that have convolutional layers, such as DQN on the Pong environment. You can see the feature maps of all the configured convolutional layers.
We keep the documentation for policy distillation separately, in rl_algorithms/distillation/README.md.
We use W&B to log network parameters and other metrics. To enable logging, follow the steps below after installing the requirements:
- Create a wandb account
- Check your API key in your settings, and log in to wandb from your terminal:
$ wandb login API_KEY
- Initialize wandb:
$ wandb init
For more details, read W&B tutorial.
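For reference, a minimal W&B logging loop looks like the sketch below; the project and metric names are hypothetical, and the run scripts handle this for you when the --log flag is set.

```python
# Hedged sketch of basic W&B usage (project/metric names are hypothetical).
import wandb

wandb.init(project="rl_algorithms")        # run after `wandb login`
for episode in range(3):
    wandb.log({"score": float(episode)})   # log any scalar metrics
wandb.finish()
```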
Class diagram at #135.
⚠️ This won't be frequently updated.
To cite this repository in publications:
@misc{rl_algorithms,
  author = {Kim, Kyunghwan and Lee, Chaehyuk and Jeong, Euijin and Han, Jiseong and Kim, Minseop and Yoon, Chris and Kim, Mincheol and Park, Jinwoo},
  title = {Medipixel RL algorithms},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/medipixel/rl_algorithms}},
}