PARL Versions

A high-performance distributed training framework for Reinforcement Learning

v2.2

1 year ago

New Features

  • Support GPU clusters for XPARL parallel training (a minimal sketch follows). For more details, please refer to the latest docs
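
A minimal sketch of what GPU-cluster usage might look like from the Python side, assuming a GPU cluster has already been started with xparl and listens at `localhost:8010`; the `n_gpu` argument used below is an assumption, so check the latest docs for the exact way an actor requests GPUs.

```python
import parl


# Assumption: n_gpu requests GPUs from the xparl GPU cluster for this actor;
# verify the exact argument name and semantics in the latest PARL docs.
@parl.remote_class(n_gpu=1)
class GPUActor(object):
    def device_count(self):
        # Any GPU-side work would go here; this just reports visible devices.
        import paddle
        return paddle.device.cuda.device_count()


if __name__ == '__main__':
    # Address of an xparl GPU cluster started beforehand (placeholder).
    parl.connect('localhost:8010')

    actor = GPUActor()          # instantiated on a GPU worker of the cluster
    print(actor.device_count())
```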

v2.1

1 year ago

Framework

  • Add agent.train()/eval() (see the sketch after this list)
  • Fix several bugs in DDQN
  • Add CompatWrapper (compatible with different versions of gym and the latest version of MuJoCo)
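
A minimal sketch of the new mode switches, assuming that `agent.train()` / `agent.eval()` forward to the underlying Paddle model's training/evaluation mode (as `paddle.nn.Layer.train()` / `.eval()` do) and that the base `parl.Algorithm` can simply wrap a bare model for this purpose; the network itself is arbitrary.

```python
import paddle
import parl


class MLPModel(parl.Model):
    """Tiny network with a dropout layer, so the train/eval switch matters."""

    def __init__(self):
        super().__init__()
        self.fc = paddle.nn.Linear(4, 2)
        self.dropout = paddle.nn.Dropout(0.5)

    def forward(self, obs):
        return self.fc(self.dropout(obs))


class DemoAgent(parl.Agent):
    """Thin agent wrapper, only used to demonstrate train()/eval()."""

    def predict(self, obs):
        return self.alg.model(obs)


if __name__ == '__main__':
    model = MLPModel()
    # Assumption: the base Algorithm class is enough to hold the model here.
    agent = DemoAgent(parl.Algorithm(model))

    agent.train()           # training mode, e.g. dropout active while learning
    agent.eval()            # evaluation mode, e.g. dropout disabled for rollouts
    print(model.training)   # expected False after eval(), assuming the switch reaches the model
```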

Parallel Training

  • Support xparl in notebooks
  • Add the XPARL_PYTHON environment variable

Example

  • Add Paddle examples
    • PPO
    • MADDPG
    • ES
    • CQL
    • IMPALA
    • Baseline
      • GridDispatching Competition
      • Halite Competition
  • Add PPO, MADDPG, ES, CQL, IQL, Decision Transformer, MAPPO, and MAML++ to the benchmarks

Tutorial

  • Add the dygraph + PARL 2.0 + Paddle 2.0 version of the tutorial code for the Bilibili course
  • Add dependency version constraints to the tutorials

v2.0.0

2 years ago

Framework

  • Support PaddlePaddle 2.0 (dynamic graph mode) by default (see the sketch after this list)
  • Add integration testing for Windows
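
To illustrate the switch, here is a minimal sketch of a model under the dynamic-graph backend: in PARL 2.0, `parl.Model` is built on `paddle.nn.Layer`, so networks are written and executed imperatively. The layer sizes below are arbitrary.

```python
import paddle
import parl


class QNetwork(parl.Model):
    """A Q-network written with Paddle 2.0 dynamic-graph layers.

    parl.Model is based on paddle.nn.Layer in PARL 2.0, so forward() runs
    eagerly and no static (fluid) program needs to be built.
    """

    def __init__(self, obs_dim=4, act_dim=2):
        super().__init__()
        self.fc1 = paddle.nn.Linear(obs_dim, 64)
        self.fc2 = paddle.nn.Linear(64, act_dim)

    def forward(self, obs):
        hidden = paddle.nn.functional.relu(self.fc1(obs))
        return self.fc2(hidden)


if __name__ == '__main__':
    model = QNetwork()
    obs = paddle.randn([1, 4])
    print(model(obs).shape)   # eager execution: [1, 2]
```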

Parallel Training

  • Refactor the heartbeat mechanism of the xparl module
  • Use the synchronous xparl API for some distributed algorithms

Documentation

  • Add Chinese documentation @readthedocs

Example

  • Paddle
    • Policy Gradient
    • DDPG
    • DQN / Double DQN / Dueling DQN
    • SAC
    • TD3
    • OAC
    • QMIX
    • A2C
    • AlphaZero
  • Fluid
    • QMIX

Application

  • Self-driving system in the CARLA simulator

v1.4

3 years ago

Framework

  • Support the latest dynamic graph mode APIs of PaddlePaddle
  • Support the VisualDL visualization tool
  • Optimize compatibility across different operating systems

Parallel Training

  • Add a monitoring page for the task output logs
  • Support direct access and modification of attributes of remote objects
  • Support asynchronous function calls in remote objects (see the sketch after this list)
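
A sketch of both xparl additions; the cluster address is a placeholder, and the assumption here is that asynchronous calls are enabled by decorating with `wait=False`, after which method calls return future objects whose results are fetched with `get()`.

```python
import parl


@parl.remote_class
class Counter(object):
    """Attributes of this remote object can be read and set directly."""

    def __init__(self):
        self.value = 0


# Assumption: wait=False turns method calls into asynchronous calls that
# return future objects instead of blocking on the result.
@parl.remote_class(wait=False)
class Worker(object):
    def simulate(self, steps):
        return steps * 2


if __name__ == '__main__':
    parl.connect('localhost:8010')   # placeholder cluster address

    counter = Counter()
    counter.value = 5                # modify a remote attribute directly
    print(counter.value)             # read it back from the remote object -> 5

    worker = Worker()
    future = worker.simulate(100)    # returns immediately with a future object
    print(future.get())              # block until the result is ready -> 200
```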

Example

  • Add the Prioritized DQN algorithm
  • Add an AlphaZero solution for the Kaggle Connect X competition
  • Add the champion models of both tracks of the NeurIPS 2020 Learning-to-Run-a-Power-Network challenge
  • Add demonstration code for the open course "World champion takes you to learn reinforcement learning from scratch"

v1.3

4 years ago

New Features

  • Add EvoKit, the first open-source industrial evolution strategy framework
  • Support multi-agent RL algorithms, including MADDPG
  • Support multi-GPU training and provide a DQN demonstration with multiple GPUs
  • Add SOTA algorithms for continuous control problems: TD3 and SAC
  • Add the champion model and training method of the NeurIPS 2019 reinforcement learning competition
  • Compatible with Windows

v1.2

4 years ago

Parallel Training

  1. Use a cluster to manage the computation resources for parallel training.
  2. Add a Web UI for monitoring the cluster.
  3. Support limiting the memory usage of each remote class (see the sketch after this list).
  4. Add a tutorial on how to use the cluster.
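
A sketch of how a remote class runs on such a cluster and how its memory can be capped; the master address is a placeholder, and the `max_memory` argument (assumed to be in MB) should be checked against the cluster documentation.

```python
import parl


# Assumption: max_memory limits the memory (in MB) this remote instance may
# use on its worker; the cluster terminates it if the limit is exceeded.
@parl.remote_class(max_memory=350)
class Simulator(object):
    def rollout(self, n_steps):
        return [0.0] * n_steps


if __name__ == '__main__':
    # Connect to the cluster master started beforehand (placeholder address);
    # the Web UI for monitoring runs alongside the master.
    parl.connect('localhost:8010')

    sim = Simulator()                 # instantiated on one of the cluster's workers
    print(len(sim.rollout(100)))
```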

Example

  1. Add the evolution strategies (ES) algorithm, using the PARL parallel module.
  2. Add A2C benchmark results on a range of Atari games.
  3. Add IMPALA benchmark results on a range of Atari games.

Tutorial

  1. Add the official documentation, deployed on Read the Docs.
  2. Add a tutorial describing how to build a custom algorithm.
  3. Add a tutorial describing how to use the cluster for parallel computation.

v1.1.1

4 years ago

Frameworks

  • Support the TensorBoard visualization tool.
  • Add save and restore APIs in parl.Agent.
  • Add exception tracebacks in the remote module.
  • Disentangle the basic classes (e.g., parl.Model) from the computation framework.

Examples

  • Refine the benchmark performance of the A2C example.
  • Simplify the QuickStart example.

Papers

  • Collect papers related to model-based reinforcement learning.

v1.1

5 years ago

Documentation

  • Add a Chinese version of the README on the homepage.

Framework

  • Support distributed training: add the parallelization module parl.remote.
  • Add functional APIs to dump and load parameters as numpy arrays: get_params and set_params are added to parl.Model, parl.Algorithm and parl.Agent.
  • Add the IMPALA and A3C algorithms to parl.algorithms.

Examples

  • IMPALA
  • A2C
  • GA3C

v1.0

5 years ago

Framework

  • Support Model, Algorithm and Agent abstractions.
  • Support wrappers for fluid.layers that make it easy to share parameters between layers.
  • Support the sync_params_to API in Model to synchronize parameters between a model and its target model directly.

Examples

  • QuickStart
  • DQN
  • DDPG
  • PPO
  • Winning solution of NeurIPS2018-AI-for-Prosthetics-Challenge