Stable Baselines3 Contrib Versions

Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code

v2.3.0

1 month ago

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 2.3.0
  • The default learning_starts parameter of QRDQN has been changed to be consistent with the other off-policy algorithms
# SB3 < 2.3.0 default hyperparameters (50_000 corresponded to the Atari default hyperparameters)
# model = QRDQN("MlpPolicy", env, learning_starts=50_000)
# SB3 >= 2.3.0:
model = QRDQN("MlpPolicy", env, learning_starts=100)

New Features:

  • Added rollout_buffer_class and rollout_buffer_kwargs arguments to MaskablePPO (example below)
  • Log the success rate rollout/success_rate when available for on-policy algorithms
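
A minimal sketch of the new arguments (the toy InvalidActionEnvDiscrete env and the empty rollout_buffer_kwargs are only for illustration; MaskableRolloutBuffer is already the default buffer class):

from sb3_contrib import MaskablePPO
from sb3_contrib.common.envs import InvalidActionEnvDiscrete
from sb3_contrib.common.maskable.buffers import MaskableRolloutBuffer

env = InvalidActionEnvDiscrete(dim=20, n_invalid_actions=10)
# Pass the buffer class explicitly; extra constructor kwargs can be forwarded
# through rollout_buffer_kwargs (left empty here)
model = MaskablePPO(
    "MlpPolicy",
    env,
    rollout_buffer_class=MaskableRolloutBuffer,
    rollout_buffer_kwargs={},
    verbose=1,
)
model.learn(5_000)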

Others:

  • Fixed train_freq type annotation for tqc and qrdqn (@Armandpl)
  • Fixed sb3_contrib/common/maskable/*.py type annotations
  • Fixed sb3_contrib/ppo_mask/ppo_mask.py type annotations
  • Fixed sb3_contrib/common/vec_env/async_eval.py type annotations

Documentation:

  • Add some additional notes about MaskablePPO (evaluation and multi-process) (@icheered)

Full Changelog: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/compare/v2.2.1...v2.3.0

v2.2.1

5 months ago

SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 2.2.1
  • Switched to ruff for sorting imports (isort is no longer needed); a minimum version of black and ruff is now required
  • Dropped x is False in favor of not x, which means that callbacks that wrongly returned None (instead of a boolean) will cause the training to stop (@iwishiwasaneagle)
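
A minimal sketch of a compliant custom callback (the class name is hypothetical); the point is that _on_step() must return an actual boolean:

from stable_baselines3.common.callbacks import BaseCallback

class KeepTrainingCallback(BaseCallback):
    def _on_step(self) -> bool:
        # custom logic goes here; returning None instead of a boolean
        # is now treated as False and stops training
        return True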

New Features:

  • Added set_options for AsyncEval
  • Added rollout_buffer_class and rollout_buffer_kwargs arguments to TRPO

Others:

  • Fixed ActorCriticPolicy.extract_features() signature by adding an optional features_extractor argument
  • Updated dependencies (newer Shimmy/Sphinx versions are now accepted; sphinx_autodoc_typehints was removed)

v2.1.0

8 months ago

SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx

Breaking Changes:

  • Removed Python 3.7 support
  • SB3 now requires PyTorch >= 1.13
  • Upgraded to Stable-Baselines3 >= 2.1.0

New Features:

  • Added Python 3.11 support

Bug Fixes:

  • Fixed MaskablePPO ignoring stats_window_size argument

Full Changelog: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/compare/v2.0.0...v2.1.0

v2.0.0

10 months ago

Warning Stable-Baselines3 (SB3) v2.0 will be the last version supporting Python 3.7 (end of life in June 2023). We highly recommend that you upgrade to Python >= 3.8.

SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx

To upgrade:

pip install stable_baselines3 sb3_contrib rl_zoo3 --upgrade

or simply (RL Zoo depends on SB3 and SB3 Contrib):

pip install rl_zoo3 --upgrade

Breaking Changes:

  • Switched to Gymnasium as the primary backend; Gym 0.21 and 0.26 are still supported via the shimmy package (example below) (@carlosluis, @arjun-kg, @tlpss)
  • Upgraded to Stable-Baselines3 >= 2.0.0
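
A minimal usage sketch with a Gymnasium env (env id and timesteps are illustrative):

import gymnasium as gym
from sb3_contrib import TRPO

# Gymnasium envs are used directly; old Gym 0.21/0.26 envs go through the shimmy package
env = gym.make("Pendulum-v1")
model = TRPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=5_000)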

Bug Fixes:

  • Fixed QRDQN update interval for multi envs

Others:

  • Fixed sb3_contrib/tqc/*.py type hints
  • Fixed sb3_contrib/trpo/*.py type hints
  • Fixed sb3_contrib/common/envs/invalid_actions_env.py type hints

Full Changelog: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/compare/v1.8.0...v2.0.0

v1.8.0

1 year ago

Warning Stable-Baselines3 (SB3) v1.8.0 will be the last one to use Gym as a backend. Starting with v2.0.0, Gymnasium will be the default backend (though SB3 will have compatibility layers for Gym envs). You can find a migration guide here. If you want to try the SB3 v2.0 alpha version, you can take a look at PR #1327.

RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo

To upgrade:

pip install stable_baselines3 sb3_contrib rl_zoo3 --upgrade

or simply (RL Zoo depends on SB3 and SB3 Contrib):

pip install rl_zoo3 --upgrade

Breaking Changes:

  • Removed shared layers in mlp_extractor (@AlexPasqua)
  • Upgraded to Stable-Baselines3 >= 1.8.0

New Features:

  • Added stats_window_size argument to control smoothing in rollout logging (@jonasreiher)
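
A minimal sketch, shown here on QRDQN with illustrative values; stats_window_size sets how many recent episodes the rollout/ep_rew_mean and rollout/ep_len_mean logs are averaged over (default 100):

from sb3_contrib import QRDQN

# Average the logged episode statistics over the last 10 episodes instead of 100
model = QRDQN("MlpPolicy", "CartPole-v1", stats_window_size=10, verbose=1)
model.learn(10_000)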

Others:

  • Moved to pyproject.toml
  • Added github issue forms
  • Fixed Atari Roms download in CI
  • Fixed sb3_contrib/qrdqn/*.py type hints
  • Switched from flake8 to ruff

Documentation:

  • Added warning about potential crashes caused by check_env in the MaskablePPO docs (@AlexPasqua)

v1.7.0

1 year ago

Warning Shared layers in the MLP policy (mlp_extractor) are now deprecated for PPO, A2C and TRPO. This feature will be removed in SB3 v1.8.0, and net_arch=[64, 64] will then create separate networks with the same architecture, to be consistent with the off-policy algorithms.
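
A minimal sketch of what the future default will look like (illustrative env and sizes):

from sb3_contrib import TRPO

# Two separate 64-64 networks, one for the policy and one for the value function,
# instead of layers shared between them
model = TRPO(
    "MlpPolicy",
    "Pendulum-v1",
    policy_kwargs=dict(net_arch=[64, 64]),
)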

Note TRPO models saved with SB3 < 1.7.0 will show a warning about missing keys in the state dict when loaded with SB3 >= 1.7.0. To suppress the warning, simply save the model again. You can find more info in issue #1233.

Breaking Changes:

  • Removed the deprecated create_eval_env, eval_env, eval_log_path, n_eval_episodes and eval_freq parameters; please use an EvalCallback instead (example below)
  • Removed deprecated sde_net_arch parameter
  • Upgraded to Stable-Baselines3 >= 1.7.0
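
A minimal sketch of the replacement pattern with an EvalCallback (env id and frequencies are illustrative):

import gym
from sb3_contrib import TQC
from stable_baselines3.common.callbacks import EvalCallback

# Periodic evaluation on a separate env, replacing eval_env/eval_freq/n_eval_episodes
eval_callback = EvalCallback(gym.make("Pendulum-v1"), eval_freq=1_000, n_eval_episodes=5)
model = TQC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=10_000, callback=eval_callback)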

New Features:

  • Introduced mypy type checking
  • Added support for Python 3.10
  • Added with_bias parameter to ARSPolicy
  • Added the option to have a non-shared features extractor between actor and critic in on-policy algorithms (@AlexPasqua) (example below)
  • Features extractors now properly support unnormalized image-like observations (3D tensor) when passing normalize_images=False
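
A minimal sketch of these two policy options, shown on TRPO with an illustrative env:

from sb3_contrib import TRPO

model = TRPO(
    "MlpPolicy",
    "Pendulum-v1",
    policy_kwargs=dict(
        # use separate features extractors for the actor and the critic
        share_features_extractor=False,
        # for image-like observations that are already scaled, you would also
        # pass normalize_images=False to skip the [0, 255] -> [0, 1] rescaling
    ),
)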

Bug Fixes:

  • Fixed a bug in RecurrentPPO where the LSTM states were incorrectly reshaped for n_lstm_layers > 1 (thanks @kolbytn)
  • Fixed RuntimeError: rnn: hx is not contiguous while predicting terminal values for RecurrentPPO when n_lstm_layers > 1

Deprecations:

  • You should now explicitly pass a features_extractor parameter when calling extract_features()
  • Deprecated shared layers in MlpExtractor (@AlexPasqua)

Others:

  • Fixed flake8 config
  • Fixed sb3_contrib/common/utils.py type hint
  • Fixed sb3_contrib/common/recurrent/type_aliases.py type hint
  • Fixed sb3_contrib/ars/policies.py type hint
  • Exposed modules in __init__.py with __all__ attribute (@ZikangXiong)
  • Removed ignores on Flake8 F401 (@ZikangXiong)
  • Upgraded GitHub CI/setup-python to v4 and checkout to v3
  • Tensors are now constructed directly on the device
  • Standardized the use of from gym import spaces

v1.6.2

1 year ago

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 1.6.2

New Features:

  • Added a progress_bar argument to the learn() method, displayed using the tqdm and rich packages
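
A minimal usage sketch (illustrative env and timesteps):

from sb3_contrib import TQC

model = TQC("MlpPolicy", "Pendulum-v1")
# Display a progress bar during training (requires tqdm and rich)
model.learn(total_timesteps=10_000, progress_bar=True)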

Deprecations:

  • Deprecated the eval_env, eval_freq and create_eval_env parameters

Others:

  • Fixed the return type of .load() methods so that they now use TypeVar

v1.6.1

1 year ago

Breaking Changes:

  • Fixed the issue that predict() did not always return the action as an np.ndarray (@qgallouedec)
  • Upgraded to Stable-Baselines3 >= 1.6.1

Bug Fixes:

  • Fixed the issue of wrongly passing policy arguments when using CnnLstmPolicy or MultiInputLstmPolicy with RecurrentPPO (@mlodel)
  • Fixed a division-by-zero error when computing FPS when very little time has elapsed, on operating systems with low-precision timers
  • Fixed calling child callbacks in MaskableEvalCallback (@CppMaster)
  • Fixed missing verbose parameter passing in the MaskableEvalCallback constructor (@burakdmb)
  • Fixed the issue that the running_mean and running_var properties of batch norm layers were not updated when updating the target network in QRDQN and TQC (@honglu2875)

Others:

  • Changed the default buffer device from "cpu" to "auto"

v1.6.0

1 year ago

Breaking Changes:

  • Upgraded to Stable-Baselines3 >= 1.6.0
  • Changed the way policy "aliases" are handled ("MlpPolicy", "CnnPolicy", ...): the former register_policy helper and policy_base parameter were removed in favor of policy_aliases static attributes (@Gregwar)
  • Renamed rollout/exploration rate key to rollout/exploration_rate for QRDQN (to be consistent with SB3 DQN)
  • Upgraded to Python 3.7+ syntax using pyupgrade
  • SB3 now requires PyTorch >= 1.11
  • Changed the default network architecture when using CnnPolicy or MultiInputPolicy with TQC: share_features_extractor is now set to False by default and net_arch=[256, 256] is used (instead of the previous net_arch=[])
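
A minimal sketch of how to recover the previous defaults (the env id is illustrative; CnnPolicy for TQC needs image observations and continuous actions, e.g. CarRacing with box2d installed):

from sb3_contrib import TQC

model = TQC(
    "CnnPolicy",
    "CarRacing-v0",  # illustrative image-based env with continuous actions
    policy_kwargs=dict(share_features_extractor=True, net_arch=[]),
)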

New Features:

  • Added RecurrentPPO (aka PPO LSTM)
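
A minimal usage sketch of the new algorithm (env and timesteps are illustrative):

from sb3_contrib import RecurrentPPO

# PPO with an LSTM policy, using the new MlpLstmPolicy alias
model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)
model.learn(5_000)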

Bug Fixes:

  • Fixed a bug in RecurrentPPO when calculating the masked loss functions (@rnederstigt)
  • Fixed a bug in TRPO where the KL divergence was not implemented for MultiDiscrete spaces

v1.5.0

2 years ago

Breaking Changes:

  • Switched minimum Gym version to 0.21.0.
  • Upgraded to Stable-Baselines3 >= 1.5.0

New Features:

  • Allow PPO to turn off advantage normalization (see PR #61) (@vwxyzjn)
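
A minimal sketch, assuming this refers to the normalize_advantage argument (shown here on MaskablePPO with the toy InvalidActionEnvDiscrete env):

from sb3_contrib import MaskablePPO
from sb3_contrib.common.envs import InvalidActionEnvDiscrete

env = InvalidActionEnvDiscrete(dim=20, n_invalid_actions=10)
# Disable advantage normalization in the policy loss
model = MaskablePPO("MlpPolicy", env, normalize_advantage=False)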

Bug Fixes:

  • Removed explicit calls to the forward() method, as per PyTorch guidelines