MushroomRL Versions

A Python library for Reinforcement Learning.

1.10.1

3 weeks ago
  • Fixed loading of alpha parameter in the SAC algorithm

1.10.0

6 months ago
  • Implemented a record interface for recording videos of environments
  • Updated the MuJoCo interface with support for multiple environment XMLs
  • Updated the MuJoCo viewer with headless rendering, support for different backends, advanced options, and multiple views
  • Improved the SAC algorithm
  • Bug fixes and code cleanup

1.9.2

11 months ago

Minor release with bugfixes and improvements:

  • Fixed MuJoCo viewer window scaling on macOS
  • Improved polynomial features and Gaussian radial basis functions
  • Added a new ProMP policy
  • Fixed a bug in BoltzmannTorchPolicy; the policy now works correctly with PPO and TRPO
  • Minor bug fixes in serialization

1.9.1

1 year ago

Minor changes to the MuJoCo interface:

  • Updated to support the latest MuJoCo version (2.3.2)
  • Added support for resetting MuJoCo environment states from an observation
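The reset-from-observation idea can be illustrated with a minimal sketch. Note that `ToyEnv` and its `reset(obs)` signature are hypothetical stand-ins, not the actual MushroomRL MuJoCo interface: the point is only that an observation which fully encodes the state can be used to restore it.

```python
# Illustrative sketch: resetting an environment from an observation.
# ToyEnv is a hypothetical example, not the MushroomRL MuJoCo API.

class ToyEnv:
    """1-D point environment whose full state is its observation."""

    def __init__(self):
        self._x = 0.0

    def reset(self, obs=None):
        # If an observation is given, restore the state it encodes;
        # otherwise start from the default initial state.
        self._x = 0.0 if obs is None else float(obs)
        return self._x

    def step(self, action):
        self._x += action
        return self._x

env = ToyEnv()
env.reset()
env.step(3.0)
saved_obs = env.step(1.0)       # the observation fully describes the state
restored = env.reset(saved_obs) # resume from exactly that state
```

This pattern is useful, e.g., for replaying or branching rollouts from a previously visited state.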

1.9.0

1 year ago
  • Removed every Cython dependency; the package is now easier to install!
  • Removed the humanoid environment, which depended on Cython
  • Improved PyBullet environments
  • New MuJoCo interface using the native DeepMind MuJoCo bindings
  • New air hockey environments implemented with MuJoCo
  • The core now collects environment info and passes it to the agent's fit method. This breaks the previous MushroomRL interface, but enables support for different kinds of algorithms (e.g., safe RL approaches)
  • Improvements in the documentation
  • Minor updates and bug fixes
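The "environment info flows into fit" change above can be sketched as follows. All names here (`run_core`, the `fit(dataset, info)` signature, the `constraint_violation` field) are simplified illustrations of the idea, not MushroomRL's exact API: the loop collects per-step info dicts from the environment and forwards them to the agent together with the transitions.

```python
# Illustrative sketch: a core loop that collects per-step environment
# info and passes it to the agent's fit method (names are hypothetical).

class CountingAgent:
    def __init__(self):
        self.seen_infos = []

    def draw_action(self, state):
        return 1  # trivial constant policy for the sketch

    def fit(self, dataset, info):
        # The core forwards per-step environment info alongside the
        # transitions, e.g. constraint signals for safe-RL algorithms.
        self.seen_infos.append(info)

class ToyEnv:
    def __init__(self):
        self.t = 0

    def step(self, action):
        self.t += 1
        obs, reward = self.t, 1.0
        info = {"constraint_violation": False}  # hypothetical info field
        return obs, reward, info

def run_core(agent, env, n_steps):
    dataset, infos = [], []
    for _ in range(n_steps):
        obs, reward, info = env.step(agent.draw_action(None))
        dataset.append((obs, reward))
        infos.append(info)
    agent.fit(dataset, infos)

agent = CountingAgent()
run_core(agent, ToyEnv(), n_steps=5)
```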

1.7.2

1 year ago
  • Added plotting functionality, previously from MushroomRL Benchmark
  • Fixed MuJoCo interface
  • Added missing discount factor to eNAC update
  • Added real-time rendering for Gym environments
  • The PyBullet interface now enforces joint torque limits

1.7.1

2 years ago
  • Improved documentation;
  • Added MORE algorithm;
  • Added Quantile Regression DQN algorithm;
  • Added wrappers for Minigrid, Habitat, iGibson (thanks to @sparisi);
  • Added AirHockey environments (still experimental, these environments will probably change in the future);
  • Upgraded to new OpenAI gym version;
  • Bug fixes in NoisyDQN and LSPI;
  • Fixed ClippedGaussianPolicy; it now works as expected;
  • Improved the DMControl environment: added pixel support and arm environments, e.g. 'manipulator' (thanks to @jdsalmonson).

1.7.0

2 years ago
  • Agent and Environment interfaces are now in the core.py module;
  • Added an easy interface for environment registration: environments can now be created by name;
  • Updated the documentation;
  • New tutorials added;
  • Improved CONTRIBUTING.md file;
  • Added ConstrainedREPS;
  • Fixed a bug in GPOMDP;
  • Improved logging of loss in regressor fit function;
  • General cleanup of environment constructors;
  • Improved the PyBullet environment;
  • Improved Voronoi tiles;
  • Predict params added in DQN and Actor-Critic algorithms;
  • Added support to Logger in DQN.
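The name-based environment registration mentioned above can be sketched with a minimal registry. The `register`/`make` functions and `GridWorld` class below are illustrative assumptions, not MushroomRL's actual implementation; they only show the pattern of creating an environment from its name string.

```python
# Illustrative sketch of name-based environment registration
# (hypothetical registry, not MushroomRL's actual interface).

_REGISTRY = {}

def register(env_class):
    # Index the class by its name so instances can be built from a string.
    _REGISTRY[env_class.__name__] = env_class
    return env_class

def make(name, *args, **kwargs):
    # Look the class up by name and construct it with the given arguments.
    return _REGISTRY[name](*args, **kwargs)

@register
class GridWorld:
    def __init__(self, size=3):
        self.size = size

env = make("GridWorld", size=5)
```

Creating environments by name decouples experiment configuration (e.g., a config file holding the string "GridWorld") from the code that defines the environment classes.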

1.6.1

3 years ago
  • Replay memory can now return a truncated n-step return;
  • Rainbow and NoisyDQN algorithms added;
  • Improved PyBullet environment;
  • Added clipped Gaussian policy;
  • Prediction parameters added in policy and approximator.
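The truncated n-step return mentioned above sums at most n discounted rewards, stopping early if the trajectory ends. A minimal sketch of the computation (not the replay-memory implementation itself):

```python
# Illustrative sketch of a truncated n-step return: discounted sum of
# at most n rewards, truncated if fewer rewards are available.

def n_step_return(rewards, gamma, n):
    """Discounted sum of the first min(n, len(rewards)) rewards."""
    ret = 0.0
    for k, r in enumerate(rewards[:n]):
        ret += (gamma ** k) * r
    return ret

# With gamma = 0.5, the 2-step return over rewards [1, 1, 1, 1]
# is 1 + 0.5 * 1 = 1.5; a 3-step request over a single reward
# truncates to just that reward.
```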

1.6.0

3 years ago
  • Added MushroomRL logger;
  • Added support for wrapper args in the Gym environment;
  • Fixes in tiles;
  • Dueling DQN added;
  • MDPInfo and spaces are now serializable;
  • Optimizers are now serializable;
  • DoubleFQI and BoostedFQI split into separate modules;
  • Minor bug fixes.