rlberry Versions

An easy-to-use reinforcement learning library for research and education.

v0.2

2 years ago

Improving interface and tools for parallel execution (#50)

  • AgentStats renamed to AgentManager.
  • AgentManager can handle agents that cannot be pickled.
  • Agent interface requires an eval() method instead of policy(), to handle more general agents (e.g. reward-free agents, POMDPs, etc.).
  • Multi-processing and multi-threading are now handled with ProcessPoolExecutor and ThreadPoolExecutor (allowing nested processes, for example). Processes are created with spawn (jax does not work with fork, see #51).
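
The eval() interface change can be illustrated with a toy agent. The class below is a hypothetical sketch, not rlberry's actual implementation: the point is that eval() returns a scalar evaluation of the trained agent, which remains meaningful in settings where policy(state) does not (e.g. reward-free exploration).

```python
class ToyBanditAgent:
    """Hypothetical agent sketch with a fit()/eval() interface.

    eval() returns a scalar score for the trained agent, rather than
    an action for a given state as policy() would.
    """

    def __init__(self, arm_means):
        self.arm_means = list(arm_means)  # toy setting: known arm means
        self.counts = [0] * len(self.arm_means)

    def fit(self, budget):
        # placeholder training loop: sample arms in round-robin order
        for t in range(budget):
            self.counts[t % len(self.arm_means)] += 1

    def eval(self, n_simulations=10):
        # estimate the value of the greedy arm (deterministic in this toy)
        best_arm = max(range(len(self.arm_means)),
                       key=lambda a: self.arm_means[a])
        returns = [self.arm_means[best_arm] for _ in range(n_simulations)]
        return sum(returns) / n_simulations
```

A manager-style tool only needs fit(budget) and eval() to train and compare such agents, regardless of whether they define a state-to-action policy.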

New experimental features (see #51, #62)

  • JAX implementation of DQN, with a replay buffer based on reverb.
  • rlberry.network: server and client interfaces to exchange messages via sockets.
  • RemoteAgentManager to train agents on a remote server and gather the results locally (using rlberry.network).
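
A minimal sketch of a client and server exchanging one JSON message over a socket, in the spirit of the server/client interfaces described above. All function names here are hypothetical and the actual rlberry.network wire protocol is not shown; this only illustrates the general pattern.

```python
import json
import socket
import threading


def serve_one_message(server_sock):
    """Accept a single connection and acknowledge the received command."""
    conn, _ = server_sock.accept()
    with conn:
        msg = json.loads(conn.recv(4096).decode("utf-8"))
        reply = {"info": "received", "command": msg["command"]}
        conn.sendall(json.dumps(reply).encode("utf-8"))


def send_message(port, command):
    """Connect to the server and exchange one JSON message."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(json.dumps({"command": command}).encode("utf-8"))
        return json.loads(sock.recv(4096).decode("utf-8"))


def demo():
    # bind to port 0 so the OS picks a free port
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    worker = threading.Thread(target=serve_one_message, args=(server,))
    worker.start()
    reply = send_message(port, "fit")
    worker.join()
    server.close()
    return reply
```

A remote manager built on such a channel would send training commands (e.g. "fit") to the server and later request the logged results back.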

Logging and rendering:

  • Data logging with the new DefaultWriter, and improved evaluation and plotting methods in rlberry.manager.evaluation.
  • Fix rendering bug with OpenGL (bf606b44aaba1b918daf3dcc02be96a8ef5436b4).
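
As a rough sketch of what a DefaultWriter-style logger does, the hypothetical class below (assuming a tensorboard-like add_scalar signature, not rlberry's actual API) accumulates (step, value) pairs per tag, which evaluation and plot methods can then read back.

```python
from collections import defaultdict


class SimpleWriter:
    """Hypothetical stand-in for a DefaultWriter-style data logger."""

    def __init__(self):
        # tag -> list of (global_step, value) pairs
        self.data = defaultdict(list)

    def add_scalar(self, tag, value, global_step):
        """Record one scalar value under a tag at a given step."""
        self.data[tag].append((global_step, value))

    def last(self, tag):
        """Return the most recently logged value for a tag."""
        return self.data[tag][-1][1]
```

Plotting utilities can then iterate over writer.data to draw, for instance, episode rewards against training steps.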

Bug fixes.

v0.1

3 years ago