Important note: fastrl version 2 is being developed in the fastrl repository. See the link in the README.
The library currently supports easy training of DDPG-based and DQN-based models. You can also save and reload them.
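The library's own save/reload API is not shown here; as a rough illustration of the checkpoint pattern, below is a minimal stdlib-only sketch. The dictionary layout and file name are hypothetical, not the library's actual format.

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for a trained agent's state (not the library's
# own checkpoint format): weights plus enough metadata to resume later.
checkpoint = {
    "model": "DQN",
    "weights": {"layer1": [0.1, -0.2], "layer2": [0.3]},
    "episodes_trained": 500,
}

# Save the checkpoint to disk.
path = os.path.join(tempfile.gettempdir(), "agent_checkpoint.pkl")
with open(path, "wb") as f:
    pickle.dump(checkpoint, f)

# Reload it later and confirm the round trip.
with open(path, "rb") as f:
    restored = pickle.load(f)

assert restored == checkpoint
```

The same pattern applies whether the payload is a plain dict, as here, or a framework-specific state dict.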
You can use the interpreter objects to graph rewards, compare rewards across models, view episodes at different points in the agent's training, and more.
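As a rough sketch of what a reward interpreter does (the class and method names below are hypothetical, not the library's actual API), here is a minimal stdlib-only version that smooths one model's reward curve and compares mean reward against another model's:

```python
class RewardInterpreter:
    """Hypothetical minimal interpreter: smooths and compares
    per-episode reward curves (not the library's real API)."""

    def __init__(self, rewards):
        self.rewards = list(rewards)

    def smoothed(self, window=3):
        # Simple trailing moving average over episode rewards,
        # the kind of curve you would plot for a training run.
        out = []
        for i in range(len(self.rewards)):
            chunk = self.rewards[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    def compare(self, other):
        # Difference in mean reward versus another model's interpreter.
        mine = sum(self.rewards) / len(self.rewards)
        theirs = sum(other.rewards) / len(other.rewards)
        return mine - theirs


# Illustrative reward histories for two models:
dqn = RewardInterpreter([1, 2, 3, 5, 8])
ddpg = RewardInterpreter([1, 1, 2, 3, 4])
print(dqn.smoothed(window=2))       # [1.0, 1.5, 2.5, 4.0, 6.5]
print(round(dqn.compare(ddpg), 2))  # 1.6
```

A real interpreter would feed curves like `smoothed()` into a plotting backend instead of returning raw lists.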
Notes: Currently, the next obstacle is memory efficiency. We will be adding more models, but will also be addressing memory issues, possibly by offloading to storage.
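One possible shape for the storage offload mentioned above (entirely an assumption, not the library's planned implementation): keep only the most recent experience in RAM and spill older entries to a file on disk.

```python
import os
import pickle
import tempfile


class SpillBuffer:
    """Hypothetical buffer that keeps `capacity` items in RAM and
    appends the oldest items to a pickle file on disk. This is an
    illustrative assumption, not the library's memory strategy."""

    def __init__(self, capacity, path=None):
        self.capacity = capacity
        self.ram = []
        self.path = path or os.path.join(tempfile.mkdtemp(), "spill.pkl")
        self.on_disk = 0

    def append(self, item):
        self.ram.append(item)
        if len(self.ram) > self.capacity:
            # Evict the oldest in-RAM item to disk.
            oldest = self.ram.pop(0)
            with open(self.path, "ab") as f:
                pickle.dump(oldest, f)
            self.on_disk += 1

    def __len__(self):
        return len(self.ram) + self.on_disk


buf = SpillBuffer(capacity=2)
for step in range(5):
    buf.append({"step": step, "reward": step * 0.5})
print(len(buf), len(buf.ram), buf.on_disk)  # 5 2 3
```

The trade-off is the usual one: disk spills cap RAM growth at the cost of slower access to old experience.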
All GIFs have been added. A few more may be added in later versions, but all of the base environment runs are there.
This release still does not contain GIFs. It is primarily a test of the Azure pipeline that publishes packages for us. Once a PR is merged into master, the new version will automatically be published to PyPI.
The next release will have GIFs, followed soon after by a redone README.
Some key takeaways from this release:
Why the changes?
Right now, some basic model configurations are complete with unit tests. Moving forward, we will be validating that model performance meets expectations on a set of environments.
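A performance check of that kind could look something like the following sketch. The environment names, thresholds, and function names are all illustrative assumptions, not the project's actual test suite.

```python
# Hypothetical performance regression check: each environment gets a
# minimum acceptable mean episode reward (thresholds are made up here).
EXPECTED_MIN_REWARD = {"CartPole-v1": 150.0, "MountainCar-v0": -140.0}


def mean_reward(rewards):
    return sum(rewards) / len(rewards)


def check_performance(env_name, episode_rewards):
    # Fail loudly if the trained model underperforms its expected floor.
    threshold = EXPECTED_MIN_REWARD[env_name]
    assert mean_reward(episode_rewards) >= threshold, (
        f"{env_name}: mean reward {mean_reward(episode_rewards):.1f} "
        f"below expected minimum {threshold}"
    )


# A passing run on simulated episode returns:
check_performance("CartPole-v1", [160.0, 200.0, 175.0])
```

In a real suite, `episode_rewards` would come from evaluation rollouts of a freshly trained model rather than hard-coded numbers.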