Model-based Reinforcement Learning Framework
Baconian [beˈkonin] is a toolbox for model-based reinforcement learning, developed by CAP, with user-friendly modules for experiment set-up, logging, and visualization. We aim to provide a flexible, reusable, and modularized framework that lets users easily set up their model-based RL experiments by reusing the modules we provide.
You can install it easily (with Python 3.5/3.6/3.7 on Ubuntu 16.04/18.04):

```shell
# install tensorflow with or without GPU support, depending on your machine
pip install tensorflow-gpu==1.15.2
# or
pip install tensorflow==1.15.2
pip install baconian
```
For more advanced usage, such as using the MuJoCo environment, please refer to our documentation page.
For previous news, please go here
We support Python 3.5, 3.6, and 3.7 on Ubuntu 16.04 or 18.04. Documentation is available at http://baconian-public.readthedocs.io/
Sutton, Richard S. "Dyna, an integrated architecture for learning, planning, and reacting." ACM Sigart Bulletin 2.4 (1991): 160-163.
Abbeel, P. "Optimal Control for Linear Dynamical Systems and Quadratic Cost (‘LQR’)." (2012).
Garcia, Carlos E., David M. Prett, and Manfred Morari. "Model predictive control: theory and practice—a survey." Automatica 25.3 (1989): 335-348.
Kurutach, Thanard, et al. "Model-ensemble trust-region policy optimization." arXiv preprint arXiv:1802.10592 (2018).
Mnih, Volodymyr, et al. "Playing atari with deep reinforcement learning." arXiv preprint arXiv:1312.5602 (2013).
Schulman, John, et al. "Proximal policy optimization algorithms." arXiv preprint arXiv:1707.06347 (2017).
Lillicrap, Timothy P., et al. "Continuous control with deep reinforcement learning." arXiv preprint arXiv:1509.02971 (2015).
Rao, Anil V. "A survey of numerical methods for optimal control." Advances in the Astronautical Sciences 135.1 (2009): 497-528.
Nagabandi, Anusha, et al. "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning." 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.
Levine, Sergey, et al. "End-to-end training of deep visuomotor policies." The Journal of Machine Learning Research 17.1 (2016): 1334-1373.
Thanks to the following open-source projects:
If you find Baconian useful for your research, please consider citing our demo paper:
```
@article{linsen2019baconian,
  title={Baconian: A Unified Opensource Framework for Model-Based Reinforcement Learning},
  author={Linsen, Dong and Guanyu, Gao and Yuanlong, Li and Yonggang, Wen},
  journal={arXiv preprint arXiv:1904.10762},
  year={2019}
}
```
If you find any bugs or issues, please open an issue or send me an email ([email protected]) with detailed information. I appreciate your help!