Applied Reinforcement Learning

Reinforcement Learning and Decision Making tutorials explained at an intuitive level and with Jupyter Notebooks


I've been studying reinforcement learning and decision-making for a couple of years now. One of the most difficult things I've encountered is not necessarily the concepts themselves, but the way those concepts are explained. To me, learning happens when one is able to make a connection with the concepts being taught. That usually requires an intuitive explanation, and a hands-on approach helps build that kind of understanding.

My goal for this repository is to create, with the community, a resource that helps newcomers understand reinforcement learning in an intuitive way. Consider what you see here as my initial attempt to teach some of these concepts as plainly and simply as I possibly can.

If you'd like to collaborate, whether it's fixing a typo, adding a section to the text, repairing a notebook, or contributing a whole new one, please feel free to open an issue and/or send a pull request. As long as your pull request aligns with the goal of the repository, it is very likely we will merge it. I'm not the best teacher or reinforcement learning researcher, but I do believe we can make reinforcement learning and decision-making easy for anyone to understand. Well, at least easier.

Notebooks Installation

This repository contains Jupyter Notebooks to follow along with the lectures. However, several packages and applications need to be installed. To make things easier on you, I took some extra time to set up a reproducible environment that you can use to follow along.

Install git

Follow the instructions at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

Install Docker

Follow the instructions at https://docs.docker.com/engine/getstarted/step_one/#step-2-install-docker

Run Notebooks

TL;DR version

  1. git clone git@github.com:mimoralea/applied-reinforcement-learning.git && cd applied-reinforcement-learning
  2. docker pull mimoralea/openai-gym:v1
  3. docker run -it --rm -p 8888:8888 -p 6006:6006 -v $PWD/notebooks/:/mnt/notebooks/ mimoralea/openai-gym:v1

A little more detailed version:

  1. Clone the repository to a desired location (e.g. git clone git@github.com:mimoralea/applied-reinforcement-learning.git ~/Projects/applied-reinforcement-learning)
  2. Change into the repository directory (e.g. cd ~/Projects/applied-reinforcement-learning)
  3. Either build the Docker image yourself or pull the pre-built image:
    3.1. To build it, use: docker build -t mimoralea/openai-gym:v1 .
    3.2. To pull it from Docker Hub, use: docker pull mimoralea/openai-gym:v1
  4. Run the container: docker run -it --rm -p 8888:8888 -p 6006:6006 -v $PWD/notebooks/:/mnt/notebooks/ mimoralea/openai-gym:v1

Open the Notebooks in your browser:

  • http://localhost:8888 (or follow the link printed by the run command above, which includes the access token)
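
Once Jupyter is up, a quick sanity check confirms the environment works end to end. The snippet below is a minimal sketch to run in a new notebook cell, assuming the image ships the OpenAI Gym package with the classic CartPole-v0 environment and the pre-1.0 Gym API (where step returns four values); adjust the environment id if your build differs.

  import gym

  # Create a simple environment to confirm Gym is importable inside the container
  env = gym.make('CartPole-v0')

  state = env.reset()
  done = False
  total_reward = 0.0

  # Run one episode with random actions
  while not done:
      action = env.action_space.sample()            # sample a random action
      state, reward, done, info = env.step(action)  # old Gym API: four return values
      total_reward += reward

  print('episode finished with return:', total_reward)
  env.close()

If the cell prints an episode return without errors, the notebook environment is ready to use.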

Open TensorBoard at the following address:

  • http://localhost:6006

This will help you visualize the neural networks in the lessons that use function approximation.
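
If you want to log your own values and inspect them in TensorBoard, here is a minimal sketch using the TensorFlow 1.x summary API (the bundled TensorBoard suggests TensorFlow is available in the image, but treat that as an assumption). The log directory /mnt/notebooks/logs is also an assumption; point the writer at whichever directory the container's TensorBoard process is actually watching.

  import tensorflow as tf

  # Build a tiny graph with a single scalar summary (TensorFlow 1.x summary API)
  tf.reset_default_graph()
  loss = tf.placeholder(tf.float32, shape=(), name='loss')
  summary_op = tf.summary.scalar('loss', loss)

  with tf.Session() as sess:
      # NOTE: the log directory below is an assumption; use the directory
      # the container's TensorBoard process is watching
      writer = tf.summary.FileWriter('/mnt/notebooks/logs', sess.graph)
      for step in range(100):
          value = 1.0 / (step + 1)  # dummy, decreasing "loss" curve
          summary = sess.run(summary_op, feed_dict={loss: value})
          writer.add_summary(summary, step)
      writer.close()

After it runs, refreshing http://localhost:6006 should show the curve under the Scalars tab.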

Docker Tips

  • If you'd like to access a bash session of a running container:
    ◦ docker ps # shows the currently running containers -- note the id of the container you want to access
    ◦ docker exec --user root -it c3fbc82f1b49 /bin/bash # in this case c3fbc82f1b49 is the id
  • If you'd like to start a new container instance straight into bash (without running Jupyter or TensorBoard):
    ◦ docker run -it --rm mimoralea/openai-gym:v1 /bin/bash # runs the bash session as the Notebook user
    ◦ docker run --user root -e GRANT_SUDO=yes -it --rm mimoralea/openai-gym:v1 /bin/bash # runs the bash session as root