Learn Deep Learning The Hard Way
This is a set of small Deep Learning projects.
Components:

- `frontend` implements the user-facing UI and sends user requests to `backend/*`.
- `backend/web` schedules user requests on `pkg/etcd-queue`.
- `backend/worker` processes jobs from the queue and writes back the results.

Request/response types between components:

- `frontend` to `backend/web`: defined in `backend/web.Request` and `frontend/app/request.service.Request`.
- `backend/web` to `frontend`: defined in `pkg/etcd-queue.Item` and `frontend/app/request.service.Item`.
- `backend/web` to/from `backend/worker`: defined in `pkg/etcd-queue.Item` and `backend/worker/worker.py`.
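The shared queue item can be pictured as a small record that both the Go web server and the Python worker serialize. A minimal Python sketch (the field names here are assumptions for illustration, not the actual `pkg/etcd-queue.Item` fields):

```python
from dataclasses import dataclass

# Hypothetical sketch of the item passed through the queue; the real
# pkg/etcd-queue.Item may use different field names and types.
@dataclass
class Item:
    key: str           # etcd key the job is stored under
    value: str         # request payload / result, JSON-encoded
    progress: int = 0  # 0-100; the worker updates this as it runs

item = Item(key="/queue/cats/123", value='{"image": "..."}')
item.progress = 100  # worker marks the job done
print(item.progress)
```

The same record shape is what lets a Go producer and a Python consumer agree on a job's state via JSON.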
Notes:

- The queue is backed by embedded etcd.
- `pkg/etcd-queue` uses Watch to stream updates to `backend/worker` and `frontend`. This minimizes TCP socket creation and slow TCP starts (streaming vs. polling).
- This is a proof-of-concept. In production, I would use TensorFlow Serving to serve the pre-trained models, and a distributed etcd cluster for higher availability.
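The streaming model above can be mimicked in-process: the worker blocks on a single long-lived channel instead of reconnecting and polling. A hedged sketch, using a thread and a blocking queue as a stand-in for an etcd Watch stream:

```python
import queue
import threading

# Stand-in for pkg/etcd-queue: the worker blocks on one long-lived
# "watch" (jobs.get) rather than opening a new connection per poll.
jobs = queue.Queue()
results = []

def worker():
    while True:
        item = jobs.get()             # blocks until an update is streamed
        if item is None:              # sentinel: queue closed
            break
        results.append(item.upper())  # "process" the job, write back the result

t = threading.Thread(target=worker)
t.start()
for job in ["cat-photo-1", "cat-photo-2"]:
    jobs.put(job)
jobs.put(None)
t.join()
print(results)  # ['CAT-PHOTO-1', 'CAT-PHOTO-2']
```

The real system gets the same effect over the network: one watch stream per consumer, with etcd pushing events as they are written.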
To train the `cats` 5-layer Deep Neural Network model:
```bash
DATASETS_DIR=./datasets \
CATS_PARAM_PATH=./datasets/parameters-cats.npy \
python3 -m unittest backend.worker.cats.model_test
```
This persists the trained model parameters on disk, so that workers can load them later.
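The persisted parameters are a NumPy `.npy` file, which a worker can reload with `np.load`. A minimal sketch of the round trip (the parameter names, shapes, and `/tmp` path below are made up for illustration):

```python
import numpy as np

# Illustrative parameter dict; the real model persists its trained
# parameters to CATS_PARAM_PATH in the same .npy format.
params = {"W1": np.zeros((5, 3)), "b1": np.zeros((5, 1))}
np.save("/tmp/parameters-cats.npy", params)

# allow_pickle is required to load a dict payload; .item() unwraps the
# 0-d object array back into the original dict.
loaded = np.load("/tmp/parameters-cats.npy", allow_pickle=True).item()
print(loaded["W1"].shape)  # (5, 3)
```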
To run the application (backend, web UI) locally on http://localhost:4200:
```bash
./scripts/docker/run-app.sh
./scripts/docker/run-worker-python3-cpu.sh

<<COMMENT
# to serve on port :80
./scripts/docker/run-reverse-proxy.sh
COMMENT
```
Open http://localhost:4200/cats and try other cat photos.
To update dependencies:
```bash
./scripts/dep/go.sh
./scripts/dep/frontend.sh
```
To update `Dockerfile`:

```bash
# update 'container.yaml' and then
./scripts/docker/gen.sh
```
To build Docker container images:
```bash
./scripts/docker/build-app.sh
./scripts/docker/build-python3-cpu.sh
./scripts/docker/build-python3-gpu.sh
./scripts/docker/build-r.sh
./scripts/docker/build-reverse-proxy.sh
```
To run tests:
```bash
./scripts/tests/frontend.sh
./scripts/tests/go.sh

go install -v ./cmd/backend-web-server
DATASETS_DIR=./datasets \
CATS_PARAM_PATH=./datasets/parameters-cats.npy \
ETCD_EXEC=/opt/bin/etcd \
SERVER_EXEC=${GOPATH}/bin/backend-web-server \
./scripts/tests/python3.sh
```
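The Python test run is configured entirely through environment variables, so a harness can read them with fallbacks for local runs. A hedged sketch of that pattern (the defaults below are assumptions, not what `scripts/tests/python3.sh` actually does):

```python
import os

# Read the same variables the invocation above sets; the fallback
# values here are illustrative defaults for a local checkout.
datasets_dir = os.environ.get("DATASETS_DIR", "./datasets")
param_path = os.environ.get(
    "CATS_PARAM_PATH", os.path.join(datasets_dir, "parameters-cats.npy"))
etcd_exec = os.environ.get("ETCD_EXEC", "/opt/bin/etcd")
print(param_path)
```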
To run tests in container:
```bash
./scripts/docker/test-app.sh
./scripts/docker/test-python3-cpu.sh
```
To run IPython Notebook locally on http://localhost:8888/tree:
```bash
./scripts/docker/run-ipython-python3-cpu.sh
./scripts/docker/run-ipython-python3-gpu.sh
./scripts/docker/run-r.sh
```
To deploy `dplearn` and IPython Notebook on Google Cloud Platform (CPU or GPU):
```bash
GCP_KEY_PATH=/etc/gcp-key-dplearn.json ./scripts/gcp/ubuntu-python3-cpu.gcp.sh
GCP_KEY_PATH=/etc/gcp-key-dplearn.json ./scripts/gcp/ubuntu-python3-gpu.gcp.sh

# create a Google Cloud Platform Compute Engine VM with a start-up script
# to provision GPU, init system, reverse proxy, and others
# (see ./scripts/gcp/ubuntu-python3-gpu.ansible.sh for more detail)
```