dplearn

(badges: Go Report Card, Build Status, Godoc)

Learn Deep Learning The Hard Way.

It is a set of small projects on Deep Learning.

System Overview

(architecture diagram: dplearn-architecture)

Notes:

  • Why is the queue service needed? To handle concurrent user requests: the worker has limited resources, so requests are serialized into the queue.
  • Why Go? To use embedded etcd natively.
  • Why etcd? To use the etcd Watch API. pkg/etcd-queue uses Watch to stream updates to the backend/worker and frontend, which minimizes TCP socket creation and slow TCP starts (streaming instead of polling); see the sketch below.
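
For illustration, here is a minimal sketch of the Watch-based streaming idea from a consumer's point of view, using the third-party python-etcd3 client. The key prefix is hypothetical; the repository itself implements this in Go inside pkg/etcd-queue.

import etcd3

# Hypothetical key prefix; a single long-lived Watch stream replaces repeated polling.
client = etcd3.client(host="localhost", port=2379)
events, cancel = client.watch_prefix("_dplearn/jobs/")
try:
    for event in events:
        # each event carries the key/value update pushed by the etcd server
        print(event.key.decode(), event.value.decode())
finally:
    cancel()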

This is a proof of concept. In production, I would use TensorFlow Serving to serve the pre-trained models and a distributed etcd cluster for higher availability.

↑ top

Cats vs. Non-Cat

To train the cat classifier (a 5-layer Deep Neural Network model):

DATASETS_DIR=./datasets \
  CATS_PARAM_PATH=./datasets/parameters-cats.npy \
  python3 -m unittest backend.worker.cats.model_test

This persists the trained model parameters to disk, so that workers can load them later.
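
The loading side might look like the following minimal sketch; it assumes the .npy file at CATS_PARAM_PATH holds a pickled dict of per-layer weight and bias arrays (the exact layout is defined by the worker code in backend/worker/cats, not shown here).

import numpy as np

# Assumes the file stores a pickled dict of per-layer arrays (e.g. W1, b1, ...).
params = np.load("./datasets/parameters-cats.npy", allow_pickle=True).item()
for name, array in sorted(params.items()):
    print(name, array.shape)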

(screenshot: dplearn-cats)

↑ top

Workflow

To run the application (backend, web UI) locally on http://localhost:4200:

./scripts/docker/run-app.sh
./scripts/docker/run-worker-python3-cpu.sh

<<COMMENT
# to serve on port :80
./scripts/docker/run-reverse-proxy.sh
COMMENT

Open http://localhost:4200/cats and try other cat photos.

To update dependencies:

./scripts/dep/go.sh
./scripts/dep/frontend.sh

To update the Dockerfile:

# update 'container.yaml' and then
./scripts/docker/gen.sh

To build Docker container images:

./scripts/docker/build-app.sh
./scripts/docker/build-python3-cpu.sh
./scripts/docker/build-python3-gpu.sh
./scripts/docker/build-r.sh
./scripts/docker/build-reverse-proxy.sh

To run tests:

./scripts/tests/frontend.sh
./scripts/tests/go.sh

# build the backend web server binary (referenced as SERVER_EXEC below)
go install -v ./cmd/backend-web-server

DATASETS_DIR=./datasets \
  CATS_PARAM_PATH=./datasets/parameters-cats.npy \
  ETCD_EXEC=/opt/bin/etcd \
  SERVER_EXEC=${GOPATH}/bin/backend-web-server \
  ./scripts/tests/python3.sh

To run tests in a container:

./scripts/docker/test-app.sh
./scripts/docker/test-python3-cpu.sh

To run IPython Notebook locally on http://localhost:8888/tree:

./scripts/docker/run-ipython-python3-cpu.sh
./scripts/docker/run-ipython-python3-gpu.sh
./scripts/docker/run-r.sh

To deploy dplearn and IPython Notebook on a Google Cloud Platform CPU or GPU instance:

GCP_KEY_PATH=/etc/gcp-key-dplearn.json ./scripts/gcp/ubuntu-python3-cpu.gcp.sh
GCP_KEY_PATH=/etc/gcp-key-dplearn.json ./scripts/gcp/ubuntu-python3-gpu.gcp.sh

# create a Google Cloud Platform Compute Engine VM with a start-up script
# to provision GPU, init system, reverse proxy, and others
# (see ./scripts/gcp/ubuntu-python3-gpu.ansible.sh for more detail)

↑ top
