Computer vision container that includes Jupyter notebooks with built-in code hinting, Anaconda, CUDA 11.8, the TensorRT inference accelerator for Tensor Cores, CuPy (a GPU drop-in replacement for NumPy), PyTorch, PyTorch Geometric for graph neural networks, TensorFlow 2, TensorBoard, and OpenCV, for accelerated workloads on NVIDIA Tensor Cores and GPUs.
⚠️ This repo has been deprecated. Please use the Deep Learning Ultra container instead.
For updated code go to: https://github.com/salinaaaaaa/Deep-Learning-Ultra
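As a taste of the stack, CuPy mirrors the NumPy API, so the same array code runs on either backend. A minimal sketch (the CPU fallback to NumPy is an illustration for running outside the container):

```python
# CuPy is a drop-in replacement for NumPy: identical calls, GPU-backed arrays.
try:
    import cupy as xp  # GPU arrays on NVIDIA hardware (inside the container)
except ImportError:
    import numpy as xp  # CPU fallback when CuPy is not installed

a = xp.arange(6, dtype=xp.float32).reshape(2, 3)
b = xp.ones((3, 2), dtype=xp.float32)
c = a @ b  # matrix multiply, same expression on either backend
print(c.shape)        # (2, 2)
print(float(c.sum())) # 30.0
```

Code written against the `xp` alias like this is backend-agnostic, which makes it easy to develop on a laptop and run accelerated inside the container.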
Mount your working directory to the container's /app folder. If your GPU's compute capability is not 7.5, change -DCUDA_ARCH_BIN=7.5 in the OpenCV flags within the Dockerfile and rebuild the image. Inside Jupyter, press Tab to see what methods you have access to (built-in code hinting).
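As an illustration, the OpenCV build step in the Dockerfile might look like the hypothetical excerpt below; the surrounding CMake options are elided, and 8.6 stands in for an Ampere-class GPU (use your own card's compute capability):

```dockerfile
# Hypothetical excerpt of the OpenCV CMake step in the Dockerfile.
# CUDA_ARCH_BIN must match your GPU's compute capability:
# 7.5 = Turing (RTX 20xx, T4), 8.6 = Ampere (RTX 30xx, A40).
RUN cmake \
    -D WITH_CUDA=ON \
    -D CUDA_ARCH_BIN=8.6 \
    ..  # plus the other OpenCV flags already in the Dockerfile
```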
You must install nvidia-docker2 and all its dependencies first (install tutorial: https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)#ubuntu-distributions-1). Assuming that is done, run:
sudo apt-get install nvidia-docker2
sudo pkill -SIGHUP dockerd
sudo systemctl daemon-reload
sudo systemctl restart docker
How to build and run this container:
docker build -t <container name> .
(note the trailing . after the build command)
If you get an unauthorized error from the docker pull command inside the container, try:
$ docker logout
...and then run or pull again. As it is a public repo, you shouldn't need to log in.
Run the image, mount the volume for Jupyter and the /app folder for your favorite IDE, and finally expose port 8888 for Jupyter Notebook
(and 6006 for TensorBoard):
docker run --rm -it --gpus all --user $(id -u):$(id -g) --group-add container_user --group-add sudo -v "${PWD}:/app" -p 8888:8888 -p 6006:6006 <container name>
If on Windows 10:
winpty docker run --rm -it --gpus all -v "/c/path/to/your/directory:/app" -p 8888:8888 -p 6006:6006 <container name>
Disclaimer: you should be able to use the runtime argument on Docker 19+ as long as the NVIDIA runtime is installed and configured in the daemon configuration file:
Install nvidia-docker2 package https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)#ubuntu-distributions-1
Open another SSH tab, exec into the container, and check that your GPU is registering in the container and CUDA is working:
Get the container id:
docker ps
docker exec -u root -t -i <container id> /bin/bash
nvidia-smi
nvcc -V
tensorboard --logdir=/app --bind_all
TensorBoard 2.1.0 at http://af5d7fc520cb:6006/
Just replace af5d7fc520cb (the container hostname) with localhost and open the URL in your browser to see the TensorBoard dashboard.
AppArmor on Ubuntu has security issues with Docker, so remove Docker from AppArmor on your local box (it does not hurt the security of your computer):
sudo aa-remove-unknown
Install the nvidia-container-runtime package; install and setup configuration instructions are here: https://github.com/NVIDIA/nvidia-container-runtime.
sudo apt-get install nvidia-container-runtime
sudo vim /etc/docker/daemon.json
Then, in this daemon.json file, add the following content:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
sudo systemctl daemon-reload
sudo systemctl restart docker
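Before restarting Docker, you can sanity-check the daemon.json syntax; a minimal sketch using Python's standard-library json module (note that `json.loads` rejects a missing comma after the `"default-runtime": "nvidia"` entry):

```python
import json

# The daemon.json content registering the NVIDIA runtime; parsing it
# doubles as a syntax check before Docker is restarted.
daemon_config = """
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
"""
cfg = json.loads(daemon_config)  # raises ValueError on malformed JSON
print(cfg["default-runtime"])            # nvidia
print(cfg["runtimes"]["nvidia"]["path"]) # /usr/bin/nvidia-container-runtime
```

Equivalently, `python3 -m json.tool /etc/docker/daemon.json` validates the file in place from the shell.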
Method 2: Install the container runtime: https://github.com/NVIDIA/nvidia-container-runtime#ubuntu-distributions
Modify the config file: https://github.com/NVIDIA/nvidia-container-runtime#daemon-configuration-file