Don't waste time setting up a deep learning environment when you can get one with everything pre-installed:
- conda
- JupyterLab
- Matplotlib
- NLTK
- NumPy
- Pandas
- Plotly
- PyTorch
- scikit-learn
- Seaborn
- TensorFlow
- zellij
You can see the full list of tags at https://hub.docker.com/r/matifali/dockerdl/tags.
- Docker
- nvidia-container-toolkit[^1]
- Linux, or Windows with WSL2
```shell
docker run --gpus all --rm -it -h dockerdl matifali/dockerdl bash
```
```shell
docker run --gpus all --rm -it -h dockerdl -p 8888:8888 matifali/dockerdl jupyter lab --no-browser --port 8888 --ServerApp.token='' --ip='*'
```

Connect by opening http://localhost:8888 in your browser.
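Anything written inside the container is lost when it exits, so bind-mounting a host directory is useful for keeping your work. A minimal sketch (the host path and the `/home/coder/workspace` target are illustrative assumptions, not documented paths of the image):

```shell
# Mount the current directory into the container so work persists across
# runs; the target path /home/coder/workspace is an assumption based on
# the default USERNAME (coder)
docker run --gpus all --rm -it \
  -h dockerdl \
  -v "$(pwd)":/home/coder/workspace \
  matifali/dockerdl bash
```

Once inside, `python -c "import torch; print(torch.cuda.is_available())"` is a quick way to confirm the GPU is visible.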
```shell
git clone https://github.com/matifali/dockerdl.git
```
Modify the corresponding Dockerfile to add or remove packages.
> **Note**
> If you are building a custom image that uses `dockerdl-base` as its base image, you may have to rebuild `dockerdl-base` first. See the Build section.
The following `--build-arg` options are available for the `dockerdl-base` image.
| Argument | Description | Default | Possible Values |
|---|---|---|---|
| `USERNAME` | User name | `coder` | Any string or `$USER` |
| `USERID` | User ID | `1000` | `$(id -u $USER)` |
| `GROUPID` | Group ID | `1000` | `$(id -g $USER)` |
| `CUDA_VER` | CUDA version | `12.4.1` | |
| `UBUNTU_VER` | Ubuntu version | `22.04` | `22.04`, `20.04`, `18.04` |
> **Warning**
> Not all combinations of `--build-arg` are tested.
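As one concrete use of the arguments above, matching the container user to your host UID/GID avoids permission problems on bind-mounted volumes. A sketch (the values come from your shell at build time, not from the image defaults):

```shell
# Build the base image with the host user's name, UID, and GID so that
# files written to bind mounts from inside the container are owned by you
docker build -t dockerdl-base:latest \
  --build-arg USERNAME="$USER" \
  --build-arg USERID="$(id -u)" \
  --build-arg GROUPID="$(id -g)" \
  -f base.Dockerfile .
```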
Build the base image:

```shell
docker build -t dockerdl-base:latest --build-arg USERNAME=coder --build-arg CUDA_VER=12.4.1 --build-arg UBUNTU_VER=22.04 -f base.Dockerfile .
```
Then build the image you want, using `dockerdl-base` as the base image:

```shell
docker build -t dockerdl:tf --build-arg TF_VERSION=2.12.0 -f tf.Dockerfile .
```

or

```shell
docker build -t dockerdl:torch -f torch.Dockerfile .
```
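A locally built image can then be run the same way as the prebuilt one, substituting the tag from the build step (the `dockerdl:torch` tag below comes from the example above):

```shell
# Run the locally built PyTorch variant with GPU access
docker run --gpus all --rm -it -h dockerdl dockerdl:torch bash
```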
- Install [Coder](https://github.com/coder/coder).
- Use the [deeplearning template](https://github.com/matifali/coder-templates/tree/main/deeplearning), which references these images, and follow the instructions in the template repository.
If you find any issues, please feel free to open an issue or submit a PR.
[^1]: This image is based on nvidia/cuda and uses nvidia-container-toolkit to access the GPU.