Docker + TensorFlow + Google Cloud Platform = Love
Make your life easier by Dockerising your TensorFlow setup
Docker, TensorFlow and Google Cloud Platform logos. Heart by Bohdan Burmich from the Noun Project.
Docker changed my engineering life. I have learnt to love that whale!
When I first installed TensorFlow with GPU support on my Windows laptop years ago, I was horrified at how complex and fragile the process was. I had to repeat this horrific process when I started dual booting Ubuntu on my laptop. I had to relive my past traumas when I got a GPU for my desktop.
What if there was an OS-agnostic way of running TensorFlow that would get you up and running in a matter of minutes?
This is the focus of this post! We will be using a Google Cloud Platform (GCP) Compute Engine VM as our machine, but you could easily replace this VM with your very own laptop/desktop with an NVIDIA GPU.
Note: I will assume that you have a GCP account and that you have the GCP SDK installed so that you can run GCP commands from your terminal.
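A quick, optional way to confirm that the SDK is set up and pointing at the right project:
gcloud config list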
Topics we’ll be visiting
Here’s an overview of the topics that this article will cover:
- GPU quotas on GCP
- GCP Compute Engine VM startup scripts
- Our GPU-enabled TensorFlow Dockerfile
- GCP Cloud Build to build our Docker image in the cloud
- GCP Container Registry, the Docker Hub of GCP
- Testing if we have access to our GPU from within our Docker container
Let’s do this!
Do ya got some GPU quota?
When you first get started in GCP, you aren’t allocated a GPU to play with. If you try to make a VM with a GPU with insufficient quota, you’ll get an error telling you that your quota has been exceeded. So let’s fix this right now.
Go to:
IAM & Admin -> Quotas
In the Metrics drop-down, firstly click None. Then search for GPUs (all regions) in the text box and click on the result that appears. Tick the box in the list below and then click on EDIT QUOTAS.
Complete the form that appears to the right of your screen and make a request for at least one GPU.
Now we wait for our approval to come through. This should be quick - I was approved in less than 2 minutes!
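If you’d rather check your quota from the terminal, this optional one-liner filters the project-level quota list for the metric we just edited:
gcloud compute project-info describe | grep -B 1 -A 1 GPUS_ALL_REGIONS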
Building the VM
Once we have increased our quota, we can get to building a VM with at least one GPU. To accomplish this, we could either go into Compute Engine in the UI, or we could learn how to use GCP’s Cloud SDK. Let’s do the latter!
Say that we want to create a VM named deep-docker in the zone us-west1-b. Assuming we have installed the Cloud SDK, we can issue this command in our terminal:
gcloud compute instances create deep-docker \
--zone=us-west1-b \
--accelerator="type=nvidia-tesla-k80,count=1" \
--image-family "ubuntu-1804-lts" \
--image-project "ubuntu-os-cloud" \
--boot-disk-device-name="persistent-disk" \
--boot-disk-size=100GB \
--boot-disk-type=pd-standard \
--machine-type=n1-standard-4 \
--maintenance-policy=TERMINATE \
--metadata-from-file startup-script=./startup.sh
Don’t worry about the metadata-from-file startup-script=... argument for now. We will explore this in the next section.
Why have we chosen Ubuntu when we can create a VM with a container using gcloud compute instances create-with-container? Good question! This command creates a VM with a Container-Optimized OS based on Chromium OS. It’s a lot more complex to install NVIDIA drivers on such a VM, so we make our lives easier by choosing Ubuntu instead. If you’re keen to stick with the Container-Optimized OS, then see this repo for a GPU driver installation solution.
Before we can issue this command, we need to have a startup script present in our current directory. Let’s find out what this startup script is all about!
The startup script
Here is the full startup script.
The startup script takes care of a bunch of tricky things:
- It installs Docker and sets gcloud as the Docker credential helper. This will allow us to pull the Docker image that we’ll be building later from GCP’s Container Registry.
- It installs NVIDIA drivers onto the VM.
- It installs the NVIDIA Container Toolkit, which will allow our Docker container to access the GPUs on our VM.
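In sketch form, those three steps boil down to something like this. This is a minimal sketch only; the full script linked above handles the details, and the exact commands and package sources may differ:
#!/bin/bash
# Minimal sketch only -- see the full startup script linked above.

# 1. Install Docker and let it authenticate to Container Registry via gcloud
curl -fsSL https://get.docker.com | sh
gcloud auth configure-docker --quiet

# 2. Install NVIDIA drivers (Ubuntu's recommended driver for the attached GPU)
apt-get update && apt-get install -y ubuntu-drivers-common
ubuntu-drivers autoinstall

# 3. Install the NVIDIA Container Toolkit so containers can see the GPU
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list \
  > /etc/apt/sources.list.d/nvidia-docker.list
apt-get update && apt-get install -y nvidia-container-toolkit
systemctl restart docker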
Let’s finally issue our command and wait for our VM to finish building.
You can track the progress of the startup script by SSH-ing into your machine:
gcloud compute ssh whale@deep-docker --zone=us-west1-b
Once in your VM, issue this and watch your log stream:
tail -f /var/log/syslog
At some point, you should see something like this:
Apr 12 08:09:49 deep-docker startup-script: INFO Finished running startup scripts.
And this is where you can dance a little celebratory dance. The hardest part of this process is over!
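Before we involve Docker at all, here’s an optional sanity check: the NVIDIA drivers are now installed on the host, so nvidia-smi should work directly on the VM and list your K80:
nvidia-smi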
Making the startup script run once
An issue with our startup script is that it is run each time our VM boots up. If we frequently reboot our VMs, this will get unnecessarily time-consuming.
One way to make sure that our script is run once only is to remove it from our VM’s metadata using the gcloud CLI:
gcloud compute instances remove-metadata deep-docker --keys=startup-script
Another way to accomplish this is to follow the suggestion from here. This is the approach that I have taken. In the startup script, you will see that most of it is enclosed in an if statement:
# STARTUP_SUCCESS_FILE is set earlier in the script to
# /home/$LOGIN_USER/.ran-startup-script
if test ! -f "$STARTUP_SUCCESS_FILE"; then
    ...  # the time-consuming installation steps live here
    touch /home/$LOGIN_USER/.ran-startup-script
else
    echo "$STARTUP_SUCCESS_FILE exists. not running startup script!"
fi
We decide whether to run the body of our startup script based on whether a file named .ran-startup-script exists in a particular location. Upon the first boot, that file does not exist, so the body of the if statement is executed. If all goes well on the first boot of our VM, the .ran-startup-script file should get created by the touch line above. From the second boot onwards, all the time-consuming parts of our startup script won’t get executed. We can check /var/log/syslog to confirm that this is the case:
Apr 12 09:05:58 deep-docker startup-script: INFO startup-script: /home/whale/.ran-startup-script exists. not running startup script!
Apr 12 09:05:58 deep-docker startup-script: INFO startup-script: Return code 0.
The Dockerfile
Here is our Dockerfile, sketched below. It’s super simple!
- We use a TensorFlow GPU base image with Python 3. At the time of writing, that image is the tensorflow/tensorflow:2.1.0-gpu-py3 image.
- We install JupyterLab.
- We install some other Python packages.
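In sketch form, it looks something like this. The base image matches the one named above; the extra packages are a hypothetical selection, so swap in your own:
FROM tensorflow/tensorflow:2.1.0-gpu-py3

# JupyterLab for interactive work
RUN pip install --no-cache-dir jupyterlab

# Some other Python packages (a hypothetical selection)
RUN pip install --no-cache-dir pandas matplotlib scikit-learn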
We’ll now build this image.
Build the Docker image in the cloud
The TensorFlow image we’re using is about 2GB in size. Instead of building our Docker image locally and pushing it to Container Registry from our local machine, we’ll take advantage of the power of GCP and build it in the cloud!
The image that we will be building will be located at gcr.io/GCP_PROJECT_NAME/SOME_IMAGE_NAME. My project is called learning-deeply and I want to call the image tf-2.1.0-gpu, so I will issue this command in my terminal:
REMOTE_IMAGE_NAME=gcr.io/learning-deeply/tf-2.1.0-gpu \
&& gcloud builds submit --tag ${REMOTE_IMAGE_NAME} --timeout=15m
I specify a longer timeout to overcome a timeout issue I was experiencing. Let’s issue our command and watch our build take place!
We can monitor the progress of our build in the GCP Console’s Cloud Build section.
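If you prefer the terminal, you can also list recent builds and their statuses:
gcloud builds list --limit=5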
Once done, let’s head over to the Container Registry section, and we should see our beautiful image there!
Fire up our container and check for GPUs
This is exciting! I see you rubbing your palms in anticipation. Let’s see if our hard work has paid off.
Firstly, let’s SSH into our VM (see the startup script section for how to do this).
Let’s pull our Docker image into our VM! Issue a command similar to this one, replacing the image location with whatever you provided when issuing gcloud builds submit earlier:
docker pull gcr.io/learning-deeply/tf-2.1.0-gpu:latest
As we have already taken care of Container Registry authentication in our startup script, this should pull your image from Container Registry.
Next, let’s start up our container. Note that we pass a --gpus argument, which exposes all of the GPUs on our VM to our container:
docker run -it -d --name tf --gpus all gcr.io/learning-deeply/tf-2.1.0-gpu
Issue docker ps and we should see our container running!
Let’s now execute an interactive Bash shell on our container:
docker exec -it tf bash
You should now find yourself at a shell prompt inside the container.
Now cross your fingers and run this to check if we can access our GPU:
python3 -c "import tensorflow as tf;print(tf.config.list_physical_devices('GPU'))"
A bunch of text will be printed. But if you see something like this at the end, you know that you have succeeded, my friends:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
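As one final optional check, nvidia-smi also works from inside the running container (issued here from the host):
docker exec tf nvidia-smi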
Conclusion
Docker changed the way I work. Not only do I use it for my machine learning work, I also use it for my regular data analysis work and to build this site.
If your job title begins with “Data”, do yourself a favour and learn to use it. You might also learn to love the whale!
Until next time,
Justin