How to Use the "docker" Docker Image to Run Your Own Docker daemon

There exists on Docker Hub a Docker image called docker. It comes in two flavors, "stable" and "dind" (Docker-in-Docker). What is this image for, and what is the purpose of these two different image tags?
Docker, it's important to note, uses a client/server architecture. That is to say, when you run docker on the command line, you're using the Docker client, which connects to a Docker daemon (usually listening on localhost port 2376 with TLS enabled, or 2375 without). Simply put, the stable flavor of the docker image is intended to be used as a Docker client, and the dind flavor is intended to be used as a Docker daemon.
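The client/server split is easy to see for yourself. Here's a quick sketch (it assumes a Docker daemon is already running on your machine, and the TCP example only works if a daemon actually listens on that port):

```shell
# The "Client" section of this output describes the CLI binary
# itself; the "Server" section describes the daemon it reached.
docker version

# DOCKER_HOST tells the client which daemon to talk to. Unset, it
# defaults to the local Unix socket (/var/run/docker.sock). Set, it
# can point at any reachable daemon -- e.g. one listening on TCP:
DOCKER_HOST=tcp://localhost:2375 docker version
```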
This means that you can run new instances of the Docker daemon and Docker client to create your own isolated Docker workspace. While typically not needed on development machines or even in deployed environments, this can be quite handy for continuous integration (CI) environments where isolation from other build jobs is a must.
Let's briefly explore how these two Docker images work.
First, start up an instance of the docker:dind image:
docker run --privileged -p 12375:2375 -e DOCKER_TLS_CERTDIR="" docker:dind
You'll notice that this image requires the --privileged flag to extend additional privileges to the container. We're also telling Docker (on your computer) to forward localhost port 12375 to port 2375 in the container. Finally, for brevity's sake, we passed -e DOCKER_TLS_CERTDIR="" to tell the docker:dind image to start with TLS disabled. (Using TLS is now the recommended and default configuration; more on this below.)
You should see the output of the Docker daemon as it starts, likely ending with something like this:
time="2020-01-30T13:59:33.999230000Z" level=info msg="API listen on [::]:2375"
time="2020-01-30T13:59:33.999501800Z" level=info msg="API listen on /var/run/docker.sock"
Once the container is launched, you can try connecting to your new Docker daemon from the command line on your computer:
DOCKER_HOST=tcp://localhost:12375 docker ps
You should see an empty list of containers, since this Docker daemon is isolated from the Docker daemon running on your computer. You could now use this daemon to build Docker images (or do anything else that requires the docker command) in an environment that is isolated from the Docker daemon on your computer.
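For example, here's a minimal sketch of building and running an image against the isolated daemon (the Dockerfile contents and the isolated-test image name are made up for illustration):

```shell
# Everything below talks to the dind daemon on port 12375,
# not the Docker daemon on your computer.
export DOCKER_HOST=tcp://localhost:12375

# A throwaway Dockerfile, just to have something to build:
cat > Dockerfile <<'EOF'
FROM alpine:3
CMD ["echo", "built against the isolated daemon"]
EOF

docker build -t isolated-test .
docker run --rm isolated-test

# The image exists only inside the dind container; running
# `docker images` against your computer's daemon won't list it.
docker images isolated-test
```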
If you needed to test a different version of the Docker client (and you're on a Mac; see the note below), you could also use the default docker image to connect to your new Docker daemon, like so:
$ docker run -it docker
/ # DOCKER_HOST=tcp://host.docker.internal:12375 docker ps
This starts a new Docker container on your computer, and then inside that container, connects to the port you mapped in the earlier step.
Note: This has been tested successfully only on a Mac; unfortunately it is not currently possible to access the Docker host from within a container like this on Linux. If or when that changes, we'll update this post. In the meantime, the docker-compose approach described below should work on all operating systems.
Just for fun, we can also package all of this into a simple docker-compose.yml file that gives the two containers their own network to communicate on, and even supports TLS by sharing the client certificates with the client container through a volume mount:
version: "3"
services:
  docker:
    # Starts a Docker daemon at the DNS name "docker"
    # Notes:
    #   * This must be called "docker" to line up with the default
    #     TLS certificate name
    #   * DOCKER_TLS_CERTDIR defaults to "/certs"
    image: docker:dind
    privileged: yes
    volumes:
      - certs:/certs/client
  docker-client:
    # Provides a Docker client container, including the client
    # certs generated by the docker:dind container, above.
    # Notes:
    #   * The name of this container doesn't matter
    #   * DOCKER_CERT_PATH defaults to /certs/client, the
    #     same path where the docker:dind image generates the
    #     client certificate (and that we've mounted inside this
    #     container).
    # You can execute a shell inside this container by running:
    #   docker-compose exec docker-client sh
    image: docker
    command: sh -c 'while [ 1 ]; do sleep 1000; done'
    environment:
      DOCKER_HOST: tcp://docker:2376
    volumes:
      - certs:/certs/client
volumes:
  certs:
Save this to a file named docker-compose.yml in its own directory, and then run:
docker-compose up
You'll see the Docker daemon start, this time on port 2376 (the port for TLS connections, since we didn't disable TLS by setting DOCKER_TLS_CERTDIR to an empty value).
In a separate terminal, start a shell in the docker-client container:
docker-compose exec docker-client sh
Now, you should be able to execute commands against your dedicated Docker daemon without manually setting any environment variables, and the communication will be encrypted with TLS.
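For example, inside that shell you might run the following (a sketch; alpine is just a convenient small image to pull):

```shell
# Both commands go over the TLS connection configured by the
# DOCKER_HOST variable and the mounted client certificates.
docker version
docker run --rm alpine:3 echo "hello from the isolated daemon"
```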
While some of this may feel like overkill for a development environment, TLS is now the default for the docker:dind image, and it's helpful to become familiar with how this works for troubleshooting CI runners, or any other environment that uses the docker:dind image.
To clean up when you're done testing, you can run docker-compose down to remove any containers started with docker-compose, and manually inspect the output of docker ps to find any containers still running that you may wish to stop (via docker stop). Just make sure to run those commands directly on your computer, not within one of the nested Docker client containers.
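Putting those cleanup steps together (run these on your computer, not inside a nested client container):

```shell
# Stop and remove the containers and network that
# docker-compose created (run in the docker-compose.yml dir):
docker-compose down

# Look for any leftover containers from the earlier steps:
docker ps

# Stop any you no longer need, by ID or name, e.g.:
# docker stop <container-id>
```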
I hope this tutorial has helped you learn something new about Docker. If you enjoyed this post, you might also want to check out my post on building a production-ready Dockerfile for Django. Thanks for reading, and please comment with any feedback below!