Docker can seem pretty overwhelming. It seems to do so much that it’s hard to say exactly what it’s for.
That’s because Docker is a tool which takes care of multiple jobs. If you look at each of those responsibilities on its own, you’ll have a much easier time getting your head around Docker.
The Three Jobs: Packaging, Distributing and Running
It’s easy to mistake a task you want to get done with the most popular tool for it. I struggled with seeing this myself, until I read the introduction to the excellent “Kubernetes in Action” book by Marko Lukša.
On the highest level, Docker has three main areas of responsibility:
- Packaging apps
- Distributing those packaged apps
- Running workloads
Let’s look at each of those and how Docker takes care of them.
Packaging
When you build a Docker image, you are effectively packaging your app together with its complete environment. The Docker image works as a build artifact, combining your code with all the direct dependencies it needs, in the form of OS packages and further local setup work.
A Dockerfile defines a sequence of commands and settings which build a new Docker image. Usually, you copy your own code into the image as part of the process.
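Here is a minimal sketch of what such a Dockerfile could look like for a hypothetical Node.js app - the base image, file names and commands are illustrative, not a prescription:

```dockerfile
# Start from a base image that already provides the runtime
FROM node:20-alpine

# Install any extra OS packages the app depends on (illustrative)
RUN apk add --no-cache curl

# Copy our own code into the image and install its dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Define how the packaged app should be started
CMD ["node", "server.js"]
```

Running `docker build -t my-app:1.0 .` next to this file turns the recipe plus your code into a named, reusable image.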
Distributing
Once you have built your Docker images, you usually want to use them on another machine. Docker registries are the most popular way for making built Docker images accessible.
After you have built your image, you can use docker push to upload the image (or just the new image layers) to a Docker registry of your choice. You give images distinct names and tags to make them easier to handle. Once an image is pushed, you can use docker pull from other machines to make it available locally. If an image is private, you authenticate with the registry via the Docker CLI before you can access it.
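As a rough sketch, the round trip could look like this - the registry host, team and image names are placeholders:

```sh
# On the build machine: tag the image for a specific registry,
# authenticate, and upload it (only new layers get transferred)
docker tag my-app:1.0 registry.example.com/my-team/my-app:1.0
docker login registry.example.com
docker push registry.example.com/my-team/my-app:1.0

# On any other machine: authenticate (for private images) and download it
docker login registry.example.com
docker pull registry.example.com/my-team/my-app:1.0
```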
If the distribution part was missing, you’d have to build new images on each host, or export them to tar archives and copy those files around. Possible, but not convenient at all.
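That manual fallback does exist, by the way: docker save and docker load move images around as plain tar archives. A sketch, with illustrative names and paths:

```sh
# Export the image to a tar archive, copy it to another host, and import it there
docker save -o my-app.tar my-app:1.0
scp my-app.tar user@other-host:/tmp/
ssh user@other-host "docker load -i /tmp/my-app.tar"
```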
Running
After you have packaged up the application you need, and have distributed it to the machine where it’s needed, you can use Docker to run it!
And like a real-world shipping container, the dockerized app shouldn’t be bothered by where it runs, as long as Docker is installed and the container is configured properly.
NOTE: Unless you run into 64/32 bit incompatibility or need very specific kernel features. But you probably won’t run into those anytime soon.
You simply specify the Docker image to use and start a container from it, providing the necessary configuration settings (such as environment variables, local directory mounts, etc.).
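A typical invocation might look like this - the image name, port numbers, variables and paths are placeholders:

```sh
# Start a container from the image, passing all configuration at runtime:
# publish a port, set an environment variable, and mount a local directory
docker run -d \
  --name my-app \
  -p 8080:3000 \
  -e DATABASE_URL="postgres://db.example.com/mydb" \
  -v "$(pwd)/config:/app/config" \
  registry.example.com/my-team/my-app:1.0
```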
In Conclusion
Docker is a tool which has many jobs. Once you know about its three main areas of responsibility - packaging, distributing and running - it becomes easier to understand what Docker is good for and to get a better overview of how to use it. Those three areas cover most aspects of basic modern deployment stories.
It’s valuable to distinguish a tool from the jobs it is meant to take care of - both to make it easier to understand, and to realize when the tool is no longer a good fit. Docker is neat, but not a cure-all. Sometimes it’s a great choice, and other times there are other approaches which might fit your situation better.
When it comes to deployments, for example, Docker won’t be the only tool you want to use. As your requirements grow more complex, you’ll need to use additional tools from the ecosystem to take care of responsibilities which Docker does not cover - for example docker-compose for running multiple connected local containers, Portainer for a simple GUI, or Kubernetes and Swarm for orchestration.
I hope this article has helped you to understand Docker a bit better, and keep in mind that there is a distinction between the tool and the jobs it is built for.