Should You Use The Same Dockerfile For Dev, Staging And Production Builds?
Nope, rather not. Here is the reasoning why.
Docker images can be a great way to create reproducible, automated environments. You’ve gotten your app to work locally with Docker - everything comes up with a single command, and you’re ready to move on. You’ll want to use Docker for development and for running your app in testing for now, learning enough to move to production eventually.
But what about the details? Should you try to use one single Dockerfile for all environments of the project or should you have individual ones? How many are too many? Won’t it be too much work to keep them all in sync?
An application has different environments where it needs to be functional. Each one handles the app in a slightly different way, with its own scenarios and goals. Before discussing how the Dockerfile should look, let’s take a quick glance at the different environments and what we want to do in each of them.
When developing on a local machine, you need changes you make to the code to be reflected in the development server right away. You want to be able to install new dependencies, have as much access as possible to debug without jumping through hoops, and iterate quickly. If you’re using Docker for your development environment, you’ll want to mount the code from your local machine into the container, run a development server and execute management commands frequently. Every once in a while, you’ll want to restart everything from scratch.
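As a rough sketch of what that can look like, here is a development-oriented Dockerfile. The stack (Python), the base image and the dev server command are assumptions for illustration - adjust them to your own project:

```dockerfile
# Development image sketch - assumes a Python app; adapt to your stack.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The code itself is NOT copied into the image - it gets bind-mounted
# at runtime, so edits on the host show up in the container right away.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

You’d then run it with a bind mount, along the lines of `docker run --rm -v "$(pwd)":/app -p 8000:8000 myapp-dev` (the `myapp-dev` tag is a placeholder).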
In a testing environment, you don’t need to have direct dev access. Instead, it’s meant to share results, find bugs and run automated tests. Starting a container from an image quickly is important to keep testing times low. You’ll usually want to have testing dependencies and tracing tools installed.
The staging and production environments, however, don’t need to be mutable. In fact, they should not be. You don’t need testing dependencies or direct access. They should be as similar as possible, differing only in the environment variables provided to each. Automated tests are executed on staging, but they should not require the setup to be different from production. Starting containers from images in a predictable and quick fashion, and having them run as stably as possible, is one of the most important requirements.
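A production-oriented Dockerfile, in contrast, bakes the code into the image. This is a sketch along the same assumed Python stack; the `gunicorn` command and module path are placeholders:

```dockerfile
# Production/staging image sketch - code is baked in, no dev or test
# dependencies, and all configuration comes from environment variables.
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image, so containers are immutable.
COPY . .

# Staging and production start from this exact same image; only the
# environment variables passed at runtime (e.g. DATABASE_URL) differ.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app.wsgi"]
```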
Looking back at those environments and their individual peculiarities, we can make an educated decision. Staging and production should use the same image, built from the same Dockerfile, to guarantee that they are as similar as possible. If image size is not of the essence, and we’re very sure that there will be no negative impact on performance, those could even contain testing dependencies and dev tools. In that case, we’d have one single Dockerfile which builds an image suitable for development, testing, staging and production.
I’d argue against using a single Dockerfile however. You don’t want your production environment to have more moving parts than are required. The effort of maintaining separate Dockerfiles is not that high, and might even be less than trying to craft one single Dockerfile which suits every environment. This will prevent silly issues and lower the chances of messing up in a hard-to-detect fashion. Having one Dockerfile for your production setup, and another one for testing which includes parts that are not necessary in production, is the way to go.
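One way to keep the two files from drifting apart is to base the testing Dockerfile on the production image and only add what’s missing. A sketch, assuming the production image is tagged `myapp:prod` and test dependencies live in a `requirements-test.txt` file:

```dockerfile
# Testing image sketch - builds on top of the production image, adding
# only the test dependencies which production should not carry.
FROM myapp:prod

COPY requirements-test.txt .
RUN pip install --no-cache-dir -r requirements-test.txt
```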
For your development environment, you could use your testing image with a few mounted volumes and binds to local directories, maintain another Dockerfile tuned to the task at hand, or choose another set of tools like Vagrant with Ansible if it suits your dev workflows and developer taste better. Take a look at Docker Compose for easily setting port bindings and environment variables in your dockerized development environment.
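A minimal `docker-compose.yml` for development could tie these pieces together. The service name, file paths and variables below are placeholders:

```yaml
# Development compose file sketch - bind-mounts the code and sets a
# port binding and environment variables for the dev server.
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev   # the development Dockerfile
    volumes:
      - .:/app                     # local code mounted into the container
    ports:
      - "8000:8000"
    environment:
      - DEBUG=1
```

With this in place, `docker compose up` brings the whole development environment up with a single command.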