You Don't Need to Rebuild Your Development Docker Image on Every Code Change
Local development in Docker can feel really slow. All you wanted was reproducible environments, but the price you pay is painful waiting times and extra commands to type.
If you are using docker build frequently and your containers need to be restarted a lot, this post will help you to save some time.
Mount Your Code Directory Into the Container
While it makes sense to create a complete Docker image when deploying your code, you don’t need to do it when developing.
For production, you want a complete build artifact for each deployment. Your development environment, however, should be designed to allow for quick iterations.
Instead of creating a new image on each change, you can share your code with a running container by using bind mounts. Here’s how you can mount a local directory “./source_dir” when starting a new container using the Docker CLI:
$ docker run -it -v "$(pwd)/source_dir:/app/target_dir" ubuntu bash
# OR
$ docker run -it --mount "type=bind,source=$(pwd)/source_dir,target=/app/target_dir" ubuntu bash
You can read more about it in the Docker docs about bind mounts. The -v flag is the older way to do it: you give it an absolute path to a host directory and, after a colon, the path where it should be mounted inside the container. The --mount flag is newer and a bit more verbose.
We look up the current directory dynamically using $(pwd). When you run the command, the $(pwd) part is replaced by the path of your current directory.
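For example, if you would rather spell out the host path yourself instead of relying on $(pwd), an equivalent call could look like the sketch below (the /home/me/project path is only a placeholder for wherever your code actually lives):
$ docker run -it -v "/home/me/project/source_dir:/app/target_dir" ubuntu bash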
If you want to save on typing, consider using docker-compose and a docker-compose.yml file to configure your Docker containers. Here’s an example docker-compose.yml file mounting a local directory:
version: '3'
services:
  example:
    image: ubuntu
    volumes:
      - ./source_dir:/app/target_dir
    command: touch /app/target_dir/hello
The example above will create a new file in the shared folder and exit the container when you run docker-compose up.
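If you would rather get an interactive shell in that environment instead of running the one-off touch command, you can override the command when starting the service. Here is a quick sketch using the example service above:
# Start the example service with a shell instead of its default command.
# The ./source_dir bind mount from docker-compose.yml still applies.
$ docker-compose run --rm example bash
Anything you create under /app/target_dir inside that shell shows up in ./source_dir on your machine, and vice versa.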
Faster Already!
If you use bind mounts as shown above to share your project directory with a running container, you’ll be able to reuse your dev Docker image a lot. When the container starts, the contents of the local directory shadow whatever the image has at that path. You only need to build the image once, and you can use it until the installed dependencies (like Python packages) or OS-level packages need to change, not every time your code is modified.
Just because you’re mounting the code directory does not mean you can’t ADD code to the image. It’s perfectly fine to build your image with the requirements.txt file and install Python dependencies based on it.
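Here is a minimal Dockerfile sketch of that idea for a Python project (the python:3.11-slim base image and the app.py entry point are assumptions for illustration, not something prescribed by the examples above):
FROM python:3.11-slim
WORKDIR /app
# Copy only the dependency list and bake the packages into the image.
# This layer only needs rebuilding when requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# No COPY of the source tree: the code is bind-mounted into /app at runtime.
CMD ["python", "app.py"]
You rebuild this image when requirements.txt changes; for everything else, the bind mount on top of /app is enough.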
If you run a development server inside the container, it will pick up changes to the mounted code files as they happen. You don’t need to restart it: the folder is shared, and the changes you make locally are noticed quickly by the dockerized process (as long as it’s built to watch for them, which most development servers are).
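As a rough sketch, a docker-compose service for a Flask project could look like this (Flask, the app.py module inside source_dir, and port 5000 are assumptions here; Flask would need to be listed in requirements.txt):
version: '3'
services:
  web:
    build: .
    environment:
      - FLASK_APP=app.py
    volumes:
      - ./source_dir:/app
    ports:
      - "5000:5000"
    # The dev server watches the mounted files and restarts when they change.
    command: flask run --host=0.0.0.0 --port=5000 --reload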
Reducing the number of times you have to rebuild your Docker image is the easiest first step towards speeding up your dockerized development workflow. There are quite a few more techniques and tricks you can use to make your experience even better, but that’s a topic for another time, once the basics are in place.
In Conclusion
You don’t need to rebuild your Docker image in development for every tiny code change. If you mount your code into your dev container, you can skip building a new image on each change and iterate faster.
It’s a great feeling when you make changes and see the results right away! Using a bind mount to share code between your local machine and the container is a great first step. There are a lot more tricks you can use to get the most out of your dockerized development experience.