How to Tell That Your Docker Setup Is Broken
Docker always had a very compelling first-usage story. Just install Docker, and you can run your first containerized application right away. Want to create your own image? Just write a Dockerfile with a few lines.
Easy, convenient and, unfortunately, not that simple.
Getting started with Docker is easy, but it takes time and experience to avoid even the most common ways to build broken images or run unreliable and insecure containers.
Here are a few symptoms and questions you can use to find out whether you might be having Docker issues without realizing it.
You frequently rebuild your Docker image during development
Development workflows and deployment workflows serve different purposes. You don’t need to rebuild your development Docker image on every code change; if you find yourself doing so, rethink your workflows instead.
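One common alternative to constant rebuilds is bind-mounting your source code into the container, so changes show up immediately. A minimal sketch of a development compose file, assuming a hypothetical `web` service with code in `./src`:

```yaml
# docker-compose.yml (development sketch)
services:
  web:
    build: .
    volumes:
      # Bind-mount the source directory: code changes are visible
      # inside the container right away, no image rebuild needed.
      - ./src:/app/src
    ports:
      - "8000:8000"
```

The image still gets rebuilt when dependencies change, but day-to-day code edits no longer trigger a rebuild.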
Did you choose to use Alpine without giving it more thought?
Going with Alpine as a default choice can lead to trouble down the line, depending on your language of choice. If you’re using Python for example, you might stumble into bugs because Alpine uses musl instead of glibc, and they don’t behave identically. Consider using a “classic” base image instead.
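For Python in particular, prebuilt wheels target glibc (manylinux), so on Alpine pip often falls back to compiling packages from source. A sketch of a Debian-slim-based Dockerfile instead, assuming a hypothetical app with an `app.py` entry point:

```dockerfile
# Debian-based "slim" image: glibc, so manylinux wheels install directly.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```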
Sometimes you just need to restart everything
Docker can make it easy to create reproducible, reliable environments. If they are misconfigured and something fails silently, restarting can be the only way to get them working again. If you are addicted to restarts, there might be serious underlying problems you shouldn’t ignore.
Containers run as root
Unless you’re using a well-made image, your dockerized process will almost always run as root. If you haven’t put in the effort to create a non-root user for your dockerized application, your process is running as root, and it almost certainly shouldn’t be.
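Creating and switching to a non-root user takes just a few lines. A sketch for a Debian-based image, with `appuser` and `app.py` as hypothetical names:

```dockerfile
FROM python:3.12-slim
# Create an unprivileged user for the application.
RUN useradd --create-home appuser
WORKDIR /home/appuser/app
# --chown makes sure the app files belong to the new user.
COPY --chown=appuser . .
# Everything from here on (and the running container) uses this user.
USER appuser
CMD ["python", "app.py"]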
You never heard of an init process
PID 1 is special in any Linux environment, and it has certain responsibilities (reaping zombie processes, handling signals) which most processes don’t handle well. As with the root user: if you haven’t taken care to put an init process in place, your dockerized app is running as PID 1. Tini (by now built into Docker) can help you here.
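The quickest fix is `docker run --init`, which uses the tini bundled with Docker. To bake it into the image instead, a sketch for a Debian-based image (hypothetical `app.py` entry point):

```dockerfile
FROM python:3.12-slim
# Install tini so it can run as PID 1 inside the container.
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
# tini becomes PID 1; "--" passes the CMD through as the child process.
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["python", "app.py"]
```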
Your CMD is written in shell form
A frequent gotcha, which causes an additional (unnecessary) shell process to be launched if you don’t watch out. Does your CMD line look like this?
# NO!
CMD some commands go here
That’s the shell form: Docker wraps your command in /bin/sh -c, and that intermediate shell won’t forward signals to your dockerized app. Use the exec form instead:
# better
CMD ["some", "commands", "go", "here"]
Containers react to signals slowly
This might be the reason why you need to wait when stopping your docker-compose services. The signal doesn’t reach your process, and the container gets killed after a timeout. Having an init process and handling signals in your entrypoint script will help here.
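If you use a custom entrypoint script, the key detail is `exec`: it replaces the shell with your application, so the app receives signals directly instead of them stopping at the wrapper shell. A sketch of a hypothetical entrypoint.sh:

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: do any setup, then hand over to the app.

# ... setup work (migrations, config templating) would go here ...

# exec replaces this shell with the real command, so SIGTERM from
# `docker stop` reaches the app immediately instead of timing out.
exec "$@"
```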
Your images are slow to build
This is a smell more than anything else. Sometimes images just take a long time to build. Sometimes you need to use the Docker cache better, or you might want to look into what BuildKit has to offer for those sweet, sweet cache mounts.
Docker workflows feel very tedious
“Don’t make me think!”
Seriously, coding is hard enough. Why make it harder by having to remember lengthy commands to interact with your dockerized apps? At the very least, you might want to use a simple Makefile to simplify frequent workflows. A proper README section is a must.
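A sketch of such a Makefile, wrapping long Docker commands behind short targets (the `web` service name is a hypothetical example):

```makefile
# Hypothetical Makefile: short names for frequent Docker workflows.
.PHONY: build run shell logs

build:
	docker compose build

run:
	docker compose up -d

shell:
	docker compose exec web /bin/sh

logs:
	docker compose logs -f web
```

Now `make shell` is all anyone has to remember, and the Makefile itself documents the workflows.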
Dockerized applications are hard to debug
This can be a side effect of building “too small” images. Sometimes it’s cheaper to include useful tooling than to squeeze a few more MB out of your Docker image. Shaving megabytes at all costs is a sign of optimizing for a single (easy to measure) metric, instead of aiming for a good mix of tradeoffs to make your development life easier.
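One way to get both: a multi-stage Dockerfile where production stays lean, but a separate dev stage adds debugging tools. A sketch (stage names and tool choices are illustrative), built with `docker build --target dev`:

```dockerfile
# Lean production stage.
FROM python:3.12-slim AS prod
WORKDIR /app
COPY . .
CMD ["python", "app.py"]

# Dev stage: same app, plus tools for poking around inside the container.
FROM prod AS dev
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl procps strace \
    && rm -rf /var/lib/apt/lists/*
```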
You never used a linter over your Dockerfiles
Hadolint is great! It can take a look at your Dockerfile and point out easy things you’re missing. There are other tools which can automatically help you to spot other flaws as well.
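You don’t even need to install it; the official hadolint image can lint a Dockerfile straight from stdin:

```shell
# Lint your Dockerfile using the official hadolint image.
docker run --rm -i hadolint/hadolint < Dockerfile
```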
I hope this list of symptoms and questions helped to spark your curiosity, and will lead you on to making sure that the way you’re using Docker is sound enough for your requirements. (I’m a firm believer in aiming for good-enough solutions myself.)