
Running Django and PostgreSQL in a Single Docker Container

When working with VMs, it’s not uncommon to put everything onto a single box. When deploying to a server, this is a common pattern as well.

However, when it comes to Docker, you shouldn’t cram all of your applications into a single container. Let’s break this issue down into small, digestible pieces.

The Short Answer

Sure, you can put everything into a single container. It can work, but you should reconsider. You’ll only get the most out of Docker if you use it the way it was designed to be used.

One part of that is running a single service, or even a single process, per container.

Why Are VMs And VPS Different?

VMs are “heavier” than containers. Starting a new one requires more resources, takes more time, and is harder to do without proper tooling. The same goes for virtual private servers (VPS).

When working with containers, on the other hand, the overhead of running multiple containers is negligible. The fact that bringing up a new container is fast, and consumes little more than the containerized application itself, is one of the reasons containers became so popular in the first place.

What Belongs in a Single Container?

When it comes to containers, you want to have one container for each logical component.

Some people argue that it should be one per process, but one container per “service” is quite alright as well. Here’s a useful rule of thumb when it comes to deciding what belongs together:

  • Will the lifetime of the “bundled things” be similar? For example, you’ll want to restart your application server whenever there’s a new code version to deploy, but the database doesn’t care about that. It would much rather keep on running, thankyouverymuch.
  • Will the “bundled things” be scaled together? You might want to run 3 instances of your application server, but meanwhile the Nginx reverse proxy will be just fine with a single instance.

Subdividing Your Django Setup Into Containers

Now that we’ve established a rule of thumb for what can stay together, how does this look for a Django application? Here’s an example:

  • Your app, running via Gunicorn - both in the same image. A new version of the image is built whenever there’s a new version of your app to ship.
  • Your PostgreSQL database (even better: use a managed service when the data becomes critical, or consider running the database without Docker).
  • A Redis instance for quick caching needs.

The list could go on, but there’s not much more to it. Have a new service? Just put it into its own container.
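
To make this concrete, here’s a minimal docker-compose sketch of such a layout. The service names, the myproject module path, the image tags and the credentials are placeholders assumed for illustration; adapt them to your own project.

    # docker-compose.yml - one container per logical component
    version: "3.8"

    services:
      web:
        # Your Django app plus Gunicorn, baked into a single image
        build: .
        command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
        ports:
          - "8000:8000"
        depends_on:
          - db
          - redis

      db:
        image: postgres:15
        environment:
          POSTGRES_DB: myproject
          POSTGRES_USER: myproject
          POSTGRES_PASSWORD: change-me
        volumes:
          # Keep the data around when the container is recreated
          - postgres_data:/var/lib/postgresql/data

      redis:
        image: redis:7

    volumes:
      postgres_data:

With a file like this, docker-compose up --build brings all three containers up and rebuilds the app image when your code changes, while the database and Redis containers are left alone as long as their configuration stays the same.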

But Working With Multiple Containers is Hard!

Don’t worry: just because it seems hard at the moment doesn’t mean it has to stay that way.

It’s a very valid objection, but more than anything else, it points to a tooling gap. If you want to work with multiple containers, check out docker-compose. It’s a handy tool that makes it easy to bring up and configure multiple containers that talk to each other - much like the compose file sketched above.

When it comes to keeping your containers running, a good first step is a process supervisor like systemd. Need to run your containers across multiple machines, with lots of services? Maybe it’s time to transition to an orchestrator like Swarm or Kubernetes. They can help you distribute the load across multiple machines, and make your life easier after an initial learning investment. Otherwise, look into configuration management tools like Ansible, or Infrastructure as Code approaches.
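
If you go the systemd route, a unit that keeps a single container alive could look roughly like the sketch below. The unit name, container name, image and port are made-up placeholders, and the docker run options will need adjusting to your setup.

    # /etc/systemd/system/myapp-container.service (hypothetical example)
    [Unit]
    Description=My Django app container
    After=docker.service
    Requires=docker.service

    [Service]
    # Remove any leftover container from a previous run ("-" means failures are ignored)
    ExecStartPre=-/usr/bin/docker rm -f myapp
    # Run in the foreground so systemd can supervise the process
    ExecStart=/usr/bin/docker run --name myapp -p 8000:8000 myapp:latest
    ExecStop=/usr/bin/docker stop myapp
    Restart=always

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now myapp-container, and systemd will bring the container back up if it crashes or the machine reboots.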

In Conclusion

Is it okay to put multiple applications into a single Docker image? While you certainly can do that, you probably don’t want to do so in the long run.

Instead, consider splitting your setup into multiple containers which talk to each other. Each of those containers should be focused on running a single service. It’s fine if multiple processes run at the same time within a single container. Just make sure that the applications you want to put into the same Docker image will be scaled together and share the same lifecycle. If not, or if you find yourself reaching for a process supervisor (like supervisord, systemd or upstart) inside your image, you probably want to split it up instead. It will make your life easier in the long run.

I hope this article has helped you learn more about the Docker way of doing things, and to understand the reasoning behind it. If you want to level up your skills around Docker and deployment, make sure to subscribe to the mailing list!
