A server with Docker, part 5.1: Towards a simple web app container
The fifth part in a series about the process of creating a flexible, multi-purpose webserver with Docker on a Digital Ocean droplet. Unlike the previous articles, this will be a series of smaller, more focused posts. In this one, we will take a look around, make some arbitrary choices, and get started working on a container setup able to harbour a very simple webapp.
In the previous post, building on the progress made before, we got Nginx serving static content and a basic Gitolite installation taking care of all basic code versioning needs. Meanwhile, a Docker daemon is already up and running on the server, with nothing to do! In this post, I would like to start on the long overdue topic of getting useful containerized applications to do some work. As a first goal, let’s take a very simple dynamic web application written in Python and put it into a container. Simple in the sense that it will produce dynamic responses, will not require any complex auxiliary services, and will be stateless. In addition, the app is assumed not to be mission critical, so we can skip redundancy and availability considerations entirely for now. Overall, the closer it is to ‘The Twelve-Factor App’ methodology, the better. By the way, if you have not had the chance to read this particular set of guidelines, I can only highly recommend doing so. It was written by the glorious people behind Heroku, is based on real experience, and is very practical.
Alright, But How?
In the case of a web app built, for example, with Flask, we will want a few components to run it properly. Important building blocks that come to mind are a way to serve static content and route traffic (Nginx), something to actually run the Python application in a sensible manner (uWSGI), and the ability to phone home with emails meant for internal consumption, for simple monitoring and debugging purposes. Some null-mailer setup (only able to send out mail) will be absolutely sufficient for this.
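To make the uWSGI piece a bit more concrete, a minimal invocation might look like the following sketch. The module path `app:app`, the socket location and the process count are assumptions for illustration, not part of any setup described in this series so far:

```shell
# hypothetical example: run a Flask app (app.py exposing an object named
# "app") under uWSGI, listening on a unix socket that Nginx can later
# proxy to via uwsgi_pass
uwsgi --socket /tmp/uwsgi.sock \
      --chmod-socket=664 \
      --module app:app \
      --master \
      --processes 2
```

The `--master` flag gives us a supervising master process, which plays nicely with being run under a process supervisor later on.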
The only question left is how to accommodate this setup in terms of containers. There are two prevalent approaches to structuring containerized applications, and either can be preferable, depending on the task at hand. They sit at the two ends of the commonly accepted spectrum, which ranges from single-process containers to single-service containers. In the second case, logical services are meant, each of which can in turn consist of multiple OS processes. What is generally frowned upon is treating containers like pseudo-VMs: introducing manual changes after they have been started, thus making them unique and no longer disposable, as well as any other kind of pet-like handling in the pets-vs-cattle analogy. We will try to steer clear of that.
Based on the above constraints and choices, it will be best to go with a single-service container to get started. This will make the container setup easier to develop, operate and maintain. The main benefit of this approach is that we keep the setup and environment simple, with minimal operations overhead, while still harvesting most of the benefits of running containerized applications. Unlike with larger applications, we can skip considerations such as orchestration, practical scaling details, availability, log forwarding, serious monitoring and the wiring of external auxiliary services.
With all of this said, we can get to the practical side of laying the foundations for the new container! The phusion-baseimage is a solid starting point for a single-container web app. Among other things, the image includes a custom init process, the ability to gracefully run and terminate (supervise) multiple processes through the lightweight runit, and cron and syslog available out of the box, should they be needed. By the way, it is also a pretty nice read. As an alternative, we could just as well pick a plain Linux distribution image and go from there, but this would involve a bit more boilerplate work and yield approximately the same result, just with different components.
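As a taste of how runit supervision works in this baseimage: a service is simply a directory under /etc/service containing an executable script named run, which execs the process in the foreground so runit can supervise and restart it. A hypothetical run script for the uWSGI service we will eventually need could look like this (the uwsgi binary path and ini location are illustrative assumptions):

```shell
#!/bin/sh
# /etc/service/uwsgi/run -- hypothetical runit service script.
# runit starts every executable "run" file found under /etc/service/<name>/
# and restarts the process if it dies. "exec" replaces the shell, so runit
# ends up supervising uWSGI directly rather than an intermediate shell.
exec /usr/local/bin/uwsgi --ini /etc/uwsgi/app.ini
```

Dropping such a script into the image (and marking it executable) is all the registration a service needs; my_init brings runit up, which in turn brings up every service directory it finds.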
There have been some in-depth discussions of many aspects and design choices of this baseimage. The gist of @jpetazzo’s take was that it is a nice gateway to start using containers, but might eventually get in your way. The aspect of running an SSH daemon inside a container caused the most controversy and was quickly followed by the emergence of nsenter, a tool to easily enter container namespaces. Recently, the Phusion folks published a blog post explicitly addressing the criticism, and there are regular revisions of the baseimage, which have in part addressed many of the critiqued points.
Although many of the fixed issues described on the baseimage sales page can either be solved in a different fashion (supervisor) or are not quite relevant anymore (the apt fix mentioned), I find the baseimage to be well crafted, great to get started with and perfectly suited for what we want to achieve. The container might end up distantly resembling “a small VPS”, but in this case that is exactly what we would like to achieve. As with everything else, it is fine if used thoughtfully and with caution, rather than always and for no particular reason :)
We will not use an SSH daemon running in the container; it is disabled by default in the current baseimage version anyway. We will go the extra step and nuke all traces of it, since for debugging purposes we can easily execute arbitrary commands in the same environment (even without nsenter, as the exec command is integrated into Docker by now). Syslog-ng, logrotate and cron might come in handy at some point and will be kept around, although they should not be strictly needed.
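For reference, this is what debugging via docker exec looks like in practice; the container name "webapp" is a placeholder, not something created in this series yet:

```shell
# list running containers to find the name or id of the one to inspect
docker ps

# run a one-off command inside the running container
# ("webapp" is a hypothetical container name)
docker exec webapp ps aux

# or open an interactive login shell in the same environment
docker exec -i -t webapp bash -l
```

This covers essentially everything an in-container SSH daemon would have been used for, without the extra process and key management.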
On To The Container
Let’s create our own baseimage by deriving it with small modifications! You can find the code on GitHub and the finished image on the Docker Hub Registry, ready to be pulled. The Dockerfile looks the following way:
```dockerfile
# based on https://github.com/phusion/baseimage-docker
FROM phusion/baseimage:0.9.16
MAINTAINER th4t

ENV REFRESHED_AT 2015-01-25

# set correct environment variables
ENV HOME /root

# disable SSH, this is not really necessary in the newest baseimage anymore
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

# clean up APT when done
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# the custom init process
ENTRYPOINT ["/sbin/my_init"]
# "--" starts all runit services, and executes the next command
# -l makes bash be invoked as a login shell
CMD ["--", "bash", "-l"]
```
The REFRESHED_AT variable is a trick from The Docker Book by James Turnbull, which has a few more very neat hints and is a great start to working with Docker. I hope that the comments will leave no open questions.
In addition, I usually create a Makefile for all Docker-related projects, to build and run images in a simple manner:
```makefile
build:
	docker build -t th4t/baseimage .

run:
	# you can detach and leave it running in the background with ctrl-p ctrl-q
	# later you can attach again with "docker attach container_id"
	docker run -i -t th4t/baseimage

.PHONY: build run
```
Try running the new container! Just by issuing ‘make run’, you will get into a bash shell and can take a look around. As soon as you exit the bash session, my_init will exit gracefully, bring all runit services down, and give useful output.
That’s it for now. In the next post, we will start building the actual Dockerfile for the web app container. It will be based on this baseimage, add the application code, our additional services and their configuration files. By the end of this series, we will have a perfectly self-contained container, able to serve simple dynamic webapps on a single exposed port. If you are interested in Docker and similar posts, please sign up to the mailing list below!