Adding A Dockerized Application To Your Deployment Setup
A roadmap for starting to use Docker for your deployments the right way: how to switch your infra, and where to begin.
There are a lot of great services for deploying your application without having to set up servers by hand, or deal with the nitty-gritty of infrastructure. The good ones save time and effort, and make it easy to start delivering your product to your users. They are made to solve a particular problem for a particular kind of customer.
Chances are that as your company, product and team grow, you’ll approach the boundaries of “this is made for me” and go beyond them. You start noticing that fitting new extensions into the way you are doing things right now is getting harder, questions come up more often, and you’re spending way more time on issues which used to be taken care of for you.
You know that you need to go beyond your current setup - be it for a new third-party service you’ll need to run or a new internal app. In the current ecosystem, Docker turns out to be the self-diagnosed answer to those pains. The hope is that you can put whatever-needs-to-be-deployed into a Docker image, and run the container some place which takes care of everything. You can, once again, get around dealing with servers, keep your current workflows in place and get back to what matters most: building the product and growing the company.
Unfortunately, the decision to use Docker is often made for the wrong reasons: with too little information, without having assessed the complete situation, and without having asked the right questions. The results are lots of invested time, hidden technical debt, a complicated setup and almost-guaranteed problems with the live deployment. Most of those are due to your team having to deal with a supposedly-solves-everything technology they haven’t had the chance to get to know properly.
You can do better. Let’s dive into details.
There are a lot of misconceptions around Docker, and even more when you deal with the huge ecosystem of tools and services built around it. I blame the hype. It’s hard to get a complete picture which is not overly positive and glossing over real-world issues.
Your best bet is to talk to peers at other companies who have experienced the transition, and can tell a story or two of what it was like to migrate to a Docker-powered setup. Here are two misconceptions which you should be aware of:
“Using Docker is easy.” - It’s easy to get started and run something. But it’s also very easy to waste a lot of time figuring out everything you don’t know, which of your assumptions were wrong, how to dockerize your particular app (or service) and how to adjust your workflows to deal with it in the future. Plan on at least a good course or a book, plus some tinkering, to save time down the road.
“I won’t have to deal with server stuff.” - Oh hell yeah, you’ll have to. Just instead of running an upgrade on your servers (with Ansible or Salt, hopefully), you’ll have to rebuild your images (with an update step or by re-pinning dependencies) and redeploy. You have to create new images when there are important updates to the OS, libraries and tools. In addition, you will have to keep the Docker host itself up to date. To do it right, you will have to take care of all the usual issues as well: collect logs from your containers and not ignore them, configure your containers to stay up, and monitor them as part of your setup.
“It’s fire and forget. There are platforms which take care of everything.” - Nope. Sorry. Even if you are using an amazing service like Google’s GKE, you’ll have to know a lot about Kubernetes and do nitty-gritty stuff like setting up monitoring with Prometheus and keeping an eye on it. Services like Amazon EC2 Container Service may seem like a batteries-included solution, but they are not as simple as that. You’ll still need to learn the ins and outs of using containers, Docker and deployment.
Despite all of the above, you can probably benefit from using Docker in the long term. Docker is an amazing tool for making deploying stuff easier. You package your app into an image, or a set of images, and they don’t care whether you run them on a single machine or in a fancy scalable setup. Let’s look at achievable upsides, and see if they are good-enough reasons for you to keep going.
Automated, reproducible development environments - If you’re not using Vagrant to bring up one-click dev setups, you can just as well start by using docker-compose and Docker containers. It will be a bit harder though. When your app is Docker-friendly, and you have done the heavy lifting of wiring everything up, any developer can run a complete development setup on their local machine with one command.
Getting closer to dev/prod parity. - The setup may differ in details, but you can work with dependencies very similar to the ones used on your live system. Even if your production setup is not using Docker, you can start to build a development environment which resembles it a bit closer.
Portability. - The ability to make your app not-care what host machine it runs on. Be it a different Linux distro, or a cloud platform. Dockerizing your app is a great time to get closer to the twelve-factor methodology.
Isolation. - The ability to run different apps on the same machine without them getting in the way of each other. That said, containers share the host’s kernel, so they can still compete for kernel-level resources and affect each other that way.
Proper deployment workflows. - When switching to Docker, you will get an opportunity to reconsider your current deployment workflows. Tagged images are a good way to package up deployment artifacts - if you haven’t done so you will get the opportunity to do better.
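Some of these upsides become concrete with docker-compose. A minimal sketch of the one-command development environment, assuming a web app with a Redis dependency (service names, images and ports are made up):

```yaml
# Sketch only: `docker-compose up` brings up the app plus its dependency.
version: "3"
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - REDIS_URL=redis://redis:6379   # dev/prod parity via env config
    depends_on:
      - redis
  redis:
    image: redis:5-alpine
```

Once a file like this works, “run the dev setup” stops being a wiki page of manual steps and becomes a single command.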
Choose Your Battle
You don’t want to jump into running Docker in really-important production as soon as possible. If you can, pick the first option from this list:
- Use it for a small, isolated project. Deploy it and maintain it.
- Use it for your local development environment (instead of/with Vagrant). Replace pieces.
- Include it as a small, redundant part of your current setup. Have the real service as backup.
You will have a way better experience if you go slowly. Make sure that the thing you’re dockerizing is a good fit for the tech. In the best case, it’s a stateless app. Don’t assume that it will be available at all times. You should plan for failure.
So, You Want To Use Docker
Here is the approach I would recommend, if you are relatively new to Docker, want to migrate to it in the long term and want to use it to simplify your production setup.
If you want to have a new production dependency, and were hoping to use Docker for it right away, please reconsider and just set up a dedicated machine for the service for now, so development can continue. Yes, it will be tedious work, but you’ll have to go through those steps anyway. A big part of Docker is reproducible, reliable, automated workflows. Let’s start the right way.
While setting up this new machine, document all the steps you need to take from does-not-work to a fully functional state. Write them down and see if you can automate them via a simple bash script. Now, setting up more of those machines to be used for your various environments is not as painful, and you can point your current setups to those machines via environment variables. The day is saved, your team can develop stuff without being blocked by a flaky setup.
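The documented steps can be sketched as a small script, one function per step, so reruns are easy to follow and individual steps can be retried. The step bodies here are placeholders; replace the echoes with your real commands:

```shell
#!/bin/sh
# Sketch of automating documented setup steps - each step becomes a function.
set -e

install_packages() {
  echo "installing packages"    # e.g. apt-get install -y <your-service>
}

write_config() {
  echo "writing configuration"  # e.g. copy config templates into place
}

main() {
  install_packages
  write_config
  echo "provisioning done"
}

main
```

Even a script this simple beats undocumented manual setup: it doubles as documentation, and it makes bringing up a second or third machine a non-event.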
Make sure the team can bring up the new dependency for themselves without issues. Share the docs, the script or integrate it into your current Vagrant provisioning process. Shiny.
If you haven’t done so before - start using either Ansible or Salt to keep the configurations of those servers in sync and to provision them in the future, instead of a bash script. Now you only need to make changes in one place, instead of SSH-ing to each machine in turn.
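With Ansible, the same steps might turn into a playbook like this sketch (the package, template and service names are hypothetical placeholders):

```yaml
# Sketch of an Ansible playbook replacing the bash script.
- hosts: dependency_servers
  become: true
  tasks:
    - name: Install the service
      apt:
        name: your-service          # hypothetical package name
        state: present
        update_cache: true

    - name: Deploy the configuration
      template:
        src: your-service.conf.j2   # hypothetical template
        dest: /etc/your-service/your-service.conf
      notify: restart your-service

  handlers:
    - name: restart your-service
      service:
        name: your-service
        state: restarted
```

Run it against all your machines from one place, and configuration drift between environments stops being a problem you discover in production.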
Now, it’s time for Docker goodness. Start integrating Docker into your local development environment, by installing the Docker engine as yet another dependency. Try to create a Dockerfile, which uses the steps you documented previously to create an image. This will take you a while, you’ll learn a lot. If it seems to work, you can begin switching the local development app to work with the containerized dependency instead of the natively-installed one.
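As a sketch of what such a Dockerfile might look like, assuming the dependency is installed from distro packages - the base image, package name, config path and port are placeholders for whatever your documented steps actually use:

```dockerfile
# Sketch only - translate each documented setup step into a build step.
FROM debian:12-slim

RUN apt-get update \
 && apt-get install -y --no-install-recommends your-service \
 && rm -rf /var/lib/apt/lists/*

# Configuration from your setup notes, baked into the image.
COPY your-service.conf /etc/your-service/your-service.conf

EXPOSE 8080
CMD ["your-service", "--config", "/etc/your-service/your-service.conf"]
```

Note that updates now happen by rebuilding this image and redeploying, not by upgrading a running container.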
You’ll need to learn how containers are not exactly VMs, how you’ll need to use volumes for certain directories, how to pass variables, and how to make them play nice with your main app.
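The wiring usually boils down to a handful of docker run flags. A sketch with made-up names and paths, assembled as a string here so the pieces are visible without needing a Docker daemon:

```shell
#!/bin/sh
# Sketch of wiring a containerized dependency: a port mapping, a volume
# for persistent state, and an environment variable. Names are made up.
CMD="docker run -d \
  --name myapp-redis \
  -p 6379:6379 \
  -v /srv/redis-data:/data \
  -e TZ=UTC \
  redis:5-alpine"
echo "$CMD"
```

The volume is the part people forget first: without it, the dependency’s state lives and dies with the container.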
This will give your team the chance to start getting to know Docker and adjusting their workflows. They’ll also start stumbling over stuff which you missed, and gotchas which you don’t want to find out about in production. You will notice that dealing with Docker in development is an investment, and your processes will need to change. Don’t forget that sharing knowledge in an intentional fashion on common issues will save time. How to interact with the dockerized dependency, how to get logs, how to debug an issue - that’s stuff which your team needs to be comfortable with.
Once the development environment is stable, you can bring the dockerized dependency up in a testing environment and begin exploring the tooling and, once again, how your workflows need to be adjusted - you’ll probably start using a private Docker registry (private Docker Hub repos are your friends), integrate image building into your build workflow, and automate deploying the new dependency.
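A sketch of what “tagged images as deployment artifacts” can look like in a build script: deriving an immutable tag from the commit being built. The registry and image name are hypothetical, and the commit hash is passed in (or faked) so the sketch runs standalone:

```shell
#!/bin/sh
# Tag images with the commit they were built from, so every deployment
# artifact is traceable and reproducible. Registry and image name are made up.
COMMIT="${1:-abc1234}"   # normally: $(git rev-parse --short HEAD)
IMAGE="registry.example.com/myapp:${COMMIT}"
echo "$IMAGE"
# A real build workflow would continue with:
#   docker build -t "$IMAGE" . && docker push "$IMAGE"
```

Avoid deploying `latest`: with commit-based tags you always know exactly which code a running container came from, and rolling back means deploying a previous tag.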
The Way Forward
You arrived at your testing environment, and that’s great progress! Here are the issues you’ll need to mind and now have the opportunity to take care of, regarding your new dockerized service:
- Handling logs.
- Getting notifications when stuff goes wrong.
- Making the dockerized service restart.
- Performing management tasks.
- Adding relevant tests.
- Writing down your standard operating procedures, also known as good ops documentation.
- Adjusting your deployment workflows.
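Several of the points above (restarts, logs) have first-class Docker support. A compose-style sketch with a hypothetical service name, using a restart policy and bounded local log files:

```yaml
# Sketch: restart policy plus size-limited log files for a dockerized service.
services:
  your-service:              # hypothetical name
    image: your-service:1.2.3
    restart: unless-stopped  # come back up after crashes and reboots
    logging:
      driver: json-file
      options:
        max-size: "10m"      # rotate logs so they can't fill the disk
        max-file: "3"
```

This only keeps logs on the host; shipping them somewhere you’ll actually look at them is still on you.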
While the next step might seem like “I will use it in staging next, and production later”, this would be a very bad decision. Staging should be as close to production as possible. Bad things happen - also known as hard-to-debug issues, or “stuff which goes boom at night”. Instead, you can move forward by bringing up a second machine, running the same service you started with, but using Docker. Still have the old one as a backup, but switch over to using the new one. After enough time to adapt and find out what you had been missing, you’ll be able to completely switch over.
Docker is great! It’s a very powerful tool, and can make your life better. It is however not an ultimate solution to all deployment issues, and will not save you from having to deal with server stuff. It will, however, make it easier to do server stuff right in the long term. You should make a well-informed decision regarding whether it’s a good fit for your current setup and workflows. You should start small, and take some time to learn, for your team to get on board, and for your workflows to adapt.
Only move forward if you completely understand the reasons for using it, and are sure that your expectations are in sync with achievable outcomes. By the time you’ve migrated enough of your setup to utilize Docker, you’ll have an easier time switching to Kubernetes. Kubernetes is my favourite way to deploy Docker in production, but that’s more of a personal preference and another topic.