vsupalov

Using Packer to Build Custom AWS AMIs in Different Regions

A Project Structure which helps you reuse work, looks fancy and lets you handle multiple AMIs gracefully.

April 29, 2016

Packer is a great tool for building machine images. Among the supported platforms are Amazon Machine Images (AMIs) for Amazon Web Services (AWS). To install the packages you need and configure everything to taste, you can use a multitude of tools, such as Chef, Puppet, Salt or plain Bash scripts.

Even if you focus on AWS exclusively and disregard the other platforms, you can create the same machine images from scratch in any region after tuning the configuration once, track the changes in a single code repository and upgrade your infrastructure with little effort.

After going through many projects which benefited from immutable infrastructure, I ended up with the following project structure. It is pleasant to use, whether you are prototyping or performing maintenance work on small projects months down the line, where complete automation would be overkill.

Whenever I want to build a new AMI, it is done with a single command:

$ ./build.sh set/region/us-east-1.json set/image/docker.json

Just set a region and an image, and the rest is taken care of. The command plays nicely with terminal autocompletion and is intuitive to type and easy to read.

An Overview

Here’s an illustration of what the build.sh script is doing:

From configuration files, via Salt and Packer, to a finished AMI

The script takes the config files passed to it as arguments and tells Packer to use them, along with the data from secrets.json, to fill in the variables of a template file. The template, in turn, is what Packer uses to know what and where to build. With each image type, a unique identifier is passed to Salt, which causes the right provisioning modules for the given AMI to be chosen and applied.

The actual building of the AMI happens on an AWS instance in the region of choice.

The Project Structure

Here is what the project tree looks like:

.
├── build.sh
├── logs
│   ├── stderr.log
│   └── stdout.log
├── README.md
├── salt
│   ├── docker.sls
│   ├── tools.sls
│   └── top.sls
├── secrets.json
├── set
│   ├── image
│   │   ├── base.json
│   │   └── docker.json
│   └── region
│       ├── eu_central_1.json
│       └── us_east_1.json
└── template.json

Setting Region- and Image-specific Variables

The set folder contains JSON configuration files which provide variables to the template. Those are divided into region and image files.

Here’s the content of the us_east_1.json file:

{
    "aws_region": "us-east-1",
    "aws_source_ami": "ami-ffffffff",
    "aws_vpc_id": "vpc-ffffffff",
    "aws_subnet_id": "subnet-ffffffff"
}

For a given region, we tell Packer which base AMI, VPC and subnet to use for the one-off image-building instance.
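The eu_central_1.json file from the tree follows the same pattern. A sketch with placeholder IDs (the actual AMI, VPC and subnet IDs are account- and region-specific):

```json
{
    "aws_region": "eu-central-1",
    "aws_source_ami": "ami-ffffffff",
    "aws_vpc_id": "vpc-ffffffff",
    "aws_subnet_id": "subnet-ffffffff"
}
```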

The image file is even simpler. The docker one contains:

{
    "ami_name_prefix": "Docker AMI",
    "salt_minion_id": "docker"
}

It contains a prefix, which is later used to create the timestamped AMI name, and a minion id for Salt.
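The base.json image file from the tree would look analogous. A sketch, assuming the base image only applies the default Salt modules:

```json
{
    "ami_name_prefix": "Base AMI",
    "salt_minion_id": "base"
}
```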

secrets.json is what you would expect:

{
    "aws_access_key_id": "???",
    "aws_secret_access_key": "???"
}

To avoid committing it by accident, I usually tell git to ignore it:

$ git update-index --assume-unchanged secrets.json
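An alternative, assuming you never want git to track the file at all, is a plain .gitignore entry:

```shell
# Keep secrets.json out of version control entirely.
echo "secrets.json" >> .gitignore
```

Note that --assume-unchanged only makes sense for a file that is already tracked (for example, committed once with placeholder values); a .gitignore entry is the better fit if the file should never enter the repository.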

The Template

Here is where the previous parts fit in. Variables from the individual config files are used to fill the gaps in the variables section. These configure a single amazon-ebs builder. Finally, the Salt minion id is passed to the salt-masterless provisioner.

{
    "variables": {
        "aws_access_key_id": "",
        "aws_secret_access_key": "",
        "ami_name_prefix": "",
        "salt_minion_id": "",
        "aws_region": "",
        "aws_source_ami": "",
        "aws_vpc_id": "",
        "aws_subnet_id": ""
    },
    "builders": [
    {
        "type": "amazon-ebs",
        "name": "{{user `aws_region`}} {{user `salt_minion_id`}}",
        "instance_type": "t2.small",
        "ssh_username": "ubuntu",
        "associate_public_ip_address": true,
        "access_key": "{{user `aws_access_key_id`}}",
        "secret_key": "{{user `aws_secret_access_key`}}",
        "region": "{{user `aws_region`}}",
        "source_ami": "{{user `aws_source_ami`}}",
        "vpc_id": "{{user `aws_vpc_id`}}",
        "subnet_id": "{{user `aws_subnet_id`}}",
        "ami_name": "{{user `ami_name_prefix`}} {{isotime \"2006-01-02 15-04-05\"}}"
    }
  ],

  "provisioners": [
    {
        "type": "salt-masterless",
        "local_state_tree": "salt",
        "bootstrap_args": "-i {{user `salt_minion_id`}}"
    }
  ]
}

The Provisioner

The salt folder contains the usual top.sls file, along with further files or folders defining individual configuration modules. Here is what the top.sls looks like:

base:
  '*':
    - tools
  'docker':
    - docker

This applies the default modules to every minion, and the docker module only if that is the Salt minion id.
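A minimal docker.sls to match could look like the following. This is a sketch, assuming an Ubuntu base image where installing the docker.io package is sufficient:

```yaml
# Install the Docker package from the Ubuntu repositories.
docker:
  pkg.installed:
    - name: docker.io

# Make sure the Docker daemon is running once the package is in place.
docker-service:
  service.running:
    - name: docker
    - require:
      - pkg: docker
```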

The Script

The build.sh script is where everything comes together. It simply passes the two command-line arguments it expects, for the region and image, on to the Packer binary. As Salt is kind of terrible to debug from Packer terminal output alone, everything is also piped into files for later review.

#!/bin/bash
set -e

echo "Checking input parameters."
if [ "$#" -ne 2 ]; then
    echo "Usage: build.sh set/region/your-region.json set/image/image.json"
    exit 1
fi

region="$1"
image="$2"

echo "Running build."
mkdir -p logs
packer build -var-file secrets.json -var-file "$region" -var-file "$image" template.json > >(tee logs/stdout.log) 2> >(tee logs/stderr.log >&2)

In Conclusion

If you are working with multiple AWS AMIs and need to build one or another of them in a particular region from time to time, this project setup will probably make your life a little more pleasant. That said, in its current form it is rather simplistic and not geared towards things like running parallel build jobs or handling multiple platforms.

I’m rather happy with it for my humble needs, and sincerely hope that you can make use of this article either practically or as inspiration to customize your current workflows.
