Solenberg.dev

An Introduction to HashiCorp's Packer

May 15, 2020

What is Packer?

HashiCorp's Packer is an excellent open-source tool that lets you define a machine image's exact configuration in code. You can do simple things like having Packer install all current updates, or load complex scripts that deploy your full application. For this article, I am using Packer to create an EC2 AMI in AWS.

Why Would I Use Packer?

There are a variety of reasons I use Packer. Typically, I use Packer to produce golden images to share across my organization. This way, everyone is using the same standard base image with a few tools we require already installed. I have also used it to configure a base image for a huge web application that was known to take a long time to provision before the load balancer could start sending traffic. In my current role, we use Packer combined with AWS’s CodePipeline and CodeDeploy to do monthly patching of our base image and then share it out to all the accounts in our organization. Combined with some other tools, these images get pulled into auto scaling configs automagically so our instances can stay updated.

How do I use Packer?

Packer files use JSON format, so I find them straightforward to read and write.

Here is a sample file from the Packer documentation building an EBS backed Ubuntu instance:

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      },

      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "packer-example {{timestamp}}",

      "region": "us-east-1",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}"
    }
  ]
}

Let’s break it down

First:

"variables": {
  "aws_access_key": "",
  "aws_secret_key": ""
},

This section is pretty straightforward. It is where you define values you may want to override at build time, whether from a CI environment or anywhere else a value changes between runs.

NOTE: I strongly advise against hardcoding your aws_access_key and aws_secret_key in the file. Your CI platform should be able to load these as environment variables. For local development, there are many tools for AWS credential management.
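One pattern I like is pulling those values straight from environment variables with Packer's env template function, which is only available inside the variables block. A minimal sketch, using the standard AWS credential variable names:

```json
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  }
}
```

The builder still references them with {{user `aws_access_key`}}, and you can override either one at the command line with packer build -var 'aws_access_key=...' if you need to.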

Next:

"builders": [
  {
    "type": "amazon-ebs",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
        "root-device-type": "ebs"
      },
      "owners": ["099720109477"],
      "most_recent": true
    },

    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}",

    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1"
  }
]

The source_ami_filter block searches for a base image to build from. Here it matches the most recent HVM, EBS-backed Ubuntu 16.04 AMI published by Canonical, whose AWS account ID is 099720109477.

You can find all of this information with the aws-cli using aws ec2 describe-images and passing filters to narrow down your search. See AWS's documentation for more information. Some AMIs may require you to subscribe to the image in the AWS Marketplace before Packer can use it.
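For example, assuming you have the aws-cli configured with credentials, this query mirrors the filter in the template above (the owner ID and name pattern are copied straight from it):

```shell
# List Canonical's Ubuntu 16.04 EBS-backed HVM images, newest last
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*" \
            "Name=virtualization-type,Values=hvm" \
            "Name=root-device-type,Values=ebs" \
  --query 'sort_by(Images, &CreationDate)[].{Name: Name, ImageId: ImageId}' \
  --output table
```

The last image in the sorted output is the one "most_recent": true would select.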

The remaining keys are used for AMI configuration and may be specific to the base AMI you chose. For instance, if you're using an Amazon Linux AMI, you would need to change the ssh_username to ec2-user. Check these details on the Marketplace page for your base image. You can find more information about all the possible parameters for the EBS builder in the documentation.

Some keys like ami_name must be unique, so using {{timestamp}} provides a unique value every time you run Packer. Without a unique name, the build would fail before it starts due to a name collision. I prefer an {{isotime}} format because it's more human-readable; just keep in mind that AMI names cannot contain colons, so either pick a layout without them or pipe the result through Packer's clean_resource_name function, which replaces invalid characters.
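A sketch of what that looks like in the builder. The layout string is Go's reference time (Jan 2, 2006), and clean_resource_name turns the colons and spaces into characters AWS will accept:

```json
{
  "ami_name": "packer-example {{isotime \"2006-01-02 03:04:05\" | clean_resource_name}}"
}
```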

That’s great, now I have a copy of an image that’s already available. What’s the point?

I'm glad you asked! The sample I showed was missing the real bread and butter of what Packer can do. The real fun comes with the provisioners block, where you can get fancy with your configuration: running a simple update command, loading a bootstrap bash script, or running playbooks from configuration-management tools like Chef, Ansible, or SaltStack. More information can be found in the provisioners documentation.

A quick example based on our sample configuration:

{
  "variables": { ... },
  "builders": [ ... ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "echo '\n ---===### Updating System ###===--- \n'",
        "sudo apt-get update",
        "sudo apt-get upgrade -y",
        "echo '\n ---===### Rebooting for System Updates ###===--- \n'",
        "sudo reboot"
      ],
      "expect_disconnect": true
    }
  ]
}

In my example, I gave a simple inline shell script. Packer runs each line after the system comes online.
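With the template saved to a file (I'm assuming the name template.json here), running it is a two-command workflow:

```shell
# Catch JSON and template mistakes before launching anything
packer validate template.json

# Launch the temporary instance, run the provisioners, and register the AMI
packer build template.json
```

Validating first is cheap insurance: packer build spins up a real EC2 instance, so a typo caught locally saves you a failed (and billed) launch.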

I recently discovered that, while we were updating the instances, the kernel wasn't actually updating. When we would spin up a new instance, it would show that the new kernel was installed, but the instance would still be running the previous kernel version. To understand why, it helps to understand how Packer works: Packer launches a temporary instance, runs your provisioners on it, stops it, takes a snapshot, and registers the snapshot as an image for later use. The build instance is never rebooted, so the new kernel is never loaded. If you're running updates or anything else that requires a reboot, reboot inside your provisioner and make sure you set "expect_disconnect": true. Packer watches for the disconnect, retries the connection, and then continues the build; if it cannot reconnect to the machine, the build times out and reports an error.

I like to put in echo statements so I can glance over the output logs and find sections quickly.

Conclusion

This article was a pretty quick introduction to what Packer can do. Utilizing Packer to update systems or create pre-configured images in combination with a CI/CD platform can save your team valuable time when performing routine maintenance.


Written by R. James Solenberg, an AWS Solutions Architect and Linux sysadmin living in Indianapolis, Indiana. Follow him on Twitter