This page contains the instructions and general information for my OSCON 2016 tutorial:
Don't fix it. Throw it away! Introduction to disposable infrastructure
Please complete this survey after the course! Thanks!
The following are the prerequisites for the course. PLEASE come to the course with all of this sorted out. Don't hesitate to contact me with questions. It's a short lesson, so we don't have time to spend playing IT admin.
Something to hack on. Preferably a laptop.
You are free to choose what you'd like to run this software on. It will be much easier for both of us if you're on OS X or Linux though, as I haven't touched Windows in years. If you're using a VM, make sure it can reach the Internet. Note that I'll be doing the tutorial on a Mac.
If you're on the OSCON Wifi, you can download these from the local OSCON server: http://172.16.0.20/oscon/DorrosChris/. For the Vagrant box add, you can run this:
vagrant box add ubuntu/trusty64 \
http://172.16.0.20/oscon/DorrosChris/trusty-server-cloudimg-amd64-vagrant-disk1.box
Install the latest version of this software:
If you're on a Mac, note that most of these are available via 'brew':
brew cask install virtualbox
brew cask install vagrant
brew install packer
brew install terraform
Install the Ubuntu Trusty 64 box. This one is important to do before the session, as it's a large OS image file, and we don't want to overload the conference WiFi. (DoS Techniques is a different course number)
vagrant box add ubuntu/trusty64
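To confirm the box downloaded correctly, you can list your locally installed boxes:
vagrant box list
You should see ubuntu/trusty64 in the output.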
We'll use Puppet during the tutorial, but you don't need to install it on your laptop: Vagrant and Packer both run Puppet inside the guest machine for us.
An active AWS account
A github.com account (optional but recommended)
Finally, give everything a quick smoke test before the session (the box add is a no-op if you've already done it):
vagrant box add ubuntu/trusty64
cd [gh_repo]/terraform
export AWS_ACCESS_KEY_ID='youraccesskeyid'
export AWS_SECRET_ACCESS_KEY='yoursecretkey'
terraform plan
terraform apply
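For orientation, the config you're applying stands up a VPC with two subnets. A minimal sketch of the shape is below; the resource names here are illustrative, not necessarily what's in the repo, so check the .tf files for the real ones:
# Sketch only -- resource names are assumptions; the CIDRs match my config.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet-1" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"   # must fall inside the VPC's /16
}

resource "aws_subnet" "subnet-2" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.2.0/24"
}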
If this fails due to an IP CIDR conflict, navigate in the GUI to "VPC -> Your VPCs" and "VPC -> Subnets" and look for a free RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16). Make sure the subnet is actually a subnet of the VPC range (in my config I'm using 10.0.0.0/16 for the VPC, and 10.0.1.0/24 and 10.0.2.0/24 as the two subnets, as in the sketch above).
STOP HERE! We'll cover sections 1-4 during the tutorial.
In this step we are going to spin up a new Vagrant box, using Ubuntu 14.04, and deploy a web server (nginx) via Puppet.
vagrant init ubuntu/trusty64
Open the generated Vagrantfile and forward guest port 80 to host port 8080:
config.vm.network "forwarded_port", guest: 80, host: 8080
vagrant up && vagrant ssh
Exit out of the SSH session:
exit
Open the Vagrantfile and add the Puppet provisioning steps:
config.vm.provision "puppet" do |puppet|
  puppet.manifests_path = "puppet/manifests"
  puppet.manifest_file = "app.pp"
  puppet.module_path = "puppet/modules"
end
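The app.pp manifest referenced here already ships in the repo under puppet/manifests. If you're curious, a top-level manifest for this layout is typically just a node block that pulls in the module; this is a sketch only, since the repo's actual file may differ:
# Sketch -- the repo provides the real puppet/manifests/app.pp.
node default {
  include app   # the 'app' module under puppet/modules sets up nginx
}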
Apply the Puppet provisioning by rebuilding the VM
vagrant destroy && vagrant up
Are you sure you want to destroy the 'default' VM? [y/N] y
From outside the VM: curl localhost:8080
You should see the default nginx page
Done! Let's tear down the Vagrant instance for now
vagrant destroy
cd [gh_repo]/packer
Set AWS credentials into environment variables (if not already done)
export AWS_ACCESS_KEY_ID='youraccesskeyid'
export AWS_SECRET_ACCESS_KEY='yoursecretkey'
If you're using Windows, you can add these directly to the Packer variables file (variables.json) instead. Be careful though: you don't want to check credentials into source control!
Let's test out our AWS setup by building a plain Ubuntu 14.04 AMI
packer validate -var-file=variables.json app.json
packer build -var-file=variables.json app.json
This should spit out an AMI ID at the end if successful.
Now we tell Packer to apply our Puppet config during build.
Add the following to variables.json: (mind the commas)
"puppet_manifest_file": "../puppet/manifests/app.pp"
Update the contents of the app.json file to look like the code below. Notice we added the "provisioners" block.
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `aws_region`}}",
      "vpc_id": "{{user `aws_vpc_id`}}",
      "subnet_id": "{{user `aws_subnet_id`}}",
      "source_ami": "{{user `ubuntu_1404_ami`}}",
      "instance_type": "{{user `aws_instance_type`}}",
      "ssh_username": "{{user `aws_ssh_username`}}",
      "ami_name": "{{user `ami_prefix`}}-{{timestamp}}",
      "iam_instance_profile": "{{user `iam_instance_profile`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 10",
        "sudo apt-get update",
        "sudo apt-get install -y puppet"
      ]
    },
    {
      "type": "puppet-masterless",
      "manifest_file": "{{user `puppet_manifest_file`}}",
      "module_paths": [
        "../puppet/modules"
      ]
    }
  ]
}
Build a new AMI image
packer validate -var-file=variables.json app.json
packer build -var-file=variables.json app.json
This will spit out an AMI ID at the end (ami-xxxxxxxx). Save this! Note down that it's the "default nginx image"
cd [gh_repo]/terraform
export AWS_ACCESS_KEY_ID='youraccesskeyid'
export AWS_SECRET_ACCESS_KEY='yoursecretkey'
Copy over the infrastructure files:
cp 3/* .
Open the "ec2_instances.tf" file and update the ami for the "app-1" instance to the AMI ID you saved during the last Packer build.
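The block you're editing looks something like this. It's a trimmed sketch: only the ami line needs to change, and the other attributes shown are illustrative, so keep the repo's values for those:
# ec2_instances.tf (excerpt, sketch)
resource "aws_instance" "app-1" {
  ami           = "ami-xxxxxxxx"   # <-- your "default nginx image" AMI ID
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.subnet-1.id}"
}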
Run Terraform in NOOP mode
terraform plan
Build our infrastructure!
terraform apply
Once the Terraform run is complete, note the public IP of the app-1 instance in the output. (This can also be found in the terraform.tfstate file or the EC2 section of the AWS GUI.)
After waiting a few minutes for the instance to boot up (you'll get connection refused until it's finished):
curl [public_ip]
Note down the DNS name of the ELB (elastic load balancer), shown as elb_fqdn in the Terraform output
This should return no content, since we haven't added the instance behind the load balancer yet:
curl [dns_name]
Note that curl -v would show a 503 (service unavailable) response. Also note you may get a DNS resolution error until the new record has propagated.
Add the instance to the ELB by editing "elb.tf". In the "instances" array, add: "${aws_instance.app-1.id}"
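In other words, elb.tf ends up with something like the excerpt below. The ELB resource name "app" and the elided attributes are assumptions; only the instances line is the edit you're making:
# elb.tf (excerpt, sketch)
resource "aws_elb" "app" {
  # ... listeners, health check, subnets as in the repo ...
  instances = ["${aws_instance.app-1.id}"]
}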
Apply our changes:
terraform plan
terraform apply
Try curling the ELB again. Note that it will take some time for this to work, as the instance needs to be healthy for a set period before the load balancer will direct traffic to it. (You could run this under the watch command if you have that installed.)
curl [dns_name]
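If you do have watch installed, something like this will poll the ELB every couple of seconds:
watch -n 2 curl -s [dns_name]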
Our website is pretty boring. Let's change the content up. We'll now get to exercise the entire deployment workflow we've just built!
cd [gh_repo]/puppet/modules/app/manifests
Create a new file to hold our web server config named "config.pp" with the following content
# config for our OSCON demo app
#
class app::config {
  file { "/etc/nginx/sites-available/default":
    source => "puppet:///modules/${module_name}/default-proxy",
    mode   => "0644",
    owner  => "root",
    group  => "root",
    ensure => present,
    notify => Service['nginx'],
  }
}
Add the new config to the "init.pp" file
include app::config
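After the edit, init.pp reads along these lines. This is a sketch: the existing class body is whatever the repo ships, and only the include line is new:
# init.pp (sketch)
class app {
  # ...existing resources from the repo stay as they are...
  include app::config   # <-- the new line
}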
Note that the new configuration file is already in the Puppet directory; we just weren't using it. To see it:
cat ../files/default-proxy
Test out the changes locally in Vagrant:
cd [gh_repo] #(base)
vagrant up
curl localhost:8080
You should see a weather forecast for Austin, TX
Build a new AMI using Packer:
cd packer
packer validate -var-file=variables.json app.json
packer build -var-file=variables.json app.json
Save the AMI ID! Note down that it's the "weather app image"
cd ../terraform
Create a new instance in the Terraform "ec2_instances.tf" config by copy/pasting the app-1 block to an app-2 block (including the "output"). Bump the version number too. Use the new AMI ID for this one.
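The result looks roughly like this sketch; attribute values and the output name should follow whatever the app-1 block in the repo uses:
# ec2_instances.tf (excerpt, sketch) -- copied from app-1, with the new AMI
resource "aws_instance" "app-2" {
  ami           = "ami-yyyyyyyy"   # <-- your "weather app image" AMI ID
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.subnet-1.id}"
}

output "app-2_public_ip" {
  value = "${aws_instance.app-2.public_ip}"
}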
Spin up the new app version instance
terraform plan
terraform apply
Grab the public IP address of your new instance from the tfstate file or the GUI, and curl it to ensure it's functioning properly
Add the new instance to the ELB (elb.tf, instances array)
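That is, the instances array now lists both instances:
instances = ["${aws_instance.app-1.id}", "${aws_instance.app-2.id}"]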
Open a new terminal to watch the ELB, if you don't already have one open
watch curl [elb_dns_name]
terraform plan
terraform apply
(you know the drill)
In the curl/watch window, you should see some switching of responses, based on which instance the ELB forwarded your request to. (keepalives, HTTP/2, DNS, and some other technicalities can make this quite variable, so don't worry so much if you don't see much exciting here)
Remove the old (app-1) instance from the ELB (elb.tf)
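The instances array is left with just the new instance:
instances = ["${aws_instance.app-2.id}"]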
terraform plan
terraform apply
Eventually your curl/watch window should ONLY be showing the new weather app (this may take a few minutes). Note that the weather app doesn't display nicely underneath watch - run a regular curl to see it in all its glory.
UPGRADE DONE! Notice that we didn't touch any existing production instance to make this change. We didn't even SSH into an EC2 instance once during this whole upgrade!
If you want to clean up everything we created in Terraform during the tutorial, just run:
terraform destroy