Terraform, Ansible, sticky tape and magic
This code was extracted from Cognician's 3rd-gen AWS infrastructure on Oct 1 2016.
Cognician's codebase is still very much a work in progress :-)
This code reflects Cognician's overall design decisions. Given that it is extracted, it's a mix of the things Cognician needs.
Why share it? I wanted to give back. Several folks in the community really helped me get up to speed, either through their writing or through answering many questions on #terraform at https://hangops.slack.com/ - so many folks helped! Check them out.

Also, I strongly believe in Terraform: I can clearly see the leverage it produces, and I want to make it easy for others to see that and adopt it too.

Finally, I'm really hoping some folks are going to tell me how I'm doing things wrong, so that I can learn :-)
I'm sharing this 'as is'. I make no guarantees of maintenance of this code. Use at your own risk.
Just saying!
Sets up AWS credentials for the aws CLI and the rest of the tools.

Install the AWS CLI:

```shell
brew install awscli
```

Ensure that the environments you work with are declared in your ~/.aws/ files. Note that the default profile is empty; this is intentional. We'll declare a profile in our Terraform files.
~/.aws/config:

```ini
[default]
region = us-west-2

[profile cgn-master]
region = us-west-2

[profile cgn-staging]
role_arn = arn:aws:iam::SUB-ACCOUNT-ID:role/SUB-ACCOUNT-ROLE
source_profile = cgn-master
region = us-west-2
```
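A profile from these files can then be referenced in Terraform's AWS provider. A minimal sketch, assuming the cgn-staging profile above:

```hcl
# Sketch only: profile and region must match your ~/.aws/config.
provider "aws" {
  profile = "cgn-staging"
  region  = "us-west-2"
}
```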
~/.aws/credentials:

```ini
[default]

[cgn-master]
aws_access_key_id = ...
aws_secret_access_key = ...
```
Test that AWS is set up by calling aws ec2 describe-instances with a --profile cgn-??? arg; it should print info about the EC2 instances in the account that profile authenticates against.
Terraform manages AWS infrastructure - IAM users, S3 buckets, EC2 scaling groups, etc.
Install Terraform:

```shell
brew install terraform
```
For each _[environment] directory, run make remote:

```shell
cd _staging
make remote
```

This downloads the current Terraform state for that env from S3.
Verify that it's working with make plan:

```shell
make plan
```
You should see Terraform do some work and then declare that there are no differences between what you have and what's running.
Packer builds AMIs (Amazon Machine Images) for our EC2 instances to use.
Install Packer:

```shell
brew install packer
```
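For orientation, a Packer template combines a builder (which launches a source instance and snapshots it into an AMI) with provisioners (which configure it first). This is only a hedged sketch of the shape, not Cognician's actual cgn-base.json; every value here is a placeholder:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "profile": "cgn-staging",
    "region": "us-west-2",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "cgn-base {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update"]
  }]
}
```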
Ansible configures our instances for specific tasks e.g. Zookeeper or one of our apps.
Install Ansible:

```shell
brew install python
pip install ansible
```

Install the Ansible dynamic inventory for EC2:

```shell
mkdir -p /etc/ansible
cp playbooks/inventory/ec2.py /etc/ansible/hosts
cp playbooks/inventory/ec2.ini /etc/ansible/ec2.ini
```
For staging, put this into ~/.ssh/config:

```
Host *
  UseRoaming no
  ControlPath ~/.ssh/cm-%r@%h:%p
  ControlMaster auto
  ControlPersist 10m
  ForwardAgent yes
  Port 22

Host b.cgn.fyi
  HostName b.cgn.fyi
  User ubuntu
  IdentityFile ~/.ssh/your-ec2-ssh-key-for-that-env

Host 10.1.*
  ProxyCommand ssh -W %h:%p ubuntu@b.cgn.fyi
  User ubuntu
  IdentityFile ~/.ssh/your-ec2-ssh-key-for-that-env
```
AMIs are packed per environment. We may centralise them in the future.
```shell
cd _staging
make pack
```

This will pack _staging/amis/cgn-base.json, and eventually produce a new ami-xxxxxxxx value for you to place in the *_ami values in _staging/_staging.tfvars.
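Placing the new AMI id can be done by hand, or scripted. A minimal sketch, assuming a *_ami variable named base_ami; the variable name and the /tmp path are illustrative, not taken from the actual tfvars:

```shell
# Illustrative only: write a sample tfvars file, then splice in a new AMI id.
new_ami="ami-0123456789abcdef0"
tfvars=/tmp/_staging.tfvars              # stand-in for _staging/_staging.tfvars
printf 'base_ami = "ami-xxxxxxxx"\n' > "$tfvars"
# Replace the old value in place (-i.bak works on both GNU and BSD sed).
sed -i.bak "s/^base_ami = \".*\"/base_ami = \"${new_ami}\"/" "$tfvars"
cat "$tfvars"
```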
Terraform uses a two-phase approach: plan, then apply.

```shell
cd _staging
make remote  # you only need to do this once
make plan
```

This will assess what's live, compare it to your state, and come up with a plan to apply (which it stores in ./proposed.plan).

Assuming the output matches your intentions, apply the plan:

```shell
make apply
```

Once you are returned to the prompt, your changes are live, although some EC2 provisioning may still be in progress.
```shell
cd _staging
./gen-ansible-vars.py
```

This will populate playbooks/group_vars/all.yml with values produced by Terraform output.
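The generated file is plain YAML read by Ansible's group_vars mechanism. A hypothetical example of the kind of content gen-ansible-vars.py might emit; the key names here are made up for illustration, not the script's real output:

```yaml
# Hypothetical output; actual keys come from your terraform outputs.
vpc_id: vpc-0abc1234
zookeeper_ips:
  - 10.1.1.10
  - 10.1.1.11
```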
When using the SSH config described above, you can use this to get a list of IPs for an app to SSH into:

```shell
cd _staging
make instance-ips
```
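One way to use that list is to turn it into an ssh command per IP. A sketch, with sample IPs inlined here instead of piped from make instance-ips:

```shell
# In real use: ips="$(make instance-ips)"
ips="10.1.0.5
10.1.0.6"
# Build one ssh command per line of input.
cmds=$(printf '%s\n' "$ips" | while read -r ip; do
  printf 'ssh ubuntu@%s\n' "$ip"
done)
echo "$cmds"
```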
Ensure that terraform.tfstate is present: cd _staging && make remote.

Update the CircleCI environment variables (sourced from terraform output) with python update-circleci-env.py.

Upload the Ansible playbooks with bash upload-ansible-playbooks.sh.