Automating Dokku Setup with AWS Managed Services

Dokku is a great little tool. It lets you set up your own virtual machine (VM) to facilitate quick and easy Heroku-like deployments through a git push command. Builds are fast, and updating environment variables is easy. The problem is that Dokku puts all of your services on a single instance. When you run your database on the Dokku instance, you risk losing it (and any data that's not yet backed up) should your VM suddenly fail.
Enter Amazon Web Services (AWS). By creating your database via Amazon's Relational Database Service (RDS), you get the benefit of simple deploys along with the redundancy and automated failover that can be set up with RDS. AWS, of course, includes other managed services that might help reduce the need to configure and maintain extra services on your Dokku instance, such as ElastiCache and Elasticsearch.
I've previously written about managing your AWS container infrastructure with Python and described a new project I'm working on called AWS Web Stacks. Sparked by some conversations with colleagues at the Caktus office, I began wondering if it would be possible to use a Dokku instance in place of Elastic Beanstalk (EB) or Elastic Container Service (ECS) to help simplify deployments. It turns out that it is not only possible to use Dokku in place of EB or ECS in a CloudFormation stack, but doing so speeds up build and deployment times by an order of magnitude, all while substituting a simple, open source tool for what was previously a vendor-specific resource. This "CloudFormation-aware" Dokku instance accepts inputs via CloudFormation parameters, and watches the CloudFormation stack for updates to resources that might result in changes to its environment variables (such as DATABASE_URL).
The full code (a mere 277 lines as of the time of this post) is available on GitHub, but I think it's helpful to walk through it section by section to understand exactly how CloudFormation and Dokku interact. The original code and the CloudFormation templates in this post are written in troposphere, a library that lets you create CloudFormation templates in Python instead of writing JSON manually.
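If you haven't worked with troposphere before, the key idea is that each Python object renders to a block of ordinary CloudFormation JSON. As a rough illustration (hand-written here with the standard library, not captured from the real template's output), a template containing a single parameter like the KeyName parameter defined below comes out looking something like this:

```python
import json

# Hand-written sketch of the CloudFormation JSON that troposphere emits
# for a template containing a single parameter; the real template for
# this stack is much longer but has the same overall shape.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "KeyName": {
            "Description": "Name of an existing EC2 KeyPair to enable SSH "
                           "access to the AWS EC2 instances",
            "Type": "AWS::EC2::KeyPair::KeyName",
            "ConstraintDescription": "must be the name of an existing "
                                     "EC2 KeyPair.",
        },
    },
}

print(json.dumps(template, indent=4, sort_keys=True))
```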
First, we create some parameters so we can configure the Dokku instance when the stack is created, rather than opening up an HTTP server to the public internet.
key_name = template.add_parameter(Parameter(
    "KeyName",
    Description="Name of an existing EC2 KeyPair to enable SSH access to "
                "the AWS EC2 instances",
    Type="AWS::EC2::KeyPair::KeyName",
    ConstraintDescription="must be the name of an existing EC2 KeyPair."
))
dokku_version = template.add_parameter(Parameter(
    "DokkuVersion",
    Description="Dokku version to install, e.g., \"v0.10.4\" (see "
                "https://github.com/dokku/dokku/releases).",
    Type="String",
    Default="v0.10.4",
))
dokku_web_config = template.add_parameter(Parameter(
    "DokkuWebConfig",
    Description="Whether or not to enable the Dokku web config "
                "(defaults to false for security reasons).",
    Type="String",
    AllowedValues=["true", "false"],
    Default="false",
))
dokku_vhost_enable = template.add_parameter(Parameter(
    "DokkuVhostEnable",
    Description="Whether or not to use vhost-based deployments "
                "(e.g., foo.domain.name).",
    Type="String",
    AllowedValues=["true", "false"],
    Default="true",
))
root_size = template.add_parameter(Parameter(
    "RootVolumeSize",
    Description="The size of the root volume (in GB).",
    Type="Number",
    Default="30",
))
ssh_cidr = template.add_parameter(Parameter(
    "SshCidr",
    Description="CIDR block from which to allow SSH access. Restrict "
                "this to your IP, if possible.",
    Type="String",
    Default="0.0.0.0/0",
))
Next, we create a mapping that allows us to look up the correct AMI for the latest Ubuntu 16.04 LTS release by AWS region:
template.add_mapping('RegionMap', {
    "ap-northeast-1": {"AMI": "ami-0417e362"},
    "ap-northeast-2": {"AMI": "ami-536ab33d"},
    "ap-south-1": {"AMI": "ami-df413bb0"},
    "ap-southeast-1": {"AMI": "ami-9f28b3fc"},
    "ap-southeast-2": {"AMI": "ami-bb1901d8"},
    "ca-central-1": {"AMI": "ami-a9c27ccd"},
    "eu-central-1": {"AMI": "ami-958128fa"},
    "eu-west-1": {"AMI": "ami-674cbc1e"},
    "eu-west-2": {"AMI": "ami-03998867"},
    "sa-east-1": {"AMI": "ami-a41869c8"},
    "us-east-1": {"AMI": "ami-1d4e7a66"},
    "us-east-2": {"AMI": "ami-dbbd9dbe"},
    "us-west-1": {"AMI": "ami-969ab1f6"},
    "us-west-2": {"AMI": "ami-8803e0f0"},
})
The AMIs can be located manually via https://cloud-images.ubuntu.com/locator/ec2/, or programmatically via the JSON-like data available at https://cloud-images.ubuntu.com/locator/ec2/releasesTable.
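If you'd rather not copy AMI IDs by hand, the releasesTable data can be parsed with a short script. The sketch below shows one way to do that; since the schema of that endpoint isn't formally documented, the field positions and the trailing-comma cleanup are assumptions based on inspecting the data, and a trimmed sample row is inlined here instead of fetching over the network:

```python
import json
import re

# A trimmed, hand-copied sample of the releasesTable structure; in real
# use you would fetch this from the URL above.
raw = '''{"aaData": [
    ["us-east-1", "xenial", "16.04 LTS", "amd64", "hvm:ebs-ssd", "20171026",
     "<a href=\\"#\\">ami-1d4e7a66</a>", "hvm"],
]}'''

# The endpoint is only JSON-like: it can contain trailing commas, which
# json.loads() rejects, so strip them before parsing.
cleaned = re.sub(r',\s*([\]}])', r'\1', raw)
rows = json.loads(cleaned)['aaData']

region_map = {}
for region, name, version, arch, instance_type, release, link, aki in rows:
    if version.startswith('16.04') and instance_type == 'hvm:ebs-ssd':
        # The AMI ID is wrapped in an HTML link; pull out just the ID.
        ami = re.search(r'ami-[0-9a-f]+', link).group(0)
        region_map[region] = {"AMI": ami}

print(region_map)  # {'us-east-1': {'AMI': 'ami-1d4e7a66'}}
```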
To allow us to access other resources (such as the S3 buckets and CloudWatch Logs group) created by AWS Web Stacks, we also need to set up an IAM instance role and instance profile for our Dokku instance:
instance_role = iam.Role(
    "ContainerInstanceRole",
    template=template,
    AssumeRolePolicyDocument=dict(Statement=[dict(
        Effect="Allow",
        Principal=dict(Service=["ec2.amazonaws.com"]),
        Action=["sts:AssumeRole"],
    )]),
    Path="/",
    Policies=[
        assets_management_policy,  # defined in assets.py
        logging_policy,  # defined in logs.py
    ]
)
instance_profile = iam.InstanceProfile(
    "ContainerInstanceProfile",
    template=template,
    Path="/",
    Roles=[Ref(instance_role)],
)
Next, let's set up a security group for our instance, so we can limit SSH access only to our IP(s) and open only ports 80 and 443 to the world:
security_group = template.add_resource(ec2.SecurityGroup(
    'SecurityGroup',
    GroupDescription='Allows SSH access from SshCidr and HTTP/HTTPS '
                     'access from anywhere.',
    VpcId=Ref(vpc),
    SecurityGroupIngress=[
        ec2.SecurityGroupRule(
            IpProtocol='tcp',
            FromPort=22,
            ToPort=22,
            CidrIp=Ref(ssh_cidr),
        ),
        ec2.SecurityGroupRule(
            IpProtocol='tcp',
            FromPort=80,
            ToPort=80,
            CidrIp='0.0.0.0/0',
        ),
        ec2.SecurityGroupRule(
            IpProtocol='tcp',
            FromPort=443,
            ToPort=443,
            CidrIp='0.0.0.0/0',
        ),
    ]
))
Since EC2 instances themselves are ephemeral, let's create an Elastic IP that we can keep assigned to our current Dokku instance, in the event the instance needs to be recreated for some reason:
eip = template.add_resource(ec2.EIP("Eip"))
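In CloudFormation terms, one way to keep that EIP pointed at the current instance is an AWS::EC2::EIPAssociation resource. The snippet below is a hand-written sketch of the JSON such a resource would look like, not code from the actual template, and it assumes the EIP is allocated in the VPC domain (required for the AllocationId attribute):

```python
import json

# Hypothetical sketch: associate the "Eip" resource above with the
# "Ec2Instance" resource defined later in the stack. Assumes a
# VPC-domain EIP, so Fn::GetAtt can retrieve its AllocationId.
eip_association = {
    "EipAssociation": {
        "Type": "AWS::EC2::EIPAssociation",
        "Properties": {
            "AllocationId": {"Fn::GetAtt": ["Eip", "AllocationId"]},
            "InstanceId": {"Ref": "Ec2Instance"},
        },
    },
}
print(json.dumps(eip_association, indent=2))
```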
Now for the EC2 instance itself. This resource makes up nearly half the template, so we'll take it section by section. The first part is relatively straightforward. We create the instance with the correct AMI for our region; the instance type, SSH public key, and root volume size configured in the stack parameters; and the security group, instance profile, and VPC subnet we defined elsewhere in the stack:
ec2_instance_name = 'Ec2Instance'
ec2_instance = template.add_resource(ec2.Instance(
    ec2_instance_name,
    ImageId=FindInMap("RegionMap", Ref("AWS::Region"), "AMI"),
    InstanceType=container_instance_type,
    KeyName=Ref(key_name),
    SecurityGroupIds=[Ref(security_group)],
    IamInstanceProfile=Ref(instance_profile),
    SubnetId=Ref(container_a_subnet),
    BlockDeviceMappings=[
        ec2.BlockDeviceMapping(
            DeviceName="/dev/sda1",
            Ebs=ec2.EBSBlockDevice(
                VolumeSize=Ref(root_size),
            )
        ),
    ],
    # ...
    Tags=Tags(
        Name=Ref("AWS::StackName"),
    ),
))
Next, we define a CreationPolicy that allows the instance to alert CloudFormation when it's finished installing Dokku:
ec2_instance = template.add_resource(ec2.Instance(
    # ...
    CreationPolicy=CreationPolicy(
        ResourceSignal=ResourceSignal(
            Timeout='PT10M',  # 10 minutes
        ),
    ),
    # ...
))
The UserData section defines a script that runs only once, when the instance is first created. In it, we install the CloudFormation helper scripts, execute a set of scripts that we define later, and signal to CloudFormation that instance creation is finished:
ec2_instance = template.add_resource(ec2.Instance(
    # ...
    UserData=Base64(Join('', [
        '#!/bin/bash\n',
        # install cfn helper scripts
        'apt-get update\n',
        'apt-get -y install python-pip\n',
        'pip install https://s3.amazonaws.com/cloudformation-examples/'
        'aws-cfn-bootstrap-latest.tar.gz\n',
        'cp /usr/local/init/ubuntu/cfn-hup /etc/init.d/cfn-hup\n',
        'chmod +x /etc/init.d/cfn-hup\n',
        # don't start cfn-hup yet until we install cfn-hup.conf
        'update-rc.d cfn-hup defaults\n',
        # call our "on_first_boot" configset (defined below):
        'cfn-init --stack="', Ref('AWS::StackName'), '"',
        ' --region=', Ref('AWS::Region'),
        ' -r %s -c on_first_boot\n' % ec2_instance_name,
        # send the exit code from cfn-init to our CreationPolicy:
        'cfn-signal -e $? --stack="', Ref('AWS::StackName'), '"',
        ' --region=', Ref('AWS::Region'),
        ' --resource %s\n' % ec2_instance_name,
    ])),
    # ...
))
Finally, in the Metadata section, we define a set of cloud-init scripts that (a) install Dokku, (b) configure global Dokku environment variables based on our stack (e.g., DATABASE_URL, CACHE_URL, ELASTICSEARCH_ENDPOINT, etc.), (c) install some configuration files needed by the cfn-hup service, and (d) start the cfn-hup service:
ec2_instance = template.add_resource(ec2.Instance(
    # ...
    Metadata=cloudformation.Metadata(
        cloudformation.Init(
            cloudformation.InitConfigSets(
                on_first_boot=['install_dokku', 'set_dokku_env', 'start_cfn_hup'],
                on_metadata_update=['set_dokku_env'],
            ),
            install_dokku=cloudformation.InitConfig(
                commands={
                    '01_fetch': {
                        'command': Join('', [
                            'wget https://raw.githubusercontent.com/dokku/dokku/',
                            Ref(dokku_version),
                            '/bootstrap.sh',
                        ]),
                        'cwd': '~',
                    },
                    '02_install': {
                        'command': 'sudo -E bash bootstrap.sh',
                        'env': {
                            'DOKKU_TAG': Ref(dokku_version),
                            'DOKKU_VHOST_ENABLE': Ref(dokku_vhost_enable),
                            'DOKKU_WEB_CONFIG': Ref(dokku_web_config),
                            'DOKKU_HOSTNAME': domain_name,
                            # use the key configured by key_name
                            'DOKKU_KEY_FILE': '/home/ubuntu/.ssh/authorized_keys',
                            # should be the default, but be explicit just in case
                            'DOKKU_SKIP_KEY_FILE': 'false',
                        },
                        'cwd': '~',
                    },
                },
            ),
            set_dokku_env=cloudformation.InitConfig(
                commands={
                    '01_set_env': {
                        # redirect output to /dev/null so we don't write
                        # environment variables to log file
                        'command': 'dokku config:set --global {} >/dev/null'.format(
                            ' '.join(['=$'.join([k, k]) for k in dict(environment_variables).keys()]),
                        ),
                        'env': dict(environment_variables),
                    },
                },
            ),
            start_cfn_hup=cloudformation.InitConfig(
                commands={
                    '01_start': {
                        'command': 'service cfn-hup start',
                    },
                },
                files={
                    '/etc/cfn/cfn-hup.conf': {
                        'content': Join('', [
                            '[main]\n',
                            'stack=', Ref('AWS::StackName'), '\n',
                            'region=', Ref('AWS::Region'), '\n',
                            'umask=022\n',
                            'interval=1\n',  # check for changes every minute
                            'verbose=true\n',
                        ]),
                        'mode': '000400',
                        'owner': 'root',
                        'group': 'root',
                    },
                    '/etc/cfn/hooks.d/cfn-auto-reloader.conf': {
                        'content': Join('', [
                            # trigger the on_metadata_update configset on any
                            # changes to Ec2Instance metadata
                            '[cfn-auto-reloader-hook]\n',
                            'triggers=post.update\n',
                            'path=Resources.%s.Metadata\n' % ec2_instance_name,
                            'action=/usr/local/bin/cfn-init',
                            ' --stack=', Ref('AWS::StackName'),
                            ' --resource=%s' % ec2_instance_name,
                            ' --configsets=on_metadata_update',
                            ' --region=', Ref('AWS::Region'), '\n',
                            'runas=root\n',
                        ]),
                        'mode': '000400',
                        'owner': 'root',
                        'group': 'root',
                    },
                },
            ),
        ),
    ),
    # ...
))
The install_dokku and start_cfn_hup scripts are configured to run only the first time the instance boots, whereas the set_dokku_env script runs any time the metadata associated with the EC2 instance changes.
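To make the set_dokku_env command concrete: the '=$'.join trick builds a config:set invocation that references each variable by name (e.g., DATABASE_URL=$DATABASE_URL), so the shell expands the actual values from the env dict at run time and secrets never appear in the command line itself. A quick illustration with made-up variable names and values:

```python
# Illustration of the command string built by set_dokku_env, using
# made-up environment variables in place of the stack's real ones.
environment_variables = {
    'DATABASE_URL': 'postgres://user:secret@db.example.com/app',
    'CACHE_URL': 'redis://cache.example.com:6379/0',
}

command = 'dokku config:set --global {} >/dev/null'.format(
    ' '.join(['=$'.join([k, k]) for k in environment_variables]),
)
print(command)
# dokku config:set --global DATABASE_URL=$DATABASE_URL CACHE_URL=$CACHE_URL >/dev/null
```

Note that the values ("secret" included) live only in the env dict passed to cfn-init, not in the command string that might end up in logs.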
Want to give it a try? Before creating a stack, you'll need to upload your SSH public key to the Key Pairs section of the AWS console so you can select it via the KeyName parameter. Click the Launch Stack button below to create your own stack on AWS. For help filling in the CloudFormation parameters, refer to the Specify Details section of the post on managing your AWS container infrastructure with Python.
If you create a new account to try it out, or if your account is less than 12 months old and you're not already using free tier resources, the default instance types in the stack should fit within the free tier, and unneeded services can be disabled by selecting (none) for the instance type.
Once the stack is set up, you can deploy to it as you would to any Dokku instance (or to Heroku proper):
ssh dokku@<your domain or IP> apps:create python-sample
git clone https://github.com/heroku/python-sample.git
cd python-sample
git remote add dokku dokku@<your domain or IP>:python-sample
git push dokku master
Alternatively, fork the aws-web-stacks repo on GitHub and adjust it to suit your needs. Contributions welcome.
Good luck and have fun!