
For the Review, Appraisal, and Triage of Mail (RATOM) project, funded by the Andrew W. Mellon Foundation, we were tasked with deploying to a Microsoft Azure environment. More details about the project are in our first blog post in this Learn With Us blog series. Caktus has experience with Amazon Web Services (AWS) and Google Cloud, but we hadn't had the opportunity to use Azure yet, so we were eager to explore that environment and document our experience. The entire deployment process is available on GitHub as a reference under the StateArchivesOfNorthCarolina/ratom-deploy repository.

We’ve used Infrastructure-as-Code tools like AWS CloudFormation with aws-web-stacks, but since this was our first exploration, we wanted to keep it simple so we could really learn the command-line interface (CLI).

In this post, we explore creating the following Azure services using the CLI:

- An Azure Kubernetes Service (AKS) cluster
- An Azure Database for PostgreSQL server

Install the Azure CLI

First install the Azure CLI. On a Mac, you can run:

brew update && brew install azure-cli

To sign in, use the az login command. If the CLI can open your default browser, it will do so and load an Azure sign-in page.

(Optional) Set default subscription

Azure subscriptions were confusing to me at first. While services like AWS aggregate billing for an entire account, Azure resources can be billed directly to specific subscriptions within an account. In fact, all resources created within Azure must be tied to a subscription. If you have multiple subscriptions, it's easier to set a default subscription so you don't have to associate it with each az command.
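If you're unsure which subscriptions you have access to, the CLI can list them; the subscription marked IsDefault is the one used when no default is set explicitly:

```shell
# List the subscriptions available to the signed-in account
az account list --output table
```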

You can configure the default Azure subscription, which subsequent az commands will use, with:

az account set --subscription NAME_OR_ID

Create an Azure Kubernetes Service (AKS) Cluster

Azure resource groups are collections of resources. For our project, we created a RATOM resource group in the East US location using az group create:

export RESOURCE_GROUP=ratom-group
az group create \
    --name $RESOURCE_GROUP \
    --location eastus

Next, we created a new managed Azure Kubernetes Service (AKS) cluster using az aks create:

export CLUSTER_NAME=caktus-ratom
az aks create \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --location eastus \
    --node-count 2 \
    --node-vm-size Standard_D2s_v3 \
    --enable-addons monitoring \
    --kubernetes-version 1.15.7

Azure Database for PostgreSQL

PostgreSQL is our preferred database and we wanted an Azure-managed instance. So we created an Azure Database for PostgreSQL server using az postgres server create:

export AZ_PGSERVER=ratomdb
export PGUSER=ratom
export PGPASSWORD=<password>
az postgres server create \
    --name $AZ_PGSERVER \
    --admin-user $PGUSER \
    --admin-password $PGPASSWORD \
    --resource-group $RESOURCE_GROUP \
    --sku-name B_Gen5_1 \
    --version 11 \
    --location eastus

While this was creating, we went ahead and configured our cluster.

Configure Your Kubernetes Cluster

We installed kubectl, which we used to manage the Kubernetes cluster.
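If you don't already have kubectl, the Azure CLI can install it for you:

```shell
# Downloads kubectl and adds it to your PATH
az aks install-cli
```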

To configure kubectl to connect to your Kubernetes cluster, use az aks get-credentials:

az aks get-credentials --resource-group ratom-group --name $CLUSTER_NAME

This configures kubectl credentials to the cluster under the context defined in $CLUSTER_NAME. Verify the connection to your cluster with:

kubectl get nodes

You should see a list of nodes.

Next, we added the cluster ingress controller and certificate manager using caktus.k8s-web-cluster, an Ansible role maintained by Caktus.

You can review the variables in host_vars/caktus-ratom.yaml in the ratom-deploy repository for a working example. The most important variables are:

- k8s_context: The name of your cluster context, found in your .kube/config file. This will likely just be $CLUSTER_NAME.
- k8s_letsencrypt_email: The email address used for Let's Encrypt cert-related emails.

These will need to be configured specifically for your cluster.
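As a sketch, a minimal host_vars/caktus-ratom.yaml might contain something like this (the email address here is a placeholder):

```yaml
# Cluster context, as named in your .kube/config
k8s_context: caktus-ratom
# Placeholder address; use a real inbox that you monitor
k8s_letsencrypt_email: admin@example.com
```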

Next add $CLUSTER_NAME to deploy/inventory and add it to the [k8s] group. Use the existing clusters as a reference.
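For example, the relevant portion of deploy/inventory might look like this, assuming the cluster name from earlier:

```ini
[k8s]
caktus-ratom
```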

Install it with:

ansible-playbook -l $CLUSTER_NAME deploy.yaml -vv

Test Let's Encrypt

During the installation, an Azure public IP address is created for the nginx ingress controller. It will take a few minutes to be assigned, but eventually you should see it with this command:

kubectl get service -n ingress-nginx

Add a DNS record for k8s_echotest_hostname to point to this IP address. Give the record a minute or two to propagate.

Now install the echo test server:

ansible-playbook -l $CLUSTER_NAME echotest.yaml -vv

Give the certificate a couple minutes to be generated and validated. While waiting, you can watch the output of:

kubectl -n echoserver get pod

When the cm-acme-http-solver pod goes away, the certificate should be validated. Now, navigate to k8s_echotest_hostname and ensure that you have a valid certificate.
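You can also check the certificate from the command line; this sketch assumes ECHO_HOST is set to the value of k8s_echotest_hostname:

```shell
# -I fetches only the response headers; curl exits non-zero
# if the TLS certificate fails validation
curl -I "https://$ECHO_HOST"
```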

To uninstall echotest, run:

ansible-playbook -l $CLUSTER_NAME echotest.yaml --extra-vars "k8s_echotest_state=absent" -vv

Add PostgreSQL Firewall Rules

Next, for our cluster to communicate with our database, the PostgreSQL server's firewall must allow connections from our Kubernetes cluster's outbound IP address.

Save the outbound IP address of the kubernetes cluster to an environment variable. This is not the same IP address obtained from the ingress controller above. Find it in the Azure console and add it to your shell's environment:
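If you'd rather stay in the CLI, the outbound IP can usually be found there too. AKS keeps the resources it manages in a separate node resource group; this sketch assumes a standard outbound configuration:

```shell
# Look up the node resource group that AKS manages for the cluster...
NODE_RG=$(az aks show \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --query nodeResourceGroup -o tsv)
# ...then list the public IP addresses allocated within it
az network public-ip list --resource-group $NODE_RG --query "[].ipAddress" -o tsv
```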

export IP_ADDRESS=<ip-address>

Add a firewall rule to grant it access:

az postgres server firewall-rule create \
    --resource-group $RESOURCE_GROUP \
    --server-name $AZ_PGSERVER \
    --name k8s-cluster \
    --start-ip-address $IP_ADDRESS \
    --end-ip-address $IP_ADDRESS

You'll likely also want to grant access to your local IP, at least temporarily, so you can run a few SQL statements against the server. Re-run the above commands with this IP as well.

export IP_ADDRESS=<your-external-ip-address>

Add a firewall rule to grant it access:

az postgres server firewall-rule create \
    --resource-group $RESOURCE_GROUP \
    --server-name $AZ_PGSERVER \
    --name my-ip-address \
    --start-ip-address $IP_ADDRESS \
    --end-ip-address $IP_ADDRESS

Create Project PostgreSQL User and Database

Next, a PostgreSQL user and database must be created for the project. Obtain the FQDN with:

az postgres server show \
    --resource-group $RESOURCE_GROUP \
    --name $AZ_PGSERVER \
    | grep fullyQualifiedDomainName
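If you prefer to avoid grep, the CLI's built-in JMESPath support can return just that field:

```shell
# -o tsv prints the bare value with no JSON quoting
az postgres server show \
    --resource-group $RESOURCE_GROUP \
    --name $AZ_PGSERVER \
    --query fullyQualifiedDomainName -o tsv
```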

Set a few more environment variables for psql to use. Note that Azure requires the username to be in username@hostname format. This was an initial gotcha for us, since standard PostgreSQL doesn't use this convention.

export PGSSLMODE=require
export PGUSER=$PGUSER@$AZ_PGSERVER  # Azure requirement
export PGHOST=<fqdn>

Now you should be able to connect directly to the default postgres database:

psql postgres

Create a database and user. Adjust these parameters to your needs:
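A minimal sketch of those statements, with a placeholder role name and password you should replace:

```shell
# Run against the default database as the admin user.
# The GRANT lets the admin create a database owned by the new role,
# since the Azure admin account is not a superuser.
psql postgres <<'SQL'
CREATE ROLE ratom_app WITH LOGIN PASSWORD 'change-me';
GRANT ratom_app TO CURRENT_USER;
CREATE DATABASE ratom OWNER ratom_app;
SQL
```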


Setting up an Azure Environment is Straightforward

Now we have an Azure Kubernetes cluster with a managed PostgreSQL service for our RATOM project. Overall it was fairly easy to set up and get everything running within a few hours. The experience was on par with the AWS/Google Cloud counterparts and should be straightforward to pick up for anyone who has provisioned resources in these environments.
