Update an Amazon EKS cluster Kubernetes version to 1.23+

Introduction

Caktus is committed to developing and maintaining high-quality web applications for clients, so we focus on software sustainability. Software sustainability refers to the ability of software to continue to function as expected over time, even as hardware and software environments change. It involves maintaining and updating software to remain reliable, secure, and compatible with new hardware and software technologies. Additionally, from a security perspective, attackers can exploit software vulnerabilities to gain unauthorized access to systems or data. Keeping software up-to-date with the latest security patches helps to reduce the risk of these vulnerabilities being exploited.

Our current task involved upgrading Kubernetes clusters. If you run a Kubernetes 1.22 cluster in EKS, you've likely seen this message about the Amazon EBS CSI driver:

The Container Storage Interface (CSI) migration feature offloads management operations of persistent volumes provisioned with the in-tree EBS storage plugin to the Amazon EBS CSI driver. This feature is enabled by default in Amazon EKS version 1.23 and later. If you are using EBS volumes in your cluster, then you must install the Amazon EBS CSI driver before updating your cluster to version 1.23 to avoid interruptions to your workloads.

The Amazon EKS Storage documentation provides additional information:

The existing in-tree Amazon EBS plugin is still supported, but by using a CSI driver, you benefit from the decoupling of Kubernetes upstream release cycle and CSI driver release cycle. Eventually, the in-tree plugin will be discontinued in favor of the CSI driver.

Below, we'll install the Amazon EBS CSI driver as an Amazon EKS add-on and then upgrade an Amazon EKS cluster's Kubernetes version from 1.22 to 1.23.

Install the Amazon EBS CSI driver

First, install the Amazon EBS CSI driver as an Amazon EKS add-on:

  1. Install the eksctl command line tool. If you're on an Apple Silicon Mac like me, you can run the following:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Darwin_arm64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
  2. Define your cluster name and region variables:
export CLUSTER=pressweb-stack-cluster
export AWS_REGION=us-west-2
  3. Create an IAM OIDC Provider:
eksctl utils associate-iam-oidc-provider --cluster $CLUSTER --approve
aws eks describe-cluster --region $AWS_REGION --name $CLUSTER --output json | grep issuer
  4. Once you have the IAM OIDC Provider associated with your cluster, create an IAM role bound to a service account:
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster $CLUSTER \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole
  5. Add the Amazon EBS CSI driver add-on:
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq --raw-output ".Account")
eksctl create addon \
    --name aws-ebs-csi-driver \
    --cluster $CLUSTER \
    --service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole \
    --force
  6. Finally, watch the status of the add-on and wait for it to be ACTIVE:
eksctl get addon --name aws-ebs-csi-driver --cluster $CLUSTER
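If the add-on fails to reach ACTIVE, a common culprit is a mismatch between the IAM role and the service account. As a quick sanity check (a sketch, assuming the role name chosen above), you can confirm the role exists and note its ARN:

```shell
# Sanity check: look up the IAM role created for the CSI driver and print
# its ARN -- it should match the --service-account-role-arn passed to
# `eksctl create addon` above
ROLE_NAME=AmazonEKS_EBS_CSI_DriverRole
aws iam get-role --role-name "$ROLE_NAME" --query 'Role.Arn' --output text
```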

Test the Amazon EBS CSI driver

Once the add-on is installed, it's worth verifying that it actually works in the cluster. To test it, we'll follow Deploy a sample application and verify that the CSI driver is working, and apply a set of YAML manifests:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
  1. Save this YAML to a file and apply it:
❯ kubectl apply -f csi.yaml
storageclass.storage.k8s.io/ebs-sc created
persistentvolumeclaim/ebs-claim created
pod/app created
  2. Take note of the new storageclass:
❯ kubectl get storageclass
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
ebs-sc          ebs.csi.aws.com         Delete          WaitForFirstConsumer   false                  3m3s
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  7d23h
  3. If it doesn't work, you'll see an error like this:
❯ kubectl describe pvc ebs-claim
Name:          ebs-claim
Namespace:     default
StorageClass:  ebs-sc
Status:        Pending
Volume:
Labels:        <none>
Annotations:  volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
              volume.kubernetes.io/selected-node: ip-10-0-15-208.ec2.internal
              volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       app
Events:
  Type     Reason                Age                  From                                                                                     Message
  ----     ------                ----                 ----                                                                                     -------
  Normal   WaitForFirstConsumer  3m30s                persistentvolume-controller                                                              waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    77s (x6 over 3m20s)  ebs.csi.aws.com_ebs-csi-controller-d8fdbc647-8qj5m_7192e285-1d50-4c74-b53d-2536e2265229  failed to provision volume with StorageClass "ebs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   Provisioning          13s (x8 over 3m30s)  ebs.csi.aws.com_ebs-csi-controller-d8fdbc647-8qj5m_7192e285-1d50-4c74-b53d-2536e2265229  External provisioner is provisioning volume for claim "default/ebs-claim"
  Normal   ExternalProvisioning  8s (x15 over 3m30s)  persistentvolume-controller                                                              waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
  Warning  ProvisioningFailed    3s (x2 over 3m9s)    ebs.csi.aws.com_ebs-csi-controller-d8fdbc647-8qj5m_7192e285-1d50-4c74-b53d-2536e2265229  failed to provision volume with StorageClass "ebs-sc": rpc error: code = Internal desc = Could not create volume "pvc-848fecb4-6810-4ec0-b264-34d37065fd40": could not create volume in EC2: RequestCanceled: request context canceled

If provisioning fails like this, double-check that the IAM OIDC provider is associated with your Kubernetes cluster and that the role name and ARN match.

  4. Once complete, the volume claim will be Bound:
❯ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ebs-claim   Bound    pvc-848fecb4-6810-4ec0-b264-34d37065fd40   4Gi        RWO            ebs-sc         21m
  5. Take note of the VolumeHandle below; you should be able to find the EBS volume with the same ID in the AWS EC2 console:
❯ kubectl describe pv pvc-848fecb4-6810-4ec0-b264-34d37065fd40
Name:              pvc-848fecb4-6810-4ec0-b264-34d37065fd40
Labels:            <none>
Annotations:      pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
                  volume.kubernetes.io/provisioner-deletion-secret-name:
                  volume.kubernetes.io/provisioner-deletion-secret-namespace:
Finalizers:        [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass:      ebs-sc
Status:            Bound
Claim:             default/ebs-claim
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          4Gi
Node Affinity:
  Required Terms:
    Term 0:        topology.ebs.csi.aws.com/zone in [us-east-1b]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            ebs.csi.aws.com
    FSType:            ext4
    VolumeHandle:      vol-0a2488d454fbf92e4
    ReadOnly:          false
    VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1677621314496-8081-ebs.csi.aws.com
Events:                <none>
  6. Finally, delete the resources:
❯ kubectl delete -f csi.yaml
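Before running the delete above, you can also confirm that the sample pod was actually writing to the EBS-backed volume. The file path comes from the pod spec's command:

```shell
# The sample pod appends a timestamp to /data/out.txt every 5 seconds;
# tailing it confirms the volume was mounted and is writable
OUT_FILE=/data/out.txt
kubectl exec app -- tail -n 3 "$OUT_FILE"
```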

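Upgrade the cluster to Kubernetes 1.23

With the CSI driver installed and verified, the upgrade itself can be sketched with eksctl. This is a sketch rather than a full runbook: the nodegroup name below is an assumption (list yours with eksctl get nodegroup --cluster $CLUSTER), and you should always try the upgrade against a non-production cluster first.

```shell
# Upgrade the control plane one minor version (1.22 -> 1.23); without
# --approve, eksctl only performs a dry run
export CLUSTER=pressweb-stack-cluster
export TARGET_VERSION=1.23
eksctl upgrade cluster --name "$CLUSTER" --version "$TARGET_VERSION" --approve

# Then upgrade each managed nodegroup to match the control plane
# ("ng-1" is a placeholder -- substitute your own nodegroup name)
eksctl upgrade nodegroup --name=ng-1 --cluster="$CLUSTER" --kubernetes-version="$TARGET_VERSION"
```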
And that's it! With the Amazon EBS CSI driver in place, your cluster is ready for Kubernetes 1.23+. Hopefully this walkthrough was helpful!
