December 12, 2022 12:16 am GMT

Provisioning a Persistent EBS-backed Storage on Amazon EKS using Helm

Deploying stateful applications on Kubernetes can be complex. In this demo, we will deploy a PostgreSQL database to Amazon Elastic Kubernetes Service (EKS) and configure its persistence on Amazon Elastic Block Store (EBS). We will use Helm, a Kubernetes package manager, to make this process more efficient.

Prerequisites

First, ensure that the following utilities are installed and properly configured on your machine.
AWS CLI
eksctl
Helm

1. Create an EKS cluster
You can use either the AWS Management Console or the eksctl utility to create your Kubernetes cluster; for convenience, we use eksctl.
Create a file "demo-cluster.yaml" and paste the following into it.

# demo-cluster.yaml
# A cluster with two managed nodegroups
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-west-1
managedNodeGroups:
  - name: managed-ng-1
    instanceType: t3.small
    minSize: 1
    maxSize: 2
  - name: managed-ng-2
    instanceType: t3.small
    minSize: 1
    maxSize: 2

This file defines a Kubernetes cluster named demo-cluster with two managed nodegroups. To apply it, run:

eksctl create cluster -f demo-cluster.yaml

After the cluster finishes provisioning, view its nodes with:

kubectl get nodes

2. Create an IAM OIDC identity provider

  • Determine whether you have an existing IAM OIDC provider for your cluster.

Retrieve your cluster's OIDC provider ID and store it in a variable.

oidc_id=$(aws eks describe-cluster --name demo-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
  • Determine whether an IAM OIDC provider with your cluster's ID is already in your account.
aws iam list-open-id-connect-providers | grep $oidc_id
  • Create an IAM OIDC identity provider for your cluster with the following command:
eksctl utils associate-iam-oidc-provider --cluster demo-cluster --approve

3. Configure a Kubernetes service account to assume an IAM role

  • Create an IAM role and associate it with a Kubernetes service account. You can use either eksctl or the AWS CLI; here, we use the AWS CLI.

a. Create a Kubernetes service account. Copy and paste the following into your terminal.
cat >my-service-account.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
EOF
kubectl apply -f my-service-account.yaml

b. Set your AWS account ID to an environment variable with the following command.

account_id=$(aws sts get-caller-identity --query "Account" --output text)

c. Set the cluster's OIDC identity provider to an environment variable with the following command.

oidc_provider=$(aws eks describe-cluster --name demo-cluster --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

d. Set variables for the namespace and name of the service account.

export namespace=kube-system
export service_account=ebs-csi-controller-sa

e. Run the following command on your terminal to create a trust policy file for the IAM role.

cat >aws-ebs-csi-driver-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$oidc_provider:aud": "sts.amazonaws.com",
          "$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
        }
      }
    }
  ]
}
EOF

f. Create the role AmazonEKS_EBS_CSI_DriverRole, replacing "my-role-description" with a description for your role.

aws iam create-role --role-name AmazonEKS_EBS_CSI_DriverRole --assume-role-policy-document file://aws-ebs-csi-driver-trust-policy.json --description "my-role-description"

g. Attach the required AWS managed policy to the role with the following command.

aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-name AmazonEKS_EBS_CSI_DriverRole

h. Annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume.

kubectl annotate serviceaccount -n $namespace $service_account eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole

4. Adding the Amazon EBS CSI add-on

To improve security and reduce the amount of work, you can manage the Amazon EBS CSI driver as an Amazon EKS add-on. You can use eksctl, the AWS Management Console, or the AWS CLI to add it to your cluster. To add the add-on using eksctl, run the following command (it reuses the $account_id variable set earlier):

eksctl create addon --name aws-ebs-csi-driver --cluster demo-cluster --service-account-role-arn arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole --force

5. Update the worker node roles

Attach the AmazonEBSCSIDriverPolicy policy to the roles of the cluster's two worker nodegroups, as well as to the cluster's ServiceRole.
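The attachments above can also be scripted with the AWS CLI. The sketch below defines a hypothetical helper; the role names in the commented-out invocations are placeholders (marked XXXX) standing in for the names eksctl generated for this cluster, which you can look up in the IAM console or the CloudFormation stacks.

```shell
# AWS-managed policy required by the EBS CSI driver.
EBS_POLICY_ARN="arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"

# Hypothetical helper: attach the policy to one IAM role by name.
attach_ebs_policy() {
  aws iam attach-role-policy \
    --policy-arn "$EBS_POLICY_ARN" \
    --role-name "$1"
}

# Substitute the real role names from your cluster before running:
# attach_ebs_policy "eksctl-demo-cluster-nodegroup-managed-ng-1-NodeInstanceRole-XXXX"
# attach_ebs_policy "eksctl-demo-cluster-nodegroup-managed-ng-2-NodeInstanceRole-XXXX"
# attach_ebs_policy "eksctl-demo-cluster-cluster-ServiceRole-XXXX"
```

The invocations are left commented out because the generated role-name suffixes differ for every cluster.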

6. Deploying postgres database with Helm

Helm is a Kubernetes deployment tool that automates the creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters.

- Define storage class

You must define a storage class for your cluster to use, and you should mark one as the default for persistent volume claims that don't specify a class.
To create a storage class for your Amazon EKS cluster, create a manifest file for it. The following storage-class.yaml example defines a StorageClass named "aws-pg-sc" that uses the Amazon EBS gp2 volume type.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4

Use kubectl to create the storage class from the manifest file.

kubectl create -f storage-class.yaml

Run the following to view the available storage classes in your cluster:

kubectl get storageclass
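To sanity-check the storage class independently of Helm, you could apply a minimal claim like the one below (the name demo-claim and the 1Gi size are illustrative) and confirm it binds:

```yaml
# demo-claim.yaml -- illustrative PVC that exercises the aws-pg-sc class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: aws-pg-sc
  resources:
    requests:
      storage: 1Gi
```

Applying it with kubectl apply -f demo-claim.yaml should result in a Bound claim backed by a new EBS volume; delete it with kubectl delete -f demo-claim.yaml when done.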

- Helm chart for postgresql

In this demo, we will use the PostgreSQL Helm chart maintained by Bitnami. We will override some values in values.yaml so that the chart uses the storage class we provisioned earlier. Create a file "values-postgresdb.yaml" and paste the following into it.

primary:
  persistence:
    storageClass: "aws-pg-sc"
auth:
  username: postgres
  password: demo-password
  database: demo_database

- Installing the Chart

To install the chart with the release name pgdb:

helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb --values values-postgresdb.yaml my-repo/postgresql

After the database successfully deploys, check the PV, PVC, and pod with the following commands; you should see output similar to this:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            Delete           Bound    default/data-pgdb-postgresql-0   aws-pg-sc               87s

$ kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-pgdb-postgresql-0   Bound    pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            aws-pg-sc      6h44m

$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
pgdb-postgresql-0   1/1     Running   0          16m
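You can also connect to the database to confirm it accepts queries. Kubernetes stores secret values base64-encoded, so the password from values-postgresdb.yaml must be decoded; the sketch below shows the decode step locally (the kubectl line is a comment, and the secret name pgdb-postgresql assumes the release name used above):

```shell
# With the release above, the chart stores the password in a secret:
#   kubectl get secret pgdb-postgresql -o jsonpath="{.data.password}" | base64 -d
# The encode/decode round trip itself, demonstrated locally:
encoded=$(printf '%s' 'demo-password' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```

With the password in hand, you could reach the database via kubectl port-forward svc/pgdb-postgresql 5432:5432 and then psql -h 127.0.0.1 -U postgres -d demo_database -p 5432.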

You can also verify that the persistent storage was provisioned by navigating in the AWS Management Console to EC2 >> Elastic Block Store >> Volumes. The screenshot attached shows the volume provisioned in my case.

[Image: provisioned-volume]

7. Cleaning up

To clean up, delete the Kubernetes cluster we created earlier by running:

 eksctl delete cluster -f demo-cluster.yaml

If the above command doesn't delete the cluster due to lingering resources (such as the running pod), navigate to the CloudFormation console and manually delete each remaining CloudFormation stack.

[Image: Undeleted-stack]


Original Link: https://dev.to/aws-builders/provisioning-a-persistent-ebs-backed-storage-on-amazon-eks-using-helm-4gh4
