April 17, 2021 10:14 pm GMT

Terraform v0.15 with AWS (EKS deployment)

Terraform v0.15 was released on April 14th.

In this post I will use the following resources:

Provision an EKS Cluster (AWS)
Terraform v0.15.0
Terraform Registry
Pre-Commit
Terraform Pre-commit
Terraform-docs
Tflint
Tfsec

This is based on the Provision an EKS Cluster (AWS) tutorial, tweaked a little bit: setting up some variables, splitting the .tf files, and adding some new providers and modules. I will also test a bunch of tools, like terraform-docs, tflint and tfsec, with a pre-commit git hook. Let's start!

Before starting, we should review the AWS resources needed for this task:

1 x Amazon VPC
6 x Amazon Subnet (3 x Public + 3 x Private)
3 x Amazon EC2
1 x Amazon EKS
1 x Kubernetes AWS-Auth policy

Because some AWS modules are not yet available for this Terraform version, I will create my own modules in my deployment.

First of all, I will create my project folder:

CMD> mkdir terraform-aws
CMD> cd terraform-aws

Then I will create some base Terraform files:

CMD> touch {main,outputs,variables,versions}.tf
CMD> ls -l
total 16
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:01 main.tf
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:01 outputs.tf
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:16 variables.tf
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:17 versions.tf

And enable the pre-commit in the project:

CMD> echo 'repos:
- repo: git://github.com/antonbabenko/pre-commit-terraform
  rev: master
  hooks:
  - id: terraform_fmt
  - id: terraform_validate
  - id: terraform_docs
  - id: terraform_docs_without_aggregate_type_defaults
  - id: terraform_tflint
    args:
    - args=--enable-rule=terraform_documented_variables
  - id: terraform_tfsec
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: master
  hooks:
  - id: check-merge-conflict
  - id: end-of-file-fixer' > .pre-commit-config.yaml
CMD> pre-commit install
pre-commit installed at .git/hooks/pre-commit

Let's start by setting up the "versions.tf" file, which will contain our required provider versions:

# Providers version
# Ref. https://www.terraform.io/docs/configuration/providers.html
terraform {
  required_version = "~>0.15"

  required_providers {
    # Base Providers
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    template = {
      source  = "hashicorp/template"
      version = "2.2.0"
    }
    # AWS Provider
    aws = {
      source  = "hashicorp/aws"
      version = "3.37.0"
    }
    # Kubernetes Provider
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.1.0"
    }
  }
}
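A quick note on the "~>" (pessimistic) version constraint used above: it only lets the rightmost given version component increment. This small sketch spells out what the two common forms accept:

# Illustration only; mirrors the required_version constraint above.
terraform {
  required_version = "~> 0.15" # >= 0.15.0 and < 1.0.0 (0.16, 0.17, ... are fine)
  # "~> 0.15.0" would instead mean >= 0.15.0 and < 0.16.0 (patch releases only)
}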

Then I set up some variables for the project:

# Common
variable "project" {
  default     = "cosckoya"
  description = "Project name"
}

variable "environment" {
  default     = "laboratory"
  description = "Environment name"
}

# Amazon
variable "region" {
  default     = "us-east-1"
  description = "AWS region"
}

variable "vpc_cidr" {
  type        = string
  default     = "10.0.0.0/16"
  description = "AWS VPC CIDR"
}

variable "public_subnets_cidr" {
  type        = list(any)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  description = "AWS Public Subnets"
}

variable "private_subnets_cidr" {
  type        = list(any)
  default     = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  description = "AWS Private Subnets"
}
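All of these variables have defaults, so nothing more is needed to run the project. If you want different values without editing the code, Terraform automatically loads a terraform.tfvars file; the values below are just hypothetical examples:

# terraform.tfvars -- example overrides, not part of the original project
project     = "mylab"
environment = "staging"
region      = "eu-west-1"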

This is my main.tf file. Here I changed the "locals" to build the tags with the "tomap(..)" function, updated the modules to the latest version, and bumped the Kubernetes version to 1.19... just to test and have fun.

provider "aws" {  profile    = "default"  region     = var.region  access_key = "AKIA.."  secret_key = "<SECRET-KEY-HERE"}data "aws_availability_zones" "available" {}locals {  cluster_name = "${var.project}-${var.environment}-eks"  tags = tomap({"Environment" = var.environment, "project" = var.project})}resource "random_string" "suffix" {  length  = 8  special = false}## Amazon Networkingmodule "vpc" {  source = "terraform-aws-modules/vpc/aws"  version = "2.78.0"  name                 = "${var.project}-${var.environment}-vpc"  cidr                 = var.vpc_cidr  azs                  = data.aws_availability_zones.available.names  private_subnets      = var.public_subnets_cidr  public_subnets       = var.private_subnets_cidr  enable_nat_gateway   = true  single_nat_gateway   = true  enable_dns_hostnames = true  tags = {    "kubernetes.io/cluster/${local.cluster_name}" = "shared"  }  public_subnet_tags = {    "kubernetes.io/cluster/${local.cluster_name}" = "shared"    "kubernetes.io/role/elb"                      = "1"  }  private_subnet_tags = {    "kubernetes.io/cluster/${local.cluster_name}" = "shared"    "kubernetes.io/role/internal-elb"             = "1"  }}## Amazon EKSmodule "eks" {  source          = "terraform-aws-modules/eks/aws"  cluster_name    = local.cluster_name  cluster_version = "1.19"  subnets         = module.vpc.private_subnets  tags = local.tags  vpc_id = module.vpc.vpc_id  workers_group_defaults = {    root_volume_type = "gp2"  }  worker_groups = [    {      name                          = "worker-group-1"      instance_type                 = "t2.small"      additional_userdata           = "echo foo bar"      asg_desired_capacity          = 2      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]    },    {      name                          = "worker-group-2"      instance_type                 = "t2.medium"      additional_userdata           = "echo foo bar"      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]      asg_desired_capacity          = 1    },  ]}data "aws_eks_cluster" "cluster" {  name = module.eks.cluster_id}data "aws_eks_cluster_auth" "cluster" {  name = module.eks.cluster_id}## Amazon Security Groupsresource "aws_security_group" "worker_group_mgmt_one" {  name_prefix = "worker_group_mgmt_one"  vpc_id      = module.vpc.vpc_id  ingress {    from_port = 22    to_port   = 22    protocol  = "tcp"    cidr_blocks = [      "10.0.0.0/8",    ]  }}resource "aws_security_group" "worker_group_mgmt_two" {  name_prefix = "worker_group_mgmt_two"  vpc_id      = module.vpc.vpc_id  ingress {    from_port = 22    to_port   = 22    protocol  = "tcp"    cidr_blocks = [      "192.168.0.0/16",    ]  }}resource "aws_security_group" "all_worker_mgmt" {  name_prefix = "all_worker_management"  vpc_id      = module.vpc.vpc_id  ingress {    from_port = 22    to_port   = 22    protocol  = "tcp"    cidr_blocks = [      "10.0.0.0/8",      "172.16.0.0/12",      "192.168.0.0/16",    ]  }}

These outputs are the "default" outputs from the sample code.

output "cluster_id" {  description = "EKS cluster ID."  value       = module.eks.cluster_id}output "cluster_endpoint" {  description = "Endpoint for EKS control plane."  value       = module.eks.cluster_endpoint}output "cluster_security_group_id" {  description = "Security group ids attached to the cluster control plane."  value       = module.eks.cluster_security_group_id}output "kubectl_config" {  description = "kubectl config as generated by the module."  value       = module.eks.kubeconfig}output "config_map_aws_auth" {  description = "A kubernetes configuration to authenticate to this EKS cluster."  value       = module.eks.config_map_aws_auth}output "region" {  description = "AWS region"  value       = var.region}output "cluster_name" {  description = "Kubernetes Cluster Name"  value       = local.cluster_name}

Time to have fun now. Let's play with this:

Initialize the project

CMD> terraform init
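Since Terraform 0.14, init also writes a dependency lock file that pins the exact provider versions it selected; it is worth committing this file along with the code:

CMD> ls .terraform.lock.hcl
.terraform.lock.hcl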

Here we should test the pre-commit rules that we set up and take note of every tfsec error about security compliance. Try to resolve each one, or suppress it with an ignore comment as described in the tfsec docs.
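For findings you decide to accept, tfsec supports inline ignore comments. A sketch follows; the rule ID (AWS018, "missing description") is just an example here, use whatever ID tfsec actually reports:

# Sketch: suppress a single tfsec finding on the block below.
#tfsec:ignore:AWS018
resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id
}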

Also create a README.md file with the following lines:

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

The docs hook will fill in the generated project documentation between these markers.
Run pre-commit with

CMD> pre-commit run -a

Check the README.md file; it should look like this:

Requirements

| Name | Version |
|------|---------|
| terraform | ~>0.15 |
| aws | 3.37.0 |
| kubernetes | 2.1.0 |
| local | 2.1.0 |
| null | 3.1.0 |
| random | 3.1.0 |
| template | 2.2.0 |

Providers

| Name | Version |
|------|---------|
| aws | 3.37.0 |
| random | 3.1.0 |

Modules

| Name | Source | Version |
|------|--------|---------|
| eks | terraform-aws-modules/eks/aws | |
| vpc | terraform-aws-modules/vpc/aws | 2.78.0 |

Resources

| Name | Type |
|------|------|
| [aws_security_group.all_worker_mgmt](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/resources/security_group) | resource |
| [aws_security_group.worker_group_mgmt_one](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/resources/security_group) | resource |
| [aws_security_group.worker_group_mgmt_two](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/resources/security_group) | resource |
| random_string.suffix | resource |
| [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/data-sources/availability_zones) | data source |
| aws_eks_cluster.cluster | data source |
| [aws_eks_cluster_auth.cluster](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/data-sources/eks_cluster_auth) | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| environment | Environment name | string | "laboratory" | no |
| private_subnets_cidr | AWS Private Subnets | list(any) | ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"] | no |
| project | Project name | string | "cosckoya" | no |
| public_subnets_cidr | AWS Public Subnets | list(any) | ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"] | no |
| region | AWS region | string | "us-east-1" | no |
| vpc_cidr | AWS VPC CIDR | string | "10.0.0.0/16" | no |

Outputs

| Name | Description |
|------|-------------|
| cluster_endpoint | Endpoint for EKS control plane. |
| cluster_id | EKS cluster ID. |
| cluster_name | Kubernetes Cluster Name |
| cluster_security_group_id | Security group ids attached to the cluster control plane. |
| config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
| kubectl_config | kubectl config as generated by the module. |
| region | AWS region |

Let's continue with the Terraform project. Now it's time to deploy!
Plan the project

CMD> terraform plan
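Optionally, the plan can be saved to a file so that the apply step executes exactly what was reviewed (the file name here is arbitrary):

CMD> terraform plan -out=eks.tfplan
CMD> terraform apply eks.tfplan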

Deploy the project

CMD> terraform apply 

Connect to the cluster and enjoy!

CMD> aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)

Running some basic commands, we can see that the cluster is up and running:

CMD> kubectl cluster-info
Kubernetes control plane is running at https://<SOME-BIG-HASH>.us-east-1.eks.amazonaws.com
CoreDNS is running at https://<SOME-BIG-HASH>.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
CMD> kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-2-138.ec2.internal   Ready    <none>   26m   v1.18.9-eks-d1db3c
ip-10-0-2-88.ec2.internal    Ready    <none>   26m   v1.18.9-eks-d1db3c
ip-10-0-3-68.ec2.internal    Ready    <none>   26m   v1.18.9-eks-d1db3c
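As an extra smoke test (not part of the original tutorial), a throwaway deployment confirms that pods actually schedule on the new worker nodes; the name and image are arbitrary:

CMD> kubectl create deployment hello --image=nginx
CMD> kubectl get pods -o wide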

And that's it. Enjoy!

P.S. As you can see, this is very similar to the AWS Terraform Learn page, with little tweaks to test some changes between versions.

I'm a very big fan of @antonbabenko's work. I recommend that everyone follow him.


Original Link: https://dev.to/cosckoya/terraform-v15-0-with-aws-eks-deployment-42hh
