June 19, 2020 06:40 pm GMT

End To End AWS Cloud Infrastructure Automation Through Terraform by HashiCorp

Starting With...

What Is Cloud Computing?

"First to mind when asked what 'the cloud' is, a majority respond its either an actual cloud, the sky, or something related to weather."

Simply put, cloud computing is the delivery of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the Internet ("the cloud") to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.

What Is AWS?


Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 212 fully featured services from data centers globally. A subsidiary of Amazon, it provides on-demand cloud computing platforms and APIs to individuals, companies, and governments on a metered pay-as-you-go basis.

In 2020, AWS comprised more than 212 services, including computing, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools, and tools for the Internet of Things.

Cloud Automation: What You Need to Know and Why It's Important


Cloud computing is boldly going where no other system has gone before. With so many organizations moving to some type of cloud platform, providers are finding more ways to create a true, fully automated cloud environment. Now, we haven't quite reached that point, but we're getting closer.

Cloud automation is a blanket term that is often used to denote specialized software, tools, and operations that help us reduce the manual effort when it comes to deploying and maintaining cloud-based IT infrastructure. Simply put, it is automating tasks programmatically.

One key reason automation is so widely embraced and used almost everywhere is that it reduces the manual effort and intervention needed to deploy and manage a set of tasks.

The Best Tools for Cloud Infrastructure Automation

Nowadays, HashiCorp Terraform is the tool most widely used across organisations to implement Infrastructure as Code (IaC).

There are four broad categories of IaC tools:

  • Ad hoc scripts
  • Configuration management tools
  • Server templating tools
  • Server provisioning tools (Terraform falls into this last category)

What is Terraform?

Alt Text

Terraform is an open source Infrastructure as Code tool created by HashiCorp for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

A declarative coding tool, Terraform enables developers to use a high-level configuration language called HCL (HashiCorp Configuration Language) to describe the desired end-state cloud or on-premises infrastructure for running an application. It then generates a plan for reaching that end-state and executes the plan to provision the infrastructure.
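For example, a handful of HCL lines are enough to declare "I want one EC2 instance" and let Terraform figure out the API calls needed to get there (a minimal sketch; the region and AMI ID simply mirror the ones used later in this post):

provider "aws" {
  region = "ap-south-1"
}

# Desired end state: one t2.micro instance running this AMI
resource "aws_instance" "web" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
}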

Terraform supports a number of public and private cloud infrastructure providers such as Amazon Web Services (AWS), IBM Cloud (formerly Bluemix), Alibaba Cloud, Google Cloud Platform, DigitalOcean, Linode, Microsoft Azure, Oracle Cloud Infrastructure, OVH, Scaleway, VMware vSphere, and Open Telekom Cloud, as well as OpenNebula and OpenStack.
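As a side note, on Terraform 0.13 and later you can also pin the provider you depend on explicitly, so everyone running the code gets a compatible plugin version (a minimal sketch; the version constraint is illustrative, not from the original setup):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"  # illustrative constraint
    }
  }
}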

Umm... Why Should One Use Terraform?

Yeah, it might sound unrealistic that we can create a whole cloud infrastructure setup without knowing any cloud commands. But I guarantee that after reading this post you will come to know the real power of Terraform.

Every cloud provider has its own beautiful and dynamic GUI portal which is user friendly and easy to use; we can go and use any of its services with just a few clicks. But nowadays everyone needs automation, and we can't automate things using only a GUI.

Also, if we have created one infrastructure on the cloud and our requirement is to create exactly the same infrastructure again, doing it through the GUI will consume time, and, being manual work, we can miss something. Here is where Infrastructure as Code (IaC) comes into play. Developers write code to describe the infrastructure; when we run this code, the complete infrastructure is created with just a single command, and if we want to destroy that infrastructure, we run a single command and the complete infrastructure is destroyed.

Therefore, I have created a powerful Terraform setup to help you achieve complete end-to-end cloud infrastructure automation, and here I chose AWS as the cloud infrastructure provider.

Before getting into the code, we need to set up a few things...

The Real Thrill Starts Now Onwards!!

Alt Text

  • To use AWS services, you must sign in to the AWS Console, and for that you need an AWS account.
    Click here to create a new AWS account.

    • Note: A credit card is normally required for an AWS account. But due to the pandemic, AWS Educate was introduced, so you may be able to sign up for an account without a credit card.
  • AWS CLI installed and added to the path.

    • Configure your AWS CLI (covered in a second...)
  • Terraform installed and added to the path.

  • In order to follow best practices, let's create a separate IAM user.

    • Keep the AWS credentials file saved (you receive it while creating the new user).
  • Quickly configuring the AWS CLI:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

The AWS CLI stores this information in a profile (a collection of settings) named default. Note that the Terraform code below uses a named profile called harsh instead; you can create one the same way by running aws configure --profile harsh.

Now Let's Set the Workflow for Our Project

  • Create a key pair and a security group that allows port 80.
  • Launch an EC2 instance.
  • In this EC2 instance, use the key pair and security group we created.
  • Launch one EBS volume and mount it on /var/www/html.
  • The developer has uploaded the code to a GitHub repository, which also contains some images.
  • Copy the GitHub repository code into /var/www/html.
  • Create an S3 bucket, copy/deploy the images from the GitHub repository into the S3 bucket, and change their permission to public readable.
  • Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
  • Create a snapshot of the EBS volume.

Starting the Terraform Cloud Infrastructure Automation!

  • First we write the website code and push it to a remote GitHub repository, then we write the .tf code that clones the website code onto the remote AWS instance, and more...
  • The website code which we want to deploy
  • We will write Terraform code to create the cloud infrastructure

Website Code

<!DOCTYPE html>
<html>
<head>
    <title>
        Slack Invitation Link - hrshmistry
    </title>
    <!-- Style to create button -->
    <style>
        .SLK {
            background-color: white;
            border: 3px solid black;
            color: green;
            text-align: center;
            display: inline-block;
            font-size: 80px;
            cursor: pointer;
        }
    </style>
</head>
<body>
    <center style="font-size:70px;color:red;font-family:'Courier New'">Hybrid Multi-Cloud</center>
    <p align="center"><img src="https://s3hrsh.s3.ap-south-1.amazonaws.com/cloud.jpg" width="300" height="300"></p>
    <center style="font-size:40px;color:blue;font-family:'Courier New'">Your Personal Slack Invitation Link</center>
    <!-- Adding link to the button on the onclick event -->
    <center>
    <button class="SLK"
        onclick="window.location.href = 'https://join.slack.com/t/hybridmulti-cloud/shared_invite/zt-etnyk2vm-gngGCm2hnk1VbOPR9nGpnw';">
        JOIN NOW
    </button>
    </center>
</body>
</html>

Terraform Code

  • Provider
# Describing Provider
provider "aws" {
  region  = "ap-south-1"
  profile = "harsh"
}
  • Variables
# Creating Variable for AMI Id
variable "ami_id" {
  type    = string
  default = "ami-0447a12f28fddb066"
}

# Creating Variable for Instance Type
variable "ami_type" {
  type    = string
  default = "t2.micro"
}

# Creating Variable for Key
variable "EC2_Key" {
  type    = string
  default = "Task1Key"
}
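A nice property of variables: the defaults above can be overridden without touching the code, for example through a terraform.tfvars file (a hypothetical override, not part of the original setup; the values below are illustrative):

# terraform.tfvars - values here override the variable defaults
ami_type = "t2.small"
EC2_Key  = "MyOtherKey"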
  • Key-Pair
# Creating tls_private_key using RSA algorithm
resource "tls_private_key" "tls_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Generating AWS Key Pair from the public key
resource "aws_key_pair" "generated_key" {
  depends_on = [
    tls_private_key.tls_key
  ]
  key_name   = var.EC2_Key
  public_key = tls_private_key.tls_key.public_key_openssh
}

# Saving Private Key PEM File locally
resource "local_file" "key-file" {
  depends_on = [
    tls_private_key.tls_key
  ]
  content  = tls_private_key.tls_key.private_key_pem
  filename = var.EC2_Key
}
  • Security-group
resource "aws_security_group" "firewall" {  depends_on = [      aws_key_pair.generated_key  ]  name         = "firewall"  description  = "allows ssh and httpd protocol"  #Adding Rules to Security Group  ingress {    description = "SSH Port"    from_port   = 22    to_port     = 22    protocol    = "tcp"    cidr_blocks = ["0.0.0.0/0"]  }  ingress {    description = "HTTPD Port"    from_port   = 80    to_port     = 80    protocol    = "tcp"    cidr_blocks = ["0.0.0.0/0"]  }   egress {    from_port   = 0    to_port     = 0    protocol    = "-1"    cidr_blocks = ["0.0.0.0/0"]  }  egress {    from_port   = 0    to_port     = 0    protocol    = "-1"    cidr_blocks = ["0.0.0.0/0"]  }  tags = {    Name = "security-group-1"  }}
  • Launch EC2 instance
resource "aws_instance" "autos" {  depends_on = [      aws_security_group.firewall  ]  ami           = var.ami_id  instance_type = var.ami_type  key_name      = var.EC2_Key  security_groups = ["${aws_security_group.firewall.name}"]  connection {    type     = "ssh"    user     = "ec2-user"    private_key = tls_private_key.tls_key.private_key_pem    host     = aws_instance.autos.public_ip  }  provisioner "remote-exec" {    inline = [      "sudo yum install httpd php git -y",      "sudo systemctl restart httpd",      "sudo systemctl enable httpd"    ]  }  tags = {    Name = "autos"    env  = "Production"  }}
  • EBS Volume
# Creating EBS volume and attaching it to the EC2 Instance
resource "aws_ebs_volume" "ebs" {
  availability_zone = aws_instance.autos.availability_zone
  size              = 1
  tags = {
    Name = "autos_ebs"
  }
}

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs.id
  instance_id  = aws_instance.autos.id
  force_detach = true
}

output "autos_public_ip" {
  value = aws_instance.autos.public_ip
}

# Saving the public IP to a local file
resource "null_resource" "print_public_ip" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.autos.public_ip} > autos_public_ip.txt"
  }
}
  • Mounting the Volume in the EC2 Instance and Cloning GitHub (the volume attached as /dev/sdh appears as /dev/xvdh inside the instance on this AMI)
resource "null_resource" "mount_ebs_volume" {  depends_on = [    aws_volume_attachment.ebs_att  ]  connection {    type     = "ssh"    user     = "ec2-user"    private_key = tls_private_key.tls_key.private_key_pem    host     = aws_instance.autos.public_ip  }  provisioner "remote-exec" {      inline = [        "sudo mkfs.ext4 /dev/xvdh",        "sudo mount /dev/xvdh /var/www/html",        "sudo rm -rf /var/www/html",        "sudo git clone https://github.com/hrshmistry/Code-Cloud.git /var/www/html/"      ]    }}
  • Creating S3 bucket
resource "aws_s3_bucket" "S3" {  bucket = "autos-s3-bucket"  acl    = "public-read"}#Putting Objects in S3 Bucketresource "aws_s3_bucket_object" "S3_Object" {  depends_on = [    aws_s3_bucket.S3  ]  bucket = aws_s3_bucket.S3.bucket  key    = "Cloud.JPG"  source = "D:/LW/Hybrid-Multi-Cloud/Terraform/tera/task/Cloud.JPG"  acl    = "public-read"}
  • Creating CloudFront with S3 Bucket Origin
locals {
  S3_Origin_Id = aws_s3_bucket.S3.id
}

resource "aws_cloudfront_distribution" "CloudFront" {
  depends_on = [
    aws_s3_bucket_object.S3_Object
  ]

  origin {
    domain_name = aws_s3_bucket.S3.bucket_regional_domain_name
    origin_id   = aws_s3_bucket.S3.id
    # OR origin_id = local.S3_Origin_Id
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "S3 Web Distribution"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.S3.id
    # OR target_origin_id = local.S3_Origin_Id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = aws_s3_bucket.S3.id
    # OR target_origin_id = local.S3_Origin_Id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.S3.id
    # OR target_origin_id = local.S3_Origin_Id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }

  tags = {
    Name        = "CF Distribution"
    Environment = "Production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  retain_on_delete = true
}
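If you want Terraform to print the generated CloudFront domain name after the apply, a small output can be added (an optional convenience, not in the original code):

# Optional: show the CloudFront domain name used in the next step
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.CloudFront.domain_name
}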
  • Changing the HTML code and adding the image URL to it
resource "null_resource" "CF_URL"  {  depends_on = [    aws_cloudfront_distribution.CloudFront  ]  connection {    type     = "ssh"    user     = "ec2-user"    private_key = tls_private_key.tls_key.private_key_pem    host     = aws_instance.autos.public_ip  }  provisioner "remote-exec" {    inline = [      "echo '<p align = 'center'>'",      "echo '<img src='https://${aws_cloudfront_distribution.CloudFront.domain_name}/Cloud.JPG' width='100' height='100'>' | sudo tee -a /var/www/html/Slack.html",      "echo '</p>'"    ]  }}
  • Creating a snapshot of the EBS volume
resource "aws_ebs_snapshot" "ebs_snapshot" {  depends_on = [   null_resource.CF_URL  ]  volume_id = aws_ebs_volume.ebs.id  tags = {    Name = "ebs_snap"  }}
  • Accessing the infrastructure
resource "null_resource" "web-server-site-on-browser" {  depends_on = [    null_resource.CF_URL  ]  provisioner "local-exec" {    command = "brave ${aws_instance.autos.public_ip}/Slack.html"  }}
  • After completing the Terraform code, only one single command is needed to deploy the whole cloud infrastructure! (If this is a fresh working directory, run terraform init once first so Terraform downloads the AWS provider plugin.)
terraform apply -auto-approve
  • To destroy the entire cloud infrastructure (note: because the CloudFront distribution sets retain_on_delete = true, terraform destroy disables it rather than deleting it):
terraform destroy -auto-approve

And with that, we have completed the end-to-end AWS cloud infrastructure automation through Terraform by HashiCorp!

  • For the complete code, visit my GitHub repository: hrshmistry / Cloud-Automation-Terraform (End To End AWS Cloud Infrastructure Automation Through Terraform by HashiCorp)




Here, I challenge you to do your own end-to-end AWS cloud infrastructure automation.

For any doubts or queries, feel free to leave a comment.

Thank you, everyone!

