April 4, 2022 11:54 pm GMT

Setting up a virtual machine to run a scheduled job - Part 1 - Terraform, Serverless Framework and AWS

Hello, I'm José Silva, backend developer at VaiVoa, and today I'm going to teach you how to set up an EC2 machine on AWS to run a cron job on a specific schedule with Terraform and the Serverless Framework.

This article is the first of a series. In the next parts, I'll explain how to use Ansible to configure an EC2 instance dynamically to run a cron job, and how to configure git to always update the code before running the job.

Introduction

  • Terraform is an IaC (Infrastructure as Code) tool that helps you provision your cloud infrastructure with code. It's useful for versioning your infrastructure, documenting it, and automating your work.

  • EC2 is the AWS service for virtual machines. There's a free tier available, so don't worry about expenses (but still remember to delete all the resources created after the tutorial).

  • The Serverless Framework helps you deploy and configure Lambda functions. We're going to write our Lambda functions in Python 3.

The full code of the project is here: Github

Architecture

Here's a picture representing the workflow:

Cron Triggers Lambda to Start EC2 Instance

Since we don't want the instance to be on 24/7, we'll rely on a Lambda function to control when the instance starts. The stop process will be handled by the instance itself. Since we don't know how long the job will take to execute (and in most cases we can't predict it), I'll encapsulate the job execution in a script that shuts down the instance once the job is finished.
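The shutdown wrapper described above can be sketched as a small shell function. This is only a sketch: the job path and halt command are assumptions, and both are parameterized so the logic can be exercised without powering anything off.

```shell
# run_job_and_halt: run the scheduled job, then power the instance off.
# Defaults are hypothetical; on the real instance the halt command
# would be "sudo shutdown -h now".
run_job_and_halt() {
  local job_cmd="${1:-/opt/tutorial/job.sh}"   # hypothetical job script path
  local halt_cmd="${2:-sudo shutdown -h now}"  # powers the instance off
  $job_cmd || echo "job failed, shutting down anyway" >&2
  $halt_cmd
}
```

In this part of the series we only start the instance; a wrapper like this becomes relevant in the later parts.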

For the scope of this part of the series, I'll focus only on starting the instance in a specific schedule.

Terraform

Let's start with our repository structure. Terraform uses the working directory to gather all the information about your infrastructure, so let's create a directory just for the Terraform config files. Every terraform command should be run inside this directory.

tutorial/
    terraform/
        main.tf
        machine.tf
        vars.tf
        outputs.tf

inside main.tf, let's put this code:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.74.3"
    }
  }
}

provider "aws" {
  region  = "your-favorite-region"
  profile = "your-aws-profile"
}
  • In the terraform block, we declare which provider we're using and pin its version. Since I'm using version 3.74.3, if you want to know more about the Terraform code in this article, check this version of the documentation.

  • In the provider block, we specify which region we're deploying our infrastructure to and which AWS profile we're using to authenticate. Your profile is defined in the credentials file inside your .aws/ folder.
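For reference, the profile lives in the credentials file under your .aws/ folder and looks like this (the profile name and keys below are placeholders):

```ini
[your-aws-profile]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```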

Now, also in main.tf, we use a data source to query the Amazon Linux 2 AMI. This is useful because the same AMI has different IDs in different regions, so you don't have to look it up in the AWS console:

data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-kernel-5.10-hvm*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  owners = ["137112412989"]
}

Amazon Linux is a Linux distribution tuned for AWS, designed to be stable, secure, and performant. It's similar to Red Hat 7, so it uses yum as its package manager, for example.

Now let's write the vars.tf file. In this file we set up all the Terraform variables used in the code. This allows us to:

  • Avoid hard-coded values, so it's easier to refactor our code.
  • Modularize our environment in case we want to deploy different configurations for development and production, for example.
variable "instance_size" {
  type = map(string)
  default = {
    "test" = "t2.micro"
    "prod" = "t2.medium"
  }
}

variable "network" {
  type = object({
    all_ipv4      = string
    all_ipv6      = string
    all_protocols = string
    all_ports     = number
  })
  default = {
    all_ipv4      = "0.0.0.0/0"
    all_ipv6      = "::/0"
    all_protocols = "-1"
    all_ports     = 0
  }
}

variable "disk" {
  type = map(number)
  default = {
    "test" = 16
    "prod" = 64
  }
}

variable "ssh_key" {
  type    = string
  default = "my-ssh-key"
}

variable "inbound_rules" {
  type = object({
    port        = number
    protocol    = string
    description = string
  })
  default = {
    port        = 22
    protocol    = "tcp"
    description = "Allow SSH"
  }
}

Here are the values we're using for the test environment. Feel free to change them as you want:

  • For the instance type we're using the free tier, so you don't spend any money on this tutorial (as long as you still have free-tier hours left out of the 750 per month).
  • For the disk size, we're using 16 GB because it's a reasonable size, but depending on the job you want to run, you might increase or decrease it.

Next comes the machine.tf code. Here I'm setting up the EC2 machine configuration and the security group configuration. I'll explain each part separately.

resource "aws_instance" "tutorial_machine" {
  ami             = data.aws_ami.amazon_linux.id
  instance_type   = var.instance_size.test
  key_name        = var.ssh_key
  security_groups = [aws_security_group.tutorial_sg.name]

  root_block_device {
    volume_size = var.disk.test
  }

  tags = {
    Name = "my_tutorial_machine"
  }
}

Here are some explanations:

  • The aws_instance resource type means it's an EC2 virtual machine.
  • tutorial_machine is the local name Terraform uses to reference the resource in code.
  • In the ami attribute, we reference the Amazon Linux 2 AMI we queried with the data source in main.tf.
  • In the tags attribute, we set what the actual name of the virtual machine will be.
  • In security_groups, we reference a security group we're about to define next in machine.tf.
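As a sketch of why the variables pay off: a production variant of the same resource could simply point at the prod keys from vars.tf. This fragment is hypothetical and not part of the tutorial's code:

```hcl
# Hypothetical prod sizing, reusing the maps defined in vars.tf
instance_type = var.instance_size.prod

root_block_device {
  volume_size = var.disk.prod
}
```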

Here's the rest of the code:

resource "aws_security_group" "tutorial_sg" {
  name        = "tutorial_sg"
  description = "Tutorial Security Group"

  ingress { # Inbound rule
    from_port        = var.inbound_rules.port
    to_port          = var.inbound_rules.port
    protocol         = var.inbound_rules.protocol
    cidr_blocks      = [var.network.all_ipv4]
    ipv6_cidr_blocks = [var.network.all_ipv6]
    description      = var.inbound_rules.description
  }

  egress { # Outbound rule
    from_port        = var.network.all_ports
    to_port          = var.network.all_ports
    protocol         = var.network.all_protocols
    cidr_blocks      = [var.network.all_ipv4]
    ipv6_cidr_blocks = [var.network.all_ipv6]
  }

  tags = {
    Name = "Allow SSH Only"
  }
}

In the console, when we create a new security group, it comes with a default outbound rule allowing all traffic to the internet. With Terraform, the security group has no default rules, so we have to specify everything we want.

In this security group we're setting two rules:

  • Allow all network traffic exiting the machine
  • Only allow traffic entering the machine through the ssh port number 22 from all hosts, both ipv4 and ipv6.

Port 22 is the default for SSH connections, which are extremely important when configuring or using a virtual machine. We won't use SSH in this part of the series, but we will later.

Feel free to open more ports if your job needs it.

Finally, we're setting up some outputs to know some basic information about our deployed ec2 instance.

output "tutorial_public_dns" {
  value = aws_instance.tutorial_machine.public_dns
}

output "tutorial_instance_id" {
  value = aws_instance.tutorial_machine.id
}

Don't forget to replace all the variable placeholders with your actual values, like the profile name, SSH key name, etc.

Now let's run some terraform commands:

  • Run terraform init inside the terraform folder. This command installs the AWS provider plugin that Terraform relies on to understand what "aws_instance" or "aws_security_group" means.
  • Then run terraform plan to get a preview of what will happen when you actually deploy your infrastructure and to catch any syntax errors in your code.
  • Now run terraform apply to deploy your machine. Terraform will show you again what will be deployed and ask you to confirm by typing "yes" in the terminal.

After confirmation, you can go to your aws console, in the ec2 instances panel, and see your tutorial instance right there up and running!

Since we don't want our instance to be up and running 24/7, let's stop the instance and move on to the final part of the tutorial.

Lambda Functions & Serverless Framework

Now let's add a lambda directory to the tutorial folder and put in it the Python script that starts the EC2 instance, along with the serverless.yml file that configures the Lambda function.

tutorial/
    terraform/
        main.tf
        machine.tf
        vars.tf
        outputs.tf
    lambda/
        main.py
        serverless.yml

Under the lambda folder, in main.py, let's write the Python code that starts the instance based on its Name tag:

import boto3


def handler(event, context):
    ec2 = boto3.client("ec2")
    response = ec2.describe_instances(
        Filters=[
            {
                'Name': "tag:Name",
                'Values': [
                    'my_tutorial_machine'
                ]
            }
        ]
    )
    instance_id = response['Reservations'][0]['Instances'][0]['InstanceId']
    response = ec2.start_instances(
        InstanceIds=[
            instance_id
        ]
    )
    print(response)


if __name__ == '__main__':
    handler({}, None)

Make sure the Name tag is the same in both main.py and machine.tf.

I'm using boto3, the AWS SDK for Python, which can be installed via pip. This SDK gives us an easier way to send API calls to AWS to start, stop, and modify EC2 instances, and much more.

The start-instance operation relies on the instance ID, which changes with every deploy. Since we're using Terraform and might deploy and redeploy our virtual machine a few times, the instance ID will change accordingly, and every time it changes we'd have to adapt our Python script. So we first query the instance information by name, which doesn't change between deploys.
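The lookup boils down to digging the instance ID out of the nested Reservations/Instances structure that describe_instances returns. Here's a self-contained sketch with a sample response (the instance ID below is made up):

```python
# Extract the first instance ID from a describe_instances-style response.
def first_instance_id(response):
    return response["Reservations"][0]["Instances"][0]["InstanceId"]


# Sample response shaped like boto3's output (the ID is a placeholder).
sample = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0abc123def456"}]}
    ]
}

print(first_instance_id(sample))  # i-0abc123def456
```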

Finally, let's write our serverless.yml, configuring the lambda runtime, stage, service name, aws region and IAM role statements.

service: tutorial-initializer

provider:
  name: aws
  stage: development
  runtime: python3.6
  region: your-favorite-region
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ec2:DescribeInstances
        - ec2:StartInstances
      Resource: "*"

functions:
  initializer:
    handler: main.handler
    events:
      - schedule:
          enabled: true
          rate: your-cron-expression
    memorySize: 128
    timeout: 30

In iamRoleStatements, we allow our Lambda function to perform the StartInstances and DescribeInstances actions on any EC2 instance.

Here are some important things to know about the serverless.yml file:

  • In the functions section of the configuration, we're saying that the initializer Lambda function will be the handler function in the main.py file.
  • The events section defines the cron schedule that will trigger the Lambda function once the time is right.
  • The timeout defines the maximum time in seconds your Lambda function is allowed to run. In this case, 30 seconds is more than enough.
  • memorySize defines how much memory your function gets, in MB. AWS uses this parameter to bill you (memory size multiplied by the total execution time), so try to keep this value low.
  • Don't forget to replace the region and cron expression placeholders with your preferred region and schedule.
  • Also, keep in mind that AWS evaluates schedule expressions in UTC, regardless of the region where your Lambda is deployed, so convert your desired local time to UTC to get the right result.
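Since AWS schedule expressions are evaluated in UTC, a quick way to translate a desired local run time is Python's zoneinfo module. The timezone below is just an example, not something from this tutorial:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Desired local run time: 06:00 in America/Fortaleza (UTC-3, no DST).
local = datetime(2022, 4, 4, 6, 0, tzinfo=ZoneInfo("America/Fortaleza"))
utc = local.astimezone(ZoneInfo("UTC"))

print(utc.hour)  # 9 -> schedule cron(0 9 * * ? *)
```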

Conclusion

You've now learned how to set up a virtual machine on AWS EC2 using the free tier and how to set up a Lambda function that starts the machine on a cron schedule.

In the next parts I'll cover configuring the machine with Ansible, setting up the cron job to start on boot, and updating the code with git.

The full code of the project is here: Github

To avoid further costs, after the completion of this tutorial, please run:

  • terraform destroy in your terraform folder, to destroy your ec2 instance and security group
  • sls remove in your lambda folder, to remove all of your deployed resources related to the serverless framework.

Disclaimer

VaiVoa encourages its developers in their process of growth and technical acceleration. Published articles do not reflect VaiVoa's opinion. Publication serves the purpose of stimulating debate.



Original Link: https://dev.to/vaivoa/setting-up-a-virtual-machine-to-run-a-scheduled-job-part-1-terraform-serverless-framework-and-aws-o43
