
Building Distributed Systems on AWS

In this article we will see how we can create a distributed services architecture using Docker containers, Node.js, a Python/Flask application, and several AWS resources

The appeal of distributed computing lies in the ability to harness the power of multiple, often parallel compute resources and take advantage of modern cloud computing offerings to enable almost unlimited scaling

Let us split our blog into 4 main parts:

  1. Architectural Overview
  2. AWS Resources
  3. Development
  4. Deployment

1. Architectural Overview

Containers are packages of software that contain all of the necessary elements to run in any environment. In this way, containers virtualize the operating system and run anywhere, from a private data center to the public cloud or even on your local machine

Our application flow:
[Image: application flow diagram]

In our project we will be using Docker containers. Think of a container as similar to a running process, except that it shares the host OS kernel, is lightweight, and can be scaled easily and rapidly. You can run many Docker containers, and each one is allocated only the resources its application needs. This is where the word 'Agility' comes in: it refers to getting features and updates done faster

Containerized Architecture Key points

  • Consistent and Isolated Environment
  • Mobility: Ability to Run Anywhere
  • Repeatability and Automation
  • Test, Roll Back and Deploy
  • Flexibility
  • Collaboration, Modularity and Scaling


Our 4 Containers

  • container-main: exposed to the public; handles API calls
  • container-process: a private container that can communicate with our main container and is not exposed to the public
  • container-consume: a queue consumer container that consumes the messages the main container produces to SQS
  • container-xray: listens to our traffic and captures HTTP calls, so that you can later audit them from the console

As we can see, each container has exactly what it needs: specific versions of a language and its libraries

2. AWS Resources

SQS
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications

In our architecture one of our containers will produce messages and another will consume them (see the producer sketch after this list):

  • Producer: Endpoints that enqueue messages into a queue
  • Consumer: Triggered when a message is passed to the queue
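
A minimal producer sketch in Node.js, assuming the aws-sdk v2 package and the MAIN_SQS_QUEUE_URL variable from our .env (the enqueue helper and the 'main' group ID are illustrative, not the project's exact code):

const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: process.env.AWS_REGION });

// enqueue a JSON payload into our FIFO queue
async function enqueue(payload) {
  await sqs.sendMessage({
    QueueUrl: process.env.MAIN_SQS_QUEUE_URL,
    MessageBody: JSON.stringify(payload),
    MessageGroupId: 'main',                        // required for FIFO queues
    MessageDeduplicationId: Date.now().toString()  // or enable content-based deduplication
  }).promise();
}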

X-Ray
AWS X-Ray is a service that helps developers analyze and debug distributed applications. You can use X-Ray to monitor application traces, including the performance of calls to other downstream components or services

  • A segment records tracing information about a request that your application serves
  • Annotations are simple key-value pairs that are indexed for use with filter expressions
  • Use segment metadata to record additional data that you want stored in the trace but don't need to use with search (a short instrumentation sketch follows this list)
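
Here is how these concepts could look in code, assuming the aws-xray-sdk package and an Express app; the segment name and route are placeholders, not the project's exact code:

const AWSXRay = require('aws-xray-sdk');
const express = require('express');

const app = express();
app.use(AWSXRay.express.openSegment('container-main')); // opens a segment per request

app.get('/api/process-data', (req, res) => {
  const segment = AWSXRay.getSegment();
  segment.addAnnotation('userType', 'demo');              // indexed: usable in filter expressions
  segment.addMetadata('debugInfo', { query: req.query }); // stored in the trace, not searchable
  res.json({ ok: true });
});

app.use(AWSXRay.express.closeSegment());
app.listen(8080);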

X-Ray Daemon
The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service

Instead of sending trace data directly to X-Ray, the SDK sends JSON segment documents to a daemon process listening for UDP traffic

The X-Ray daemon buffers segments in a queue and uploads them to X-Ray in batches
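
Since our daemon runs in a separate container rather than on localhost, the SDK has to be pointed at it, either through the AWS_XRAY_DAEMON_ADDRESS environment variable (which our compose file sets to xray:2000) or programmatically; a two-line sketch:

const AWSXRay = require('aws-xray-sdk');
AWSXRay.setDaemonAddress(process.env.AWS_XRAY_DAEMON_ADDRESS || '127.0.0.1:2000');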

CloudWatch
Amazon CloudWatch is a monitoring and management service that provides data and actionable insights for AWS

3. Development

IAM
In order to be able to use the AWS CLI you need to create an IAM user and give it the appropriate permissions. (For example, our docker compose up command will deploy/create multiple AWS resources: an ECS cluster, security groups, etc., so our IAM user needs the required permissions to be able to do so)

NodeJS Image (container-main, container-consume)

FROM node:alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "index.js"]
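
The image above expects an index.js entrypoint listening on port 8080. A minimal sketch of what it could look like, assuming Express is in package.json (the route is illustrative, not the project's actual code):

const express = require('express');

const app = express();
app.use(express.json());

// simple health-style endpoint to verify the container is reachable
app.get('/api/health', (req, res) => res.json({ status: 'ok' }));

app.listen(8080, () => console.log('container-main listening on 8080'));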

Python Image (container-process) (the Python stack is optional; our main purpose in using it is to showcase how we can mix multiple languages in a distributed architecture)

FROM python:3
WORKDIR /usr/src/app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8082
CMD ["python", "app.py"]

X-Ray Daemon Image (container-xray)
Ref: https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html

FROM amazonlinux
RUN yum install -y unzip
RUN curl -o daemon.zip https://s3.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
ENTRYPOINT ["/usr/bin/xray", "-t", "0.0.0.0:2000", "-b", "0.0.0.0:2000"]
EXPOSE 2000/udp
EXPOSE 2000/tcp

Docker compose

version: '3'
x-environment: &default-environment
  AWS_REGION: ${AWS_REGION}
  AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
  AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
  AWS_XRAY_DAEMON_ADDRESS: xray:2000
  MAIN_SQS_QUEUE_URL: ${MAIN_SQS_QUEUE_URL}
services:
  xray:
    image: ${XRAY_IMAGE}
    container_name: XRAY
    build:
      context: ./container-xray
      dockerfile: ./Dockerfile
    environment: *default-environment
    command: --local-mode
    ports:
      - "2000:2000/udp"
    networks:
      mynet:
        ipv4_address: 172.19.20.1
  main:
    image: ${MAIN_IMAGE}
    container_name: MAIN
    build:
      context: ./container-main
      dockerfile: ./Dockerfile
    depends_on:
      - xray
    ports:
      - "8080:8080"
    environment: *default-environment
    networks:
      mynet:
        ipv4_address: 172.19.10.1
  consume:
    image: ${CONSUME_IMAGE}
    container_name: CONSUME
    build:
      context: ./container-consume
      dockerfile: ./Dockerfile
    environment: *default-environment
    networks:
      mynet:
        ipv4_address: 172.19.10.2
  process:
    image: ${PROCESS_IMAGE}
    container_name: PROCESS
    build:
      context: ./container-process
      dockerfile: ./Dockerfile
    environment: *default-environment
    networks:
      mynet:
        ipv4_address: 172.19.10.3
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/16

Docker compose key features

  • Run multiple isolated environments
  • Parallel execution model
  • Compose file change detection
  • Open source

Some of the commands that we are going to use during the project:

  • docker-compose up -d --build builds, creates, starts, and attaches to service containers; it also performs change detection (-d runs in detached mode)
  • docker-compose down removes service containers and named and default networks; it leaves volumes and images by default
  • docker ps -a lists our containers
  • docker-compose config renders your Compose file with its environment variable values filled in
  • you can always use docker-compose --help to see all the options you have

Regarding our .env file: check .env.sample for all the required variables, then create a new .env file with the same keys filled in

AWS_ACCESS_KEY_ID=<account_access_key>
AWS_SECRET_ACCESS_KEY=<account_secret>
AWS_REGION=<region>
MAIN_SQS_QUEUE_URL=<sqs_fifo_url>
MAIN_IMAGE=<ecr_image_for_container_main>
PROCESS_IMAGE=<ecr_image_for_container_process>
CONSUME_IMAGE=<ecr_image_for_container_consume>
XRAY_IMAGE=<ecr_image_for_container_xray>

Create a new SQS FIFO queue
FIFO (First-In-First-Out) queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can't be tolerated. Once the queue is created, make sure to copy its URL and paste it inside our environment variables file (.env)
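
You can also create the queue programmatically instead of through the console; a hedged sketch with the aws-sdk (the queue name is a placeholder):

const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: process.env.AWS_REGION });

sqs.createQueue({
  QueueName: 'main-queue.fifo', // FIFO queue names must end in .fifo
  Attributes: {
    FifoQueue: 'true',
    ContentBasedDeduplication: 'true' // dedupe by a hash of the message body
  }
}).promise()
  .then(({ QueueUrl }) => console.log(QueueUrl)); // this value goes into MAIN_SQS_QUEUE_URL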

Communication logic going on

  • Our container-main can communicate with the private container named 'process', so we can make a regular API call to http://process:8082/api/process (AWS Cloud Map and its service discovery make this possible)
  • We can send SQS messages by using the aws-sdk
  • Our container-consume listens to our SQS queue; every time we have a message, it consumes it (see the polling sketch after this list)
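
A polling sketch for container-consume in Node.js, assuming the aws-sdk v2 package and the same MAIN_SQS_QUEUE_URL variable (the processing step is illustrative):

const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: process.env.AWS_REGION });
const QueueUrl = process.env.MAIN_SQS_QUEUE_URL;

async function poll() {
  while (true) {
    const { Messages = [] } = await sqs.receiveMessage({
      QueueUrl,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20 // long polling cuts down on empty responses
    }).promise();

    for (const message of Messages) {
      console.log('consumed:', message.Body); // process the message here
      // delete only after successful processing
      await sqs.deleteMessage({ QueueUrl, ReceiptHandle: message.ReceiptHandle }).promise();
    }
  }
}

poll().catch(console.error);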

[note] the project is open source and you can access the link at the end of this blog

4. Deployment

The first thing we need to do is push our images to Amazon Elastic Container Registry (Amazon ECR), which is an AWS managed container image registry service

  • A private repository does not offer content search capabilities and requires IAM-based authentication using AWS account credentials before allowing images to be pulled
  • A public repository has descriptive content and allows anyone, anywhere, to pull images without needing an AWS account or IAM credentials

Using the AWS CLI, you need to run four commands: first you authenticate your Docker client against ECR, then you build your Docker image, tag it, and push it. These commands can be found by clicking "View push commands" inside the ECR repository that you created

In order to deploy our application we will use the docker compose up command. By default, the Docker Compose CLI creates an ECS cluster for your Compose application, a security group per network in your Compose file on your AWS account's default VPC, and a load balancer to route traffic to your services; it also attaches a task execution IAM role with two managed policies to your ECS task definition (AmazonEC2ContainerRegistryReadOnly and AmazonECSTaskExecutionRolePolicy)

[note] make sure to check that you can run 2 tasks concurrently; if not, you need to open a request at the AWS Support Center to raise your limit

In order to deploy our architecture we need to run the following commands

docker context create ecs <your_ecs_context_name>
docker context use <your_ecs_context_name>
docker compose up

The docker context command makes it easy to export and import contexts on different machines with the Docker client installed

[note] docker-compose up builds and runs the Compose file's containers on your local machine, whereas docker compose up (with the ECS context active) deploys your Compose file to your AWS account

Once deployed, it will create these resources (you can view everything it created in the generated CloudFormation stack):

  • ECS Cluster
  • AWS Cloud Map
  • AWS Network Load Balancer (NLB)
  • Security Group
  • Target Group

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service. You can use it to run, stop, and manage containers on a cluster, launched on either Fargate or EC2 instances. The EC2 launch type provides more control, but requires you to manage, provision, scale, and patch the virtual machines. Fargate, on the other hand, is serverless, meaning you no longer have to provision, configure, and scale clusters of virtual machines to run containers. After you define the application requirements (compute and memory resources), Fargate manages the scaling needed to run the containers in a highly available environment. It can simultaneously launch thousands of containers and scale to run mission-critical applications

ECS Cluster is a logical grouping of tasks or services. Your tasks and services run on infrastructure that is registered to a cluster. A Cluster can run many services

Let us have a quick look at the image below
[Image: ECS cluster, service, and task layout]
As we can see, our cluster holds our ECS container instances, which in our case run on Fargate. Each of our four containers gets a similar box (the green one), and all four sit inside one cluster. A task lives within a service, and it can run multiple Docker containers

ECS Task Definition: your containers are defined in a task definition that you use to run an individual task, or tasks within a service. One task definition can create several identical tasks

AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and it maintains the updated location of these dynamically changing resources

Our Cloud Map:
[Image: Cloud Map namespace with our discovered services]

Network Load Balancer distributes end-user traffic across multiple cloud resources to ensure low latency and high throughput. In our example, since we exposed one of our containers on port 8080:8080, an NLB is created; had we exposed it on 80:80, an ALB would have been created instead (https://docs.docker.com/cloud/ecs-compose-examples/)

Security Group acts as a virtual firewall for your instances, controlling incoming and outgoing traffic. Inbound and outbound rules control the flow of traffic to and from your instance, respectively

Our inbound rules:
[Image: security group inbound rules]

Target Group tells a load balancer where to direct traffic to

As we can see, our architecture is decoupled, and now each part can be scaled on its own. I hope that through this blog I was able to clarify some of the main components in building distributed systems on AWS.

Later on I will add more features to the current architecture and dive deeper into specific topics
The Source code: https://github.com/awedis/containers-architecture

