
Geo-Distributed API Layer With Kong Gateway

Ahoy, Mateys!

It's been a while since I published my last article on my geo-distributed application development journey. A hot tech conference season put my development on hold, but now I'm back home and ready to share my experience building a geo-distributed API layer with Kong Gateway.

Let me take a moment to review where the project left off. This is what my geo-distributed messenger's architecture looked like:

[Image: geo-distributed messenger architecture]

I managed to automate the deployment of application instances across several cloud regions (to see how the project started and continued, here is a list of all previous articles). However, the application is not a monolith; it's composed of several microservices.

[Image: the messenger's microservices]

The Messaging microservice implements the key functionality that every messenger must possess: the ability to send messages across channels and workspaces. The Attachments microservice uploads pictures and other files. Finally, the Reminders microservice comes in handy when a user wants to ignore a message for now but get back to it later.

So now, mateys, we reach the question: "What do all microservices need?"

Well, they need to talk to each other! Microservices were not born to live in isolation. After following Kong for more than a year, I selected it to create a geo-distributed API layer that can be used as a synchronous communication layer by the microservices.

In this article, we'll review several ways to build a geo-distributed API layer with Kong. My geo-messenger needs an API layer that can withstand various cloud outages (including regional ones) and serve requests from the locations closest to the users.

So, if you're still with me on this journey, then, as the pirates used to say, "Weigh Anchor and Hoist the Mizzen!" which means, "Pull up the anchor and get this ship sailing!"

Connecting Microservices With Kong Gateway

For the sake of simplicity, I'll use two microservices as an example.

[Image: two microservices connected through Kong Gateway]

A user request always goes to the Messaging microservice first, as it's the key one. But if the user wants to share a picture in a chat channel, for example, the Messaging microservice delegates this job to its Attachments counterpart. In this case, Kong Gateway is used as the communication channel.
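Under the hood, that delegation is just an HTTP call to the local Kong proxy. Here is a minimal sketch of what the Messaging microservice effectively does (Kong's default proxy port 8000 and the file form field are assumptions on my part; the /upload route itself is configured later in this article):

  # Hypothetical delegation call: post the file to the local Kong proxy
  # (default port 8000); Kong forwards the request to the Attachments
  # microservice registered behind the matching route.
  curl -i -X POST http://localhost:8000/upload \
       -F "file=@picture.png"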

Because my microservice instances are deployed across multiple cloud regions, I have to go through the same exercise for Kong Gateway.

Lets discuss the options for running Kong in a geo-distributed way.

Option #1: Deploying Kong as a Sidecar Alongside Microservices

The most straightforward way is to deploy Kong Gateway as a sidecar on the same virtual machine (VM) as your microservices. Then you can deploy the VMs with Kong and the microservices across several cloud regions. (NOTE: you can use Kubernetes instead if you like; there are no major architectural differences.)

[Image: Kong Gateway deployed as a sidecar on each VM]

In this example, the geo-messenger uses two VM instances: one in US East and another in US West. Each VM runs an instance of the microservices and Kong Gateway. In addition, every VM has its own instance of PostgreSQL, which is used solely by Kong to store information about microservices and supported routes.

Google Cloud Storage and YugabyteDB are geo-distributed by definition. Google Cloud Storage keeps attachments that users upload through the Attachments microservice, while YugabyteDB keeps all other application data (messages, channels, reminders, etc.).

The user connects to the VM closest to their location. In this example, the user is based near New York, so their requests are routed to the VM in the US East region. This is how I achieve the lowest latency possible.

If the US East region becomes unavailable due to a region-level outage, then I'll lose the VM from that region. But the messenger will continue working without interruption because user requests will now be routed to the VM in US West.

[Image: failover to the US West region]

Show, Don't Tell!

Now, let me demonstrate this deployment option.

When I start a VM in Google Cloud, the VM executes a special startup script that installs required libraries, builds and runs the microservices, and configures Kong Gateway. These are the steps for Kong setup:

  • Download and install Kong Gateway:
  echo "deb [trusted=yes] https://download.konghq.com/gateway-3.x-ubuntu-$(lsb_release -sc)/ \  default all" | sudo tee /etc/apt/sources.list.d/kong.list  sudo apt-get update  sudo apt install --yes --force-yes kong-enterprise-edition=3.0.0.0
  • Configure Postgres and run Kong migrations:
  sudo -u postgres psql -c "CREATE USER kong WITH PASSWORD 'password'"
  sudo -u postgres psql -c "CREATE DATABASE kong OWNER kong"
  sudo touch /etc/kong/kong.conf
  echo 'pg_user = kong' | sudo tee --append /etc/kong/kong.conf
  echo 'pg_password = password' | sudo tee --append /etc/kong/kong.conf
  echo 'pg_database = kong' | sudo tee --append /etc/kong/kong.conf
  sudo kong migrations bootstrap -c /etc/kong/kong.conf
  sudo kong migrations up -c /etc/kong/kong.conf
  • Create a Kong route for attachment uploads:
  curl -i -X POST \
       --url http://localhost:8001/services/ \
       --data 'name=attachments-service' \
       --data 'url=http://127.0.0.1:8081'

  curl -i -X POST http://localhost:8001/services/attachments-service/routes \
       -d "name=upload-route" \
       -d "paths[]=/upload" \
       -d "strip_path=false"

Note that all IPs are local because Kong runs on the same VM as my microservices.
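If you want to sanity-check the setup, Kong's Admin API (port 8001) can confirm that the service and route were registered. A quick check, using the names created above:

  # Inspect the registered service and route via Kong's Admin API
  curl -s http://localhost:8001/services/attachments-service
  curl -s http://localhost:8001/routes/upload-route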

Now that you have a startup script that prepares a VM with the required configuration, you need to figure out how to deploy such VMs across multiple cloud regions.

Easy! This step can be automated as well:

  1. Call this script to create a VM instance template in the regions of your choice (a sketch of such a command appears after this list). I create templates for US West, Central, and East.
  2. Provision VMs with Kong and microservices in all those regions:
  gcloud compute instance-groups managed create ig-us-west \
      --template=template-us-west --size=1 --zone=us-west2-b
  gcloud compute instance-groups managed create ig-us-central \
      --template=template-us-central --size=1 --zone=us-central1-b
  gcloud compute instance-groups managed create ig-us-east \
      --template=template-us-east --size=1 --zone=us-east4-b
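For reference, here is a rough sketch of what the per-region template creation from step 1 might look like. The machine type, image, and startup-script file name are illustrative assumptions; the real script is linked above:

  # Hypothetical sketch of step 1: create a regional instance template whose
  # startup script installs Kong and the microservices (startup.sh is assumed)
  gcloud compute instance-templates create template-us-west \
      --machine-type=e2-standard-2 \
      --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
      --metadata-from-file=startup-script=startup.sh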

Next, make sure that the VMs are booted and running normally across the cloud regions:

[Image: VM instances running across cloud regions]
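You can verify the same from the command line; the name filter below is just an assumption based on the instance group names created earlier:

  # List the managed instance groups and their VM instances
  gcloud compute instance-groups managed list
  gcloud compute instances list --filter="name ~ ig-us-"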

Finally, pick any of the VMs and test the application by uploading a picture into a chat channel. The Messaging microservice calls the configured Kong route to delegate this task to the Attachments microservice.

[Image: a picture uploaded to a chat channel]

Easy, mateys, isn't it? Now, let's review two other alternatives for geo-distributed Kong deployments.

Option #2: Kong Standalone Deployment With Shared Database

Another approach is a better fit for applications with a significant number of microservices, or for those that need to deploy the microservices and the API layer on separate VMs (or Kubernetes nodes/pods).

[Image: standalone Kong deployment with a shared database]

As the diagram shows, we continue running the application across two cloud regions: US East and US West. We also continue using Google Cloud Storage and YugabyteDB for application data and attachments. But now, the microservice and Kong Gateway instances are deployed on standalone VMs.

Every microservice and Kong instance is location-aware. For example, when the user from New York wants to share a picture, the request goes to the Messaging instance in US East (which is closest to the user). That Messaging instance must then forward the request to the Attachments service in the same cloud region.

It does this by connecting to the Kong instance in US East and calling the /api/us/east/upload API endpoint. That endpoint is regional (note the /us/east part of the URL) and is set up to forward requests to the Attachments microservice in US East.
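Using the same Admin API calls as in option #1, such a regional route could be registered roughly like this; the hostnames here are placeholders for the US East Kong and Attachments instances:

  # Hypothetical sketch: register the US East Attachments service and a
  # regional route on the US East Kong instance (hostnames are placeholders)
  curl -i -X POST http://kong-us-east:8001/services/ \
       --data 'name=attachments-service-us-east' \
       --data 'url=http://attachments-us-east:8081/upload'

  # strip_path=true removes the regional prefix, so the Attachments
  # microservice still receives the request on its /upload endpoint
  curl -i -X POST http://kong-us-east:8001/services/attachments-service-us-east/routes \
       -d "name=upload-route-us-east" \
       -d "paths[]=/api/us/east/upload" \
       -d "strip_path=true"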

Finally, Kong stores all the information about routes and services in a shared PostgreSQL instance. If you use a managed PostgreSQL service like Google Cloud SQL, it's possible to configure PostgreSQL replica instances in a different availability zone.

Cloud SQL will fail over to the replica if the zone running the primary PostgreSQL instance becomes unavailable. The primary and replica must be in the same cloud region; presently, Google Cloud SQL doesn't tolerate region-level outages.
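For instance, a zonally resilient Cloud SQL instance for Kong could be provisioned like this (instance name and tier are assumptions; --availability-type=REGIONAL places a standby in another zone of the same region):

  # Hypothetical sketch: a Cloud SQL for PostgreSQL instance with a standby
  # replica in a second zone of the same region (zone-level HA only)
  gcloud sql instances create kong-db \
      --database-version=POSTGRES_14 \
      --tier=db-custom-2-8192 \
      --region=us-east4 \
      --availability-type=REGIONAL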

Option #3: Kong Standalone Deployment With Distributed Database

Deployment option #3 is similar to option #2; the only difference is the database where Kong Gateway stores details about routes, services, and other system information. Option #2 used PostgreSQL, while this third option uses Apache Cassandra.

[Image: standalone Kong deployment with Apache Cassandra]

Kong supports Apache Cassandra for geo-distributed use cases. A Cassandra cluster can span multiple cloud regions, distributing the load across several distant locations and withstanding the region-level outages that Google Cloud SQL (managed PostgreSQL) cannot.
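On a Kong version that still ships the Cassandra strategy (the 2.x series), the geo-distributed setup boils down to a few kong.conf settings. A sketch, with contact points and data center names as placeholders:

  # Hypothetical kong.conf fragment for a multi-region Cassandra cluster
  # (Kong 2.x settings; the Cassandra integration is deprecated, see below)
  database = cassandra
  cassandra_contact_points = cass-us-east.example.com,cass-us-west.example.com
  cassandra_keyspace = kong
  cassandra_repl_strategy = NetworkTopologyStrategy
  cassandra_data_centers = us-east:2,us-west:2
  cassandra_local_datacenter = us-east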

However, starting with Kong 4.0, PostgreSQL will be the only officially supported database. The Cassandra integration is already deprecated.

The Kong community will likely find a replacement for Cassandra in the foreseeable future. And that replacement might be YugabyteDB, which is built on the PostgreSQL source code and frequently referred to as distributed PostgreSQL.
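Since YugabyteDB speaks the PostgreSQL wire protocol (its YSQL API listens on port 5433 by default), one way to experiment, with no claim of official support, is to point Kong's existing PostgreSQL settings at a YugabyteDB node:

  # Experimental sketch: aim Kong's Postgres settings at a YugabyteDB node.
  # Kong does not officially support YugabyteDB; the host is a placeholder.
  echo 'pg_host = yugabytedb-node1.example.com' | sudo tee --append /etc/kong/kong.conf
  echo 'pg_port = 5433' | sudo tee --append /etc/kong/kong.conf
  sudo kong migrations bootstrap -c /etc/kong/kong.conf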

What's on the Horizon?

Alright, mateys! Now that I've got my microservices and API layer working across multiple cloud regions, what's next?

Well, I need to figure out how to configure the Global Cloud Load Balancer that intercepts user requests at the nearest point of presence (PoP) and then routes the requests to one of my application instances (VMs).

Follow me to be notified as soon as the next update is published! And check out the previous articles in my development journey.

