
Best Practices Design Patterns: Optimizing Amazon S3 Performance

Performance Guidelines for Amazon S3:

Measure Performance:

When optimizing performance, look at network throughput, CPU, and Dynamic Random Access Memory (DRAM) requirements.

Depending on the mix of demands for these different resources, it might be worth evaluating different Amazon EC2 instance types.

It's also helpful to look at DNS lookup time, latency, and data transfer speed using HTTP analysis tools.
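As a quick illustration, here is a minimal Python sketch that separates DNS lookup time from time to first byte and overall throughput. The bucket URL is a hypothetical placeholder, not a real endpoint.

```python
import socket
import time

import requests

# Hypothetical object URL used only for illustration.
HOST = "example-bucket.s3.amazonaws.com"
URL = f"https://{HOST}/example-key"

# Time the DNS lookup separately from the HTTP request.
start = time.perf_counter()
socket.getaddrinfo(HOST, 443)
dns_ms = (time.perf_counter() - start) * 1000

# Time to first byte, then total transfer time and throughput.
start = time.perf_counter()
resp = requests.get(URL, stream=True)
chunks = resp.iter_content(chunk_size=1 << 20)
first = next(chunks, b"")
ttfb_ms = (time.perf_counter() - start) * 1000

total_bytes = len(first) + sum(len(c) for c in chunks)
elapsed = time.perf_counter() - start

print(f"DNS lookup: {dns_ms:.1f} ms")
print(f"Time to first byte: {ttfb_ms:.1f} ms")
print(f"Throughput: {total_bytes / elapsed / 1e6:.2f} MB/s")
```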

Scale Storage Connections Horizontally:

Spreading requests across many connections is a common design pattern to horizontally scale performance.

Think of Amazon S3 as a very large distributed system, not as a single network endpoint.

You can achieve the best performance by issuing multiple concurrent requests to Amazon S3. Spread these requests over separate connections to maximize the accessible bandwidth from Amazon S3.

Amazon S3 doesn't have any limits for the number of connections made to your bucket.
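As a minimal boto3 sketch of this pattern, the following spreads GET requests for many keys across a pool of concurrent connections. The bucket and keys are hypothetical; boto3 clients are thread-safe, and max_pool_connections sizes the underlying connection pool to match the worker count.

```python
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.config import Config

# Hypothetical bucket and keys used for illustration.
BUCKET = "example-bucket"
KEYS = [f"data/part-{i:04d}" for i in range(100)]

# Size the connection pool to match the number of workers.
s3 = boto3.client("s3", config=Config(max_pool_connections=16))

def fetch(key: str) -> bytes:
    # Each worker issues its own request on its own pooled connection.
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

# Spread requests over many concurrent connections instead of fetching serially.
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(fetch, KEYS))
```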

Use Byte-Range Fetches:

Using the Range HTTP header in a GET Object request, you can fetch a byte-range from an object, transferring only the specified portion.

You can use concurrent connections to Amazon S3 to fetch different byte ranges from within the same object.

This helps you achieve higher aggregate throughput versus a single whole-object request.

Fetching smaller ranges of a large object also allows your application to improve retry times when requests are interrupted.

Typical sizes for byte-range requests are 8 MB or 16 MB. If objects are PUT using a multipart upload, it's a good practice to GET them in the same part sizes (or at least aligned to part boundaries).
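A minimal sketch of a ranged GET with boto3, assuming a hypothetical bucket and key; the Range parameter uses standard HTTP byte-range syntax.

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-bucket", "large-object.bin"  # hypothetical

PART = 8 * 1024 * 1024  # 8 MB ranges
size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]

# Fetch only the first 8 MB of the object.
first = s3.get_object(
    Bucket=BUCKET, Key=KEY, Range=f"bytes=0-{PART - 1}"
)["Body"].read()

# Ranges for the rest of the object, aligned to the same 8 MB boundaries.
ranges = [f"bytes={start}-{min(start + PART, size) - 1}"
          for start in range(PART, size, PART)]
```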

Retry Requests for Latency-Sensitive Applications:

Aggressive timeouts and retries help drive consistent latency.

Given the large scale of Amazon S3, if the first request is slow, a retried request is likely to take a different path and quickly succeed.

Combine S3 (Storage) and EC2 (Compute) in the Same AWS Region:

AWS recommends that you access the bucket from Amazon EC2 instances in the same AWS Region when possible. This helps reduce network latency and data transfer costs.

Use Amazon S3 Transfer Acceleration to Minimize Latency Caused by Distance:

Amazon S3 Transfer Acceleration manages fast, easy, and secure transfers of files over long geographic distances between the client and an S3 bucket.

Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront.

Transfer Acceleration is ideal for transferring gigabytes to terabytes of data regularly across continents.

It's also useful for clients that upload to a centralized bucket from all over the world.

You can use the Amazon S3 Transfer Acceleration Speed Comparison tool to compare accelerated and non-accelerated upload speeds across Amazon S3 Regions.
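As a sketch, enabling Transfer Acceleration and routing requests through the accelerate endpoint looks like this in boto3 (bucket and file names are hypothetical):

```python
import boto3
from botocore.config import Config

BUCKET = "example-bucket"  # hypothetical

# One-time setup: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route subsequent transfers through the accelerate endpoint
# (<bucket>.s3-accelerate.amazonaws.com) instead of the regional endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("local-file.bin", BUCKET, "remote-key.bin")
```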

Use the Latest Version of the AWS SDKs:

The SDKs provide a simpler API for taking advantage of Amazon S3 from within an application and are regularly updated to follow the latest best practices.

The SDKs also provide the Transfer Manager, which automates horizontally scaling connections to achieve thousands of requests per second, using byte-range requests where appropriate.
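In boto3, for instance, the transfer manager is driven by TransferConfig; a minimal sketch with hypothetical names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# The transfer manager splits large objects into byte-range parts and
# moves them over multiple concurrent connections.
config = TransferConfig(
    multipart_threshold=16 * 1024 * 1024,  # use ranged parts above 16 MB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MB per part
    max_concurrency=10,                    # parallel connections
)

s3 = boto3.client("s3")
# Hypothetical bucket, key, and local path.
s3.download_file("example-bucket", "large-object.bin",
                 "/tmp/large-object.bin", Config=config)
```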

You can also optimize performance when you are using HTTP REST API requests.

Performance Design Patterns for Amazon S3:

Using Caching for Frequently Accessed Content:

If a workload is sending repeated GET requests for a common set of objects, you can use a cache such as Amazon CloudFront, Amazon ElastiCache, or AWS Elemental MediaStore to optimize performance.

Successful cache adoption can result in low latency and high data transfer rates. Applications that use caching also send fewer direct requests to Amazon S3, which can help reduce request costs.

Amazon CloudFront is a fast content delivery network (CDN) that transparently caches data from Amazon S3 in a large set of geographically distributed points of presence (PoPs). When objects might be accessed from multiple Regions, or over the internet, CloudFront allows data to be cached close to the users that are accessing the objects.

Amazon ElastiCache is a managed, in-memory cache. With ElastiCache, you can provision Amazon EC2 instances that cache objects in memory. This caching can reduce GET latency by orders of magnitude and substantially increase download throughput.

AWS Elemental MediaStore is a caching and content distribution system specifically built for video workflows and media delivery from Amazon S3. MediaStore provides end-to-end storage APIs specifically for video and is recommended for performance-sensitive video workloads.
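A minimal cache-aside sketch, assuming a Redis-compatible endpoint such as ElastiCache for Redis; the endpoint, bucket, and TTL are all hypothetical:

```python
import boto3
import redis

# Hypothetical ElastiCache (Redis) endpoint and bucket.
cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)
s3 = boto3.client("s3")
BUCKET = "example-bucket"

def get_object_cached(key: str, ttl_seconds: int = 300) -> bytes:
    # Cache-aside: serve hot objects from memory, fall back to S3 on a miss.
    cached = cache.get(key)
    if cached is not None:
        return cached
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    cache.set(key, body, ex=ttl_seconds)
    return body
```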

Timeouts and Retries for Latency-Sensitive Applications:

There are certain situations where an application receives a response from Amazon S3 indicating that a retry is necessary.

If an application generates high request rates (typically sustained rates of over 5,000 requests per second to a small number of objects), it might receive HTTP 503 slowdown responses. If these errors occur, each AWS SDK implements automatic retry logic using exponential backoff. If you are not using an AWS SDK, you should implement retry logic when receiving the HTTP 503 error.

Amazon S3 automatically scales in response to sustained new request rates, dynamically optimizing performance. While Amazon S3 is internally optimizing for a new request rate, you will receive HTTP 503 responses temporarily until the optimization completes. After Amazon S3 internally optimizes performance for the new request rate, all requests are generally served without retries.
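The sketch below mirrors that retry logic with exponential backoff and jitter; it uses boto3 for brevity, though in practice the SDK's built-in retries would already handle 503 responses for you, and the delay constants are illustrative.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def get_with_backoff(bucket: str, key: str, max_attempts: int = 5) -> bytes:
    # Exponential backoff with jitter, mirroring what the AWS SDKs do
    # automatically when S3 returns 503 Slow Down.
    for attempt in range(max_attempts):
        try:
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except ClientError as err:
            status = err.response["ResponseMetadata"]["HTTPStatusCode"]
            if status != 503 or attempt == max_attempts - 1:
                raise
            # Sleep up to 0.1 * 2^attempt seconds; jitter avoids retry storms.
            time.sleep(random.uniform(0, 0.1 * 2 ** attempt))
    raise RuntimeError("unreachable")
```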

For latency-sensitive applications, Amazon S3 advises tracking and aggressively retrying slower operations. When you retry a request, we recommend using a new connection to Amazon S3 and performing a fresh DNS lookup.

When you make large variably sized requests (more than 128 MB), we advise tracking the throughput being achieved and retrying the slowest 5 percent of the requests.

When you make smaller requests (less than 512 KB), where median latencies are often in the tens of milliseconds range, a good guideline is to retry a GET or PUT operation after 2 seconds.

If your application makes fixed-size requests to Amazon S3, you should expect more consistent response times for each of these requests. In this case, a simple strategy is to identify the slowest 1 percent of requests and retry them. Even a single retry is frequently effective at reducing latency.
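One way to implement this, as a sketch: give the first attempt a tight timeout, then retry on a brand-new client so the request gets a new connection and, typically, a fresh DNS lookup. The 2-second thresholds follow the guideline above; the names are illustrative.

```python
import boto3
from botocore.config import Config
from botocore.exceptions import ConnectTimeoutError, ReadTimeoutError

# Tight timeouts make a slow first attempt fail fast; SDK retries are
# disabled here so the application controls the retry itself.
fast_cfg = Config(connect_timeout=2, read_timeout=2,
                  retries={"max_attempts": 0})

def get_latency_sensitive(bucket: str, key: str) -> bytes:
    client = boto3.client("s3", config=fast_cfg)
    try:
        return client.get_object(Bucket=bucket, Key=key)["Body"].read()
    except (ConnectTimeoutError, ReadTimeoutError):
        # A new client means a new connection, so the retry is likely to
        # take a different path through S3 and succeed quickly.
        retry = boto3.client("s3", config=fast_cfg)
        return retry.get_object(Bucket=bucket, Key=key)["Body"].read()
```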

Horizontal Scaling and Request Parallelization for High Throughput:

AWS encourages you to horizontally scale parallel requests to the Amazon S3 service endpoints. In addition to distributing the requests within Amazon S3, this type of scaling approach helps distribute the load over multiple paths through the network.

For high-throughput transfers, Amazon S3 advises using applications that use multiple connections to GET or PUT data in parallel.

You can use the AWS SDKs to issue GET and PUT requests directly rather than employing the management of transfers in the AWS SDK. This approach lets you tune your workload more directly, while still benefiting from the SDK's support for retries and its handling of any HTTP 503 responses that might occur.

When you download large objects within a Region from Amazon S3 to Amazon EC2, AWS suggests making concurrent requests for byte ranges of an object at the granularity of 8–16 MB.
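A sketch of that pattern, downloading 16 MB ranges of a hypothetical object in parallel and reassembling them in order:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.config import Config

BUCKET, KEY = "example-bucket", "big-object.bin"  # hypothetical
PART = 16 * 1024 * 1024  # 16 MB per ranged request

s3 = boto3.client("s3", config=Config(max_pool_connections=32))
size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]

def fetch_range(start: int) -> bytes:
    end = min(start + PART, size) - 1
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    return resp["Body"].read()

# pool.map preserves input order, so the parts concatenate correctly.
with ThreadPoolExecutor(max_workers=16) as pool:
    data = b"".join(pool.map(fetch_range, range(0, size, PART)))
```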

If your application issues requests directly to Amazon S3 using the REST API, we recommend using a pool of HTTP connections and re-using each connection for a series of requests. Avoiding per-request connection setup removes the need to perform TCP slow-start and Secure Sockets Layer (SSL) handshakes on each request.
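For example, with the requests library a single Session reuses pooled connections across calls; the presigned-URL setup here is illustrative.

```python
import boto3
import requests

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical
keys = [f"item-{i}" for i in range(20)]

# One Session = one connection pool: the TCP and TLS handshakes happen
# once per host instead of once per request.
session = requests.Session()
for key in keys:
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=300
    )
    body = session.get(url).content
```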

It's worth paying attention to DNS and double-checking that requests are being spread over a wide pool of Amazon S3 IP addresses.

Using Amazon S3 Transfer Acceleration to Accelerate Geographically Disparate Data Transfers:

Amazon S3 Transfer Acceleration is effective at minimizing or eliminating the latency caused by geographic distance between globally dispersed clients and a regional application using Amazon S3.

Transfer Acceleration uses the globally distributed edge locations in CloudFront for data transport.

The edge network also helps to accelerate data transfers into and out of Amazon S3. It is ideal for applications that transfer data across or between continents, have a fast internet connection, use large objects, or have a lot of content to upload.

In general, the farther away you are from an Amazon S3 Region, the higher the speed improvement you can expect from using Transfer Acceleration.

You can set up Transfer Acceleration on new or existing buckets.

The best way to test whether Transfer Acceleration helps client request performance is to use the Amazon S3 Transfer Acceleration Speed Comparison tool.

Network configurations and conditions vary from time to time and from location to location, so you are charged only for transfers where Amazon S3 Transfer Acceleration can potentially improve your upload performance.


Original Link: https://dev.to/awsmenacommunity/best-practices-design-patterns-optimizing-amazon-s3-performance-186j
