
Memoizing async functions in Javascript

This article was originally published at StackFull. If you'd like to be notified when I drop more such articles, consider subscribing to the newsletter.

Memoization is a useful concept. It helps us avoid repeating time-consuming or expensive computations once they've been done. Applying memoization to a synchronous function is relatively straightforward. This article dives into the problems, and their solutions, that come up while trying to memoize async functions. We'll use the Fetch API as the example for this exercise.

Let's start by walking through an example scenario to understand the problem we'll be solving by memoizing fetch calls. Subsequently, we'll see how the same idea can be extended to callbacks and other async operations.

Example scenario

Let's say we're building an application which lists all characters of "Rick and Morty". Something like this:

[Screenshot: character list UI showing each character's details]

We've got an API that returns a list of all characters:

GET /api/character
----------------------------------
{
  "results": [
    {
      "id": 361,
      "name": "Toxic Rick",
      "status": "Dead",
      "species": "Humanoid",
      "type": "Rick's Toxic Side",
      "gender": "Male",
      "origin": {
        "name": "Alien Spa",
        "url": "https://rickandmortyapi.com/api/location/64"
      },
      "location": {
        "name": "Earth",
        "url": "https://rickandmortyapi.com/api/location/20"
      },
      "image": "https://rickandmortyapi.com/api/character/avatar/361.jpeg",
      "episode": [
        "https://rickandmortyapi.com/api/episode/27"
      ],
      "url": "https://rickandmortyapi.com/api/character/361",
      "created": "2018-01-10T18:20:41.703Z"
    },
    // ...
  ]
}

If you notice, we've got all the data points required here (to render the details for each character, as in the screenshot above), except the "First seen in" part. We need to show the name of the first episode for the character. In the response above, we're getting the API endpoint for that episode instead of the name itself. Here's what that API looks like:

GET /api/episode/:episodeId
--------------------------------------
{
  "id": 28,
  "name": "The Ricklantis Mixup",
  "air_date": "September 10, 2017",
  "episode": "S03E07",
  "characters": [
    "https://rickandmortyapi.com/api/character/1",
    "https://rickandmortyapi.com/api/character/2",
    // ...
  ],
  "url": "https://rickandmortyapi.com/api/episode/28",
  "created": "2017-11-10T12:56:36.618Z"
}

Alright, it still looks pretty simple to render the required details using the provided APIs. We can just call /api/character, wait for its response, then call the /api/episode/:episodeId API based on the previous response, and finally collect the required info to render the result.

[Diagram: /api/character call followed by an /api/episode/:episodeId call for a single character]
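In code, a naive version of that flow could look roughly like the sketch below. renderCharacter is a hypothetical helper that draws one card; the rest follows the two APIs shown above.

// Naive flow: fetch all characters, then fetch each character's first episode, then render
fetch("https://rickandmortyapi.com/api/character")
  .then(response => response.json())
  .then(({ results }) => {
    for(let character of results){
      // character.episode[0] is the URL of the first episode the character appeared in
      fetch(character.episode[0])
        .then(response => response.json())
        .then(episode => {
          // "First seen in" comes from the episode's name
          renderCharacter(character, episode.name)
        })
    }
  })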

This approach works fine if we're trying to render just one character. Let's see what happens when we render 4 characters:

[Diagram: the same two-step call sequence repeated in parallel for 4 characters]

In this example scenario, 3 out of the 4 characters have the same first episode, so we end up making 3 separate calls to the same endpoint. That's obviously something we could avoid. We could call this endpoint just once and memoize the result, letting the subsequent 2 calls get it instantly without reaching out to the server. Also notice how the diagram above doesn't seem properly aligned. That's intentional, to show that parallel API calls may not all resolve at the same time. It's an important detail to keep in mind while memoizing data coming from async events.

Now that the problem statement is clear, let's dive into its solution.

Memoization

Let's start with memoizing a pure function. Say we have a function called getSquare, which returns the square of a given number:

function getSquare(x){
  return x * x
}

To memoize this we can do something like this:

const memo = {}

function getSquare(x){
  if(x in memo) {
    return memo[x]
  }
  memo[x] = x * x
  return memo[x]
}

So, with a few lines of code, we've memoized our getSquare function.
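For example, only the first call for a given input does the actual multiplication:

getSquare(9) // computes 9 * 9 and stores the result in memo
getSquare(9) // returns the memoized result, no recomputation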

Memoizing Fetch

Memoizing the promise:

The simplest way of memoizing fetch calls would be to keep track of the promise issued against a specific URL. Here's how that would look:

const cache = {}

function memoFetch(url){
  if(url in cache) {
    return cache[url]
  }
  cache[url] = fetch(url).then(res => res.json())
  return cache[url]
}
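With this version, repeated calls for the same URL get back the very same promise, so only one request ever reaches the network:

const p1 = memoFetch("https://rickandmortyapi.com/api/episode/28")
const p2 = memoFetch("https://rickandmortyapi.com/api/episode/28")
console.log(p1 === p2) // true -- both callers share one cached promise and one network request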

The code is fairly simple and self-explanatory. Let's look at another approach for achieving the same result, this time by memoizing the actual response instead of the underlying promise.

Memoizing the response:

Using the same concept as above, we'll try to create a function memoFetch which is a memoized version of fetch. Calling memoFetch with the same URL multiple times should ensure that the actual API is called only once. It sounds pretty simple. Let's see:

const memo = {}

function memoFetch(url){
  if(url in memo) return Promise.resolve(memo[url])

  return new Promise((resolve, reject) => {
    fetch(url)
      .then(response => response.json())
      .then(data => {
        memo[url] = data
        resolve(data)
      })
      .catch(error => reject(error))
  })
}

That was easy. But it doesn't solve the whole problem. Since fetch calls are async, what happens when the first request is still in progress and a new fetch call is issued for the same URL? We end up making two calls for the same resource. Let's try to address that problem.
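For example, with the version above, both of these calls reach the server, because the second one fires before the first response has been memoized:

memoFetch("https://rickandmortyapi.com/api/episode/28") // memo is empty, issues a request
memoFetch("https://rickandmortyapi.com/api/episode/28") // memo is still empty, issues a second request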

Handling parallel requests

We can create a new hashmap (an object or Map in JS) to keep track of which URLs are being fetched. In this hashmap, we'll keep track of all enqueued requests against the same URL, and once the API call goes through, we'll process all the items in the queue. Let's call this hashmap progressQueue.

const memo = {}
const progressQueue = {}

function memoFetch(url){
  return new Promise((resolve, reject) => {
    // if the response has already been fetched before, simply resolve with that response and exit
    if(url in memo){
      resolve(memo[url])
      return
    }
    if(!progressQueue[url]){
      // fetching a new URL, create an entry for it in progressQueue
      progressQueue[url] = [[resolve, reject]]
    } else {
      // received a new request for a URL that's still in progress,
      // enqueue this request and exit since a request is already in flight
      progressQueue[url].push([resolve, reject])
      return
    }
    fetch(url)
      .then(response => response.json())
      .then(data => {
        memo[url] = data
        // process all the enqueued items after a successful fetch
        for(let [resolver, ] of progressQueue[url]) resolver(data)
      })
      .catch(error => {
        // process all the enqueued items after a failed fetch;
        // the error is not memoized, so the URL can be retried later
        for(let [, rejector] of progressQueue[url]) rejector(error)
      })
      .finally(() => {
        // clean up progressQueue
        delete progressQueue[url]
      })
  })
}

Putting it in action

Let's try this:

const episodes = [
  "https://rickandmortyapi.com/api/episode/28",
  "https://rickandmortyapi.com/api/episode/28",
  "https://rickandmortyapi.com/api/episode/28",
  "https://rickandmortyapi.com/api/episode/13",
  "https://rickandmortyapi.com/api/episode/19"
]

// before memoization
for(let episode of episodes){
  fetch(episode)
    .then(response => response.json())
    .then(data => {
      // do something with data
      console.log('Done!')
    })
}

// after memoization
for(let episode of episodes){
  memoFetch(episode)
    .then(data => {
      // do something with data
      console.log('Done!')
    })
}

Here's how the difference looks in the network tab:

1- Before memoization:

[Screenshot: network tab, before memoization]

2- After memoization:

[Screenshot: network tab, after memoization]

Extending idea to other async operations

The second approach can be extended to other async operations as well. Let's take the example of $.ajax from jQuery. Here's how the function works for a simple GET call:

$.ajax(url, {
  success(data){
    // do something
    console.log(data)
  }
})

Here's a rough example of how we can create a memoAjax function for this case:

const cache = {}, progressQueue = {}

function memoAjax(url, config){
  if(url in cache){
    // already have a memoized result, invoke the matching callback and exit
    if(cache[url].success) config.success(cache[url].data)
    else config.error(cache[url].data)
    return
  }
  if(!progressQueue[url]){
    // fetching a new URL, create an entry for it in progressQueue
    progressQueue[url] = [config]
  } else {
    // received a new request for a URL that's still in progress,
    // enqueue this request and exit since a request is already in flight
    progressQueue[url].push(config)
    return
  }
  $.ajax(url, {
    success(data){
      // process all the enqueued items after a successful fetch
      for(let {success} of progressQueue[url]) success(data)
      cache[url] = {success: true, data}
    },
    error(errorData){
      // process all the enqueued items after a failed fetch
      for(let {error} of progressQueue[url]) error(errorData)
      cache[url] = {error: true, data: errorData}
    }
  }).always(() => {
    // clean up progressQueue; always() runs on both success and failure
    delete progressQueue[url]
  })
}

Alternatively, we can convert the async function to a promise and apply the first approach, i.e. memoize the underlying promise itself.
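For instance, here's a minimal sketch of that alternative (promiseCache and memoAjaxPromise are names made up for this example): wrap the $.ajax call in a promise, then cache that promise exactly like in the first approach.

const promiseCache = {}

function memoAjaxPromise(url){
  if(url in promiseCache) return promiseCache[url]
  // wrap the callback-style $.ajax call in a promise and memoize the promise itself
  promiseCache[url] = new Promise((resolve, reject) => {
    $.ajax(url, {
      success: resolve,
      error: reject
    })
  })
  return promiseCache[url]
}

// usage
memoAjaxPromise("https://rickandmortyapi.com/api/episode/28").then(data => console.log(data))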

Further improvement

Since we're using a memo object to keep track of memoized responses, with too many requests (each carrying a sizeable chunk of data) the size of this object may grow beyond what's ideal. To handle this, we can use a cache eviction policy such as LRU (Least Recently Used). It ensures we keep memoizing without crossing memory limits!
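As a rough sketch (MAX_ENTRIES, lruGet and lruSet are names made up for this example), a small LRU-style memo can be built on top of a Map, which remembers insertion order:

const MAX_ENTRIES = 50
const memo = new Map() // a Map iterates its keys in insertion order, oldest first

function lruGet(url){
  if(!memo.has(url)) return undefined
  // re-insert the entry so it becomes the most recently used
  const data = memo.get(url)
  memo.delete(url)
  memo.set(url, data)
  return data
}

function lruSet(url, data){
  if(memo.has(url)) memo.delete(url)
  memo.set(url, data)
  // evict the least recently used entry once we're over the limit
  if(memo.size > MAX_ENTRIES){
    const oldest = memo.keys().next().value
    memo.delete(oldest)
  }
}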


Original Link: https://dev.to/anishkumar/memoizing-fetch-api-calls-in-javascript-1d16
