
Why and How We Migrated Babylon.js to Azure

You’re working for a startup. Suddenly, that hard year of coding is paying off, and with success comes more growth and more demand for your web app to scale.

In this tutorial, I want to humbly share one of our more recent “success stories” around our WebGL open-source gaming framework, Babylon.js, and its website. We’ve been excited to see so many web gaming devs try it out. But to keep up with the demand, we knew we needed a new web hosting solution.

While this tutorial focuses on Microsoft Azure, many of the concepts apply to other solutions you might prefer. We’re also going to look at the optimizations we’ve put in place to keep the outgoing bandwidth from our servers to your browser as low as possible.

Introduction

Babylon.js is a personal project we’ve been working on for over a year now. As it’s a personal project (i.e. our own time and money), we’ve hosted the website, textures and 3D scenes on a relatively cheap hosting solution using a small, dedicated Windows/IIS machine. The project started in France, but it was quickly on the radar of several 3D and web specialists around the globe, as well as some gaming studios. We were happy about the community’s feedback, but the traffic was still manageable.

For instance, between
February 2014 and April 2014, we had an average of 7K+ users/month with an average
of 16K+ pages viewed/month. Some of the events we’ve been speaking at have
generated some interesting peaks:

Spikes in user data

But the experience on the website was still good enough. Our scenes didn’t load at stellar speed, but users weren’t complaining that much.

However, a cool guy recently decided to share our work on Hacker News. We were really happy about the news! But look at what happened to the site’s connections:

Big spike in users

Game over for our little server! It slowly stopped working, and the experience for our users was really bad. The IIS server was spending its time serving large static assets and images, and the CPU usage was too high. As we were about to launch the Assassin’s Creed Pirates WebGL experience project running on Babylon.js, it was time to switch to more scalable, professional hosting using a cloud solution.

But before
reviewing our hosting choices, let’s briefly talk about the specifics of our
engine and website:


  1. Everything
    is static on our website. We currently don’t have any server-side code
    running.

  2. Our scenes (.babylon JSON files) and textures (.png or .jpeg) can be
    very big (up to 100 MB). This means that we absolutely needed to
    activate gzip compression on our .babylon scene files. Indeed, in our
    case, the pricing is largely driven by the outgoing bandwidth.

  3. Drawing into the WebGL canvas needs special security checks. You
    can’t load our scenes and textures from another server without CORS enabled, for instance.

Credits: I’d like to give special thanks to Benjamin Talmard, one of our French Azure Technical Evangelists, who helped us move to Azure.


1. Moving to Azure Web Sites & the
Autoscale Service

As we’d like
to spend most of our time writing code and features for our engine, we
don’t want to lose time on the plumbing. That’s why we immediately decided
to choose a PaaS approach and not an IaaS one.

Moreover, we
liked Visual Studio integration with Azure. I can do almost everything from my
favorite IDE. And even if Babylon.js is hosted on GitHub, we’re using Visual Studio 2013,
TypeScript and Visual Studio Online to code our engine. As a note for your
project, you can get Visual Studio Community and an Azure Trial for free.

Moving to Azure
took me approximately five minutes:


  1. I created a new Web Site in the admin page: https://manage.windowsazure.com (could be done
    inside VS too).  

  2. I took the
    right changeset from our source code repository matching the version that was
    currently online.

  3. I right-clicked
    the Web project in the Visual Studio Solution Explorer.

Visual Studio Solution Explorer

Now here comes the awesomeness of the tooling. As I was logged into VS with the Microsoft Account bound to my Azure subscription, the wizard simply let me choose the web site to which I’d like to deploy.

Choose Web Site to deploy

No need to worry about complex authentication, connection strings or anything else.

“Next, Next, Next & Publish,” and a couple of minutes later, at the end of the uploading process for all our assets and files, the web site was up and running!

On the
configuration side, we wanted to benefit from the cool autoscale service. It
would have helped a lot in our previous Hacker News scenario.

First, your instance has to be configured in Standard mode in the Scale tab.

web hosting plan mode selection

Then, you can choose how many instances you’d like to automatically scale up to, under which CPU conditions, and also at which scheduled times.

In our case, we’ve
decided to use up to three small instances (1 core, 1.75 GB memory) and to
auto-spawn a new instance if the CPU goes over 80% of its utilization. We will
remove one instance if the CPU drops under 60%. The autoscaling mechanism is
always on in our case—we haven’t set specific scheduled times.

Size and scale settings

The idea is really to pay only for what you need during specific timeframes and loads. I love the concept. With that in place, we would have been able to handle the previous peaks without doing anything, thanks to this Azure service!

You've also got a quick view of the autoscaling history via the purple chart. In our case, since we moved to Azure, we have never gone over one instance so far. And we’re going to see below how to minimize the risk of triggering autoscaling at all.

To conclude on
the web site configuration, we wanted to enable automatic gzip compression
on our specific 3D engine resources (.babylon and .babylonmeshdata
files). This was critical to us as it could save up to 3x the bandwidth and
thus… the price.

Web Sites are
running on IIS. To configure IIS, you need to go into the web.config
file. We’re using the following configuration in our case:
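In outline, it maps the custom extensions to a MIME type and makes sure that type gets compressed. A minimal sketch of that kind of web.config (mapping the extensions to application/json is an assumption here, and some hosts only allow the httpCompression section to be changed at the server level):

    <configuration>
      <system.webServer>
        <staticContent>
          <!-- Register the custom extensions so IIS will serve them.
               Mapping them to application/json is an assumption; use
               whatever MIME type fits your setup. -->
          <mimeMap fileExtension=".babylon" mimeType="application/json" />
          <mimeMap fileExtension=".babylonmeshdata" mimeType="application/json" />
        </staticContent>
        <!-- Turn on static (and dynamic) compression for the site. -->
        <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        <!-- If application/json is not already in the server's list of
             compressed types, it has to be added here; some hosts lock
             this section at the server level. -->
        <httpCompression>
          <staticTypes>
            <add mimeType="application/json" enabled="true" />
          </staticTypes>
        </httpCompression>
      </system.webServer>
    </configuration>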

This solution
is working pretty well, and we even noticed that the time to load our scenes has
been reduced compared to our previous host. I’m guessing this is thanks to the
better infrastructure and network used by Azure datacenters.

However, I had been thinking about moving to Azure for a while. And my first idea wasn’t to let web site instances serve my large assets. Since the beginning, I had been more interested in storing my assets in blob storage, which is better designed for that. It would also offer us a possible CDN scenario.


2. Moving Assets Into Azure Blob
Storage, Enabling CORS, Gzip Support & CDN

The primary reason for using blob storage in our case is to avoid loading the CPU of our web site instances just to serve assets. If everything except a few HTML, JavaScript and CSS files is served via blob storage, our web site instances will rarely need to autoscale.

But this raises
two problems to solve:


  1. As the
    content will be hosted on another domain name, we will fall into the
    cross-domain security problem. To avoid that, you need to enable CORS on the
    remote domain
    (Azure Blob Storage).

  2. Azure Blob Storage doesn’t support automatic gzip compression. And we
    don’t want to lower the web site’s CPU usage if, in exchange, we pay three
    times the price because of the increased bandwidth!

Enabling CORS on Blob Storage

CORS on blob storage has been supported for a few months now. This article, Windows Azure Storage: Introducing CORS, explains how to use the Azure APIs to configure CORS. On my side, I didn’t want to write a small app to do that. I found one already available on the web: Cynapta Azure CORS Helper – Free Tool to Manage CORS Rules for Windows Azure Blob Storage.
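For reference, if you do want to do it from code, a minimal sketch with the classic .NET storage client looks roughly like this (the connection string and the allowed origin are placeholders):

    // Sketch: enable a GET-only CORS rule on the blob service using the
    // classic Microsoft.WindowsAzure.Storage SDK. The connection string
    // and the allowed origin below are placeholders.
    using System.Collections.Generic;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Shared.Protocol;

    class EnableBlobCors
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your storage connection string>");
            var blobClient = account.CreateCloudBlobClient();

            // Read the current service properties, replace the CORS rules
            // and write them back.
            ServiceProperties properties = blobClient.GetServiceProperties();
            properties.Cors.CorsRules.Clear();
            properties.Cors.CorsRules.Add(new CorsRule
            {
                AllowedOrigins = new List<string> { "http://www.babylonjs.com" },
                AllowedMethods = CorsHttpMethods.Get,
                AllowedHeaders = new List<string> { "*" },
                ExposedHeaders = new List<string> { "*" },
                MaxAgeInSeconds = 3600
            });
            blobClient.SetServiceProperties(properties);
        }
    }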

I then just enabled support for GET and the proper headers on my container. To check that everything works as expected, simply open your F12 developer tools and check the console logs:

F12 console logs

As you can see,
the green log lines imply that everything is working well.

Here is a
sample case where it will fail. If you try to load our scenes from our blob
storage directly from your localhost machine (or any other domain), you’ll get
these errors in the logs:

Console errors

In conclusion, if you see that your calling domain is not found in the “Access-Control-Allow-Origin” header, with an “Access is denied” error just after that, it’s because you haven’t set your CORS rules properly. It is very important to control your CORS rules; otherwise, anyone could use your assets, and thus your bandwidth, costing you money without you even knowing it!

Enabling Gzip Support on Our Blob Storage

As I mentioned before, Azure Blob Storage doesn’t support automatic gzip compression. This also seems to be the case with competitors’ solutions like S3. You’ve got two options to work around that:



  1. Gzip the files yourself on the client before uploading, upload them to
    blob storage using your classic tools, and set the content-encoding header
    to gzip. This solution works, but only for browsers supporting gzip (is
    there still a browser not supporting gzip anyway?).



  2. Gzip the files yourself on the client and upload two versions to blob
    storage: one with the default .extension and one with .extension.gzip,
    for instance. Set up a handler on the IIS side that catches the HTTP request
    from the client, checks whether the accept-encoding header is set to gzip,
    and serves the appropriate file based on that support. You’ll find more details
    on the code to implement in this article: Serving GZip Compressed Content from the Azure CDN.

In our case, I don’t know of any browser that supports WebGL but not gzip compression. So if the browser doesn’t support gzip, there’s no real interest in going further, as this probably means that WebGL is not supported either.

I’ve therefore chosen the first solution. As we don’t have a lot of scenes and we’re not
producing a new one every day, I’m currently using this manual process:


  1. Using 7-Zip, I compress the .babylon files on my machine using gzip
    encoding, with the “compression level” set to “fastest”. The other
    compression levels seem to generate issues in my tests.

  2. I upload the file using CloudBerry Explorer for Microsoft Azure
    Cloud Storage.

  3. I manually set the HTTP header content-encoding to gzip with CloudBerry.

manually set the HTTP header content-encoding to gzip with CloudBerry

I know what you’re thinking. Am I going to do that for all my files?! No, you could build a tool or post-build script to automate it. For instance, here is a little command-line tool I’ve built:
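In outline, it gzips each file in memory and then uploads it with the content-encoding header set to gzip. Here is a minimal sketch along those lines using the classic .NET storage SDK (the connection string, container name and content type are placeholder assumptions, and the wildcard handling used in the examples below is omitted):

    // Sketch of a gzip-and-upload console tool for Azure Blob Storage.
    // Usage: UploadAndGzipFilesToAzureBlobStorage <blob folder> <local file>
    using System;
    using System.IO;
    using System.IO.Compression;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class UploadAndGzipFilesToAzureBlobStorage
    {
        static void Main(string[] args)
        {
            string blobFolder = args[0];   // e.g. Scenes/Espilit
            string localFile = args[1];    // e.g. a .babylon file on disk

            var account = CloudStorageAccount.Parse("<your storage connection string>");
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("wwwbabylonjs"); // placeholder

            // Gzip the file into memory before uploading.
            using (var compressed = new MemoryStream())
            {
                using (var source = File.OpenRead(localFile))
                using (var gzip = new GZipStream(compressed, CompressionMode.Compress, leaveOpen: true))
                {
                    source.CopyTo(gzip);
                }
                compressed.Position = 0;

                var blob = container.GetBlockBlobReference(blobFolder + "/" + Path.GetFileName(localFile));

                // Tell browsers the payload is gzip-encoded JSON.
                blob.Properties.ContentEncoding = "gzip";
                blob.Properties.ContentType = "application/json";

                blob.UploadFromStream(compressed);
                Console.WriteLine("Uploaded " + blob.Name);
            }
        }
    }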

To use it, I could do the following:

UploadAndGzipFilesToAzureBlobStorage Scenes/Espilit C:\Boulot\Babylon\Scenes\Espilit\*.babylon*

to push a scene containing multiple files (our incremental scenes with multiple .babylonmeshdata files).

Or simply:

UploadAndGzipFilesToAzureBlobStorage Scenes/Espilit C:\Boulot\Babylon\Scenes\Espilit\Espilit.babylon

to push a single file.

To check that gzip was working as expected with this solution, I use Fiddler. Load your content from your client machine and check in the network traces whether the content returned is really compressed and can be decompressed:

Fiddler Web Debugger
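In the response headers, the key thing to look for is the Content-Encoding header we set earlier, together with a transferred size much smaller than the raw .babylon file:

    HTTP/1.1 200 OK
    Content-Type: application/json
    Content-Encoding: gzip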

Enabling CDN

Once you’ve
done the two previous steps, you just need to click on a single button in the
Azure administration page to enable CDN and map it to your blob storage:

enable CDN and map it to your blob storage

It’s that simple! In my case, I simply need to change the following URL: https://yoda.blob.core.windows.net/wwwbabylonjs/Scenes to https://az612410.vo.msecnd.net/wwwbabylonjs/Scenes. Note that you can map this CDN endpoint to a custom domain of your own if you want to.

Thanks to that, we are able to serve our 3D assets to you very quickly, as you’ll be served from one of the node locations listed here: Azure Content Delivery Network (CDN) Node Locations.

Our web site is currently hosted in the North Europe Azure datacenter. But if you’re coming from Seattle, you’ll only ping that server to download our basic index.html, index.js and index.css files and a couple of screenshots. All the 3D assets will be served from the Seattle node right near you!

Note: All our demos
are using the fully optimized experience (blob storage using gzip, CDN and DB
caching).


3. Using HTML5 IndexedDB to Avoid
Re-Downloading the Assets

Optimizing loading times and controlling outgoing bandwidth costs is not just a server-side matter. You can also build some client-side logic to optimize things. Fortunately, we’ve done that since v1.4 of our Babylon.js engine. I’ve explained in great detail how I implemented the support for IndexedDB in this article: Using IndexedDB to handle your 3D WebGL assets: sharing feedbacks & tips of Babylon.JS. And you’ll find how to activate it in Babylon.js on our wiki: Caching the resources in IndexedDB.

Basically, you
just have to create a .babylon.manifest file matching the name of the .babylon
scene, and then set what you’d like to cache (textures and/or JSON scene). That’s
it.
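For instance, for an Espilit.babylon scene, the manifest would be a small JSON file named Espilit.babylon.manifest along these lines (bump the version number whenever you update the scene so that clients re-download it):

    {
      "version": 1,
      "enableSceneOffline": true,
      "enableTexturesOffline": true
    }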

For instance,
check what’s going on with the Hill Valley demo scene. The first time you load it, here are the requests sent:

Requests sent

153 items and 43.33 MB received. But if you’ve agreed to let babylonjs.com “use additional storage on your computer,” here is what you’ll see the second time you load the same scene:

Only one request shown

1 item and 348 bytes! We’re just checking whether the manifest file has changed. If not, we load everything from the DB and save 43+ MB of bandwidth.

For instance, this approach is being used in the Assassin’s Creed Pirates game:

Assassins Creed Pirates game

Let’s think about
that:


  • The game
    launches almost immediately after it has been loaded once, as the assets are
    served directly from the local DB.


  • Your web storage is less stressed and less bandwidth is used—costing you less money!

Now that will
satisfy both your users and your boss!

This article is part of the web dev tech series from Microsoft. We’re excited to share Microsoft Edge and the new EdgeHTML rendering engine with you. Get free virtual machines or test remotely on your Mac, iOS, Android, or Windows device at https://dev.modern.ie/.

