Scott Hanselman

Cloud Power: How to scale Azure Websites globally with Traffic Manager

May 06, 2014 Posted in Azure

The "cloud" is one of those things that I totally get and totally intellectualize, but it still consistently blows me away. And I work on a cloud, too, which is a little ironic that I should be impressed.

I guess part of it is historical context. Today's engineers get mad if a deployment takes 10 minutes or if a scale-out operation has them waiting five. I used to have multi-hour builds, and a scale-out operation involved a drive over to PC Micro Center, or worse yet, having a Cisco engineer fly in to configure a load balancer. Certainly engineers in the generation before mine could lose hours with a single punch card mistake.

It's the power that impresses me.

And I don't mean CPU power, I mean the power to build, to create, to achieve, in minutes, globally. My, that's a lot of comma faults.

Someone told me once that the average middle class person is more powerful than a 15th century king. You eat on a regular basis, can fly across the country in a few hours, you have antibiotics and probably won't die from a scratch.

Cloud power is that. Here's what I did last weekend that blew me away.

Here's how I did it.

Scaling an Azure Website globally in minutes, plus adding SSL

I'm working on a little startup with my friend Greg, and I recently deployed our backend service to a small Azure website in "North Central US." I bought a domain name for $8 and set up a CNAME to point to this new Azure website. Setting up custom DNS takes just minutes of course.
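In zone-file terms, that record is just an alias from my subdomain to the site's azurewebsites.net hostname, something like this (the TTL here is an illustrative placeholder):

; point the hub subdomain at the single North Central US site
hub.mystartup.com.  3600  IN  CNAME  mystartup-northcentralus.azurewebsites.net.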

[Image: CNAME Hub DNS]

Adding SSL to Azure Websites

I want to run my service traffic over SSL, so I headed over to DNSimple, where I host my DNS, and bought a wildcard SSL certificate for *.mydomain.com for only $100!
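(In case you're wondering where the private key comes from: the key and the certificate signing request you hand to the certificate authority can be generated with OpenSSL, something like this, with the filenames as placeholders:)

openssl req -new -newkey rsa:2048 -nodes -keyout myprivate.key -out myrequest.csr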

[Image: Active SSL Certs]

Adding the SSL certificate to Azure is easy: you upload it from the Configure tab of your Azure Website, then bind it to your site.

[Image: SSL Bindings]

Most SSL certificates are issued as a *.crt file, but Azure and IIS prefer *.pfx. I just downloaded OpenSSL for Windows and ran:

openssl pkcs12 -export -out mysslcert.pfx -inkey myprivate.key -in myoriginalcert.crt

Then I upload mysslcert.pfx to Azure. If you have intermediate certificates, then you might need to include those as well.
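For example, if your CA handed you an intermediate bundle, OpenSSL's -certfile option folds it into the same .pfx (intermediate.crt is a placeholder filename):

openssl pkcs12 -export -out mysslcert.pfx -inkey myprivate.key -in myoriginalcert.crt -certfile intermediate.crt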

This gets me a secure connection to my single web server, but I need multiple servers, as my beta testers in Asia and Europe have complained that my service is slow for them.

Adding multiple global Azure Website locations

It's easy to add more websites, so I made two more, spreading them out a bit.

[Image: Multiple locations]

I use Git deployment for my websites, so I added two extra named remotes in Git. That way I can deploy like this:

>git push azure-NorthCentral master
>git push azure-SoutheastAsia master
>git push azure-WestEurope master
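For reference, each of those remotes is one git remote add; the deployment URLs below are placeholders, and the real ones come from each site's dashboard:

>git remote add azure-NorthCentral https://me@mystartup-northcentralus.scm.azurewebsites.net:443/mystartup-northcentralus.git
>git remote add azure-SoutheastAsia https://me@mystartup-southeastasia.scm.azurewebsites.net:443/mystartup-southeastasia.git
>git remote add azure-WestEurope https://me@mystartup-westeurope.scm.azurewebsites.net:443/mystartup-westeurope.git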

At this point, I've got three web sites in three locations but they aren't associated together in any way.

I also added a "Location" configuration name/value pair for each website so I could put the location at the bottom of the site to confirm when global load balancing is working just by pulling it out like this:

location = ConfigurationManager.AppSettings["Location"];

I could also potentially glean my location by examining environment variables like WEBSITE_SITE_NAME, which holds my application name; I made each site's name match its location.
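Put together, a minimal sketch of that footer lookup might look like this (DataCenterInfo is a hypothetical helper, not something in the framework):

using System;
using System.Configuration;

public static class DataCenterInfo
{
    // Prefer the explicit "Location" app setting configured per site;
    // fall back to the WEBSITE_SITE_NAME environment variable that
    // Azure Websites sets, since each site's name matches its location.
    public static string GetCurrentLocation()
    {
        return ConfigurationManager.AppSettings["Location"]
            ?? Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME");
    }
}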

Now I bring these all together by setting up a Traffic Manager in Azure.

[Image: Traffic Manager]

I change my DNS CNAME to point to the Traffic Manager, NOT the original website. Then I make sure the Traffic Manager knows about each of the Azure Website endpoints.

Then I make sure that my main CNAME is set up in my Azure Website, along with the Traffic Manager domain. Here's my DNSimple record:

[Image: DNSimple DNS record]
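In zone-file terms, the record from earlier now aliases the Traffic Manager rather than any single site (TTL again a placeholder):

; the hub now resolves via Traffic Manager, which picks a data center
hub.mystartup.com.  3600  IN  CNAME  mystartup.trafficmanager.net.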

And here's my Azure website configuration:

[Image: Azure Website Configuration]

Important Note: You may be thinking, hang on, I thought there was already load balancing built into Azure Websites? It's important to remember that there's the load balancing that selects which data center, and there's the load balancing that selects an actual web server within a data center.
Also, you can choose between straight round-robin, failover (sites fail over between data centers), or Performance, for when you have sites in different geographic locations and you want the "closest" one to the user. That's what I chose. It's all automatic, which is nice.

[Image: Azure Traffic Manager]

Since the Traffic Manager is just going to resolve to a specific endpoint and all my endpoints already have a wildcard SSL, it all literally just works.

When I run nslookup against my hub's hostname, I get something like this:

>nslookup hub.mystartup.com
Server: ROUTER
Address: 10.71.1.1

Non-authoritative answer:
Name: ssl.mystartup-northcentralus.azurewebsites.net
Address: 23.96.211.345
Aliases: hub.mystartup.com
mystartup.trafficmanager.net
mystartup-northcentralus.azurewebsites.net

As I'm in Oregon, I get the closest data center. I asked friends via Skype in Australia, Germany, and Ireland to test and they each got one of the other data centers.

I can test for myself by using https://www.whatsmydns.net and seeing the different IPs from different locations.
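You can approximate the same check from your own machine by asking a specific public resolver directly, for example Google's 8.8.8.8:

>nslookup hub.mystartup.com 8.8.8.8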

[Image: Global DNS]

This whole operation took about 45 minutes, and about 15 minutes of that was waiting for DNS to propagate.

In less than an hour, I went from a small prototype in a data center in Chicago to a service scaled out to data centers globally, with SSL added.

Magical power.

Sponsor: Big thanks to Aspose for sponsoring the blog feed this week. Aspose.Total for .NET has all the APIs you need to create, manipulate and convert Microsoft Office documents and a host of other file formats in your applications. Curious? Start a free trial today.


May 06, 2014 5:45
Awesome! But... why you no use `CloudConfigurationManager.GetSetting()`?!
May 06, 2014 6:08
This is nice... awesome! Thanks.
May 06, 2014 6:09
Awesome!
May 06, 2014 6:56
Mike - Because this is a Website, not a cloud service.
May 06, 2014 10:04
You didn't quite touch on this point, and it's one question I know some Ops folks may have:

You installed the SSL cert in North Central US to start; did you also install it in the other regions, or were those different certs? In the end, did you serve the certs from the load-balanced servers, or did you reconfigure the load balancer to serve the SSL cert?


Great article.
May 06, 2014 10:35
Jeff, from what I read Scott uses a wildcard cert, hence he can serve it up from the different websites around the world.

//Morten
May 06, 2014 11:26
That's a good point. Shouldn't it work with a normal, non-wildcard cert too?
Thanks for sharing this Scott.

- Roland
May 06, 2014 12:29
Yes, sorry about that. I got a wildcard for convenience. It's not required. I installed it everywhere at the website level. The traffic manager doesn't know about it.
May 06, 2014 12:44
What sort of costs are involved with hosting a site like this?
May 06, 2014 12:44
Scott - Question: What about data availability in the other locations when using something like Azure Table Storage (not unique to ATS, just using it as an example)? Initially the site is in North Central, along with the storage account and table storage data. Once you scale it out to Europe and SE Asia, is there a way to also have the data available in those locations? If not, there seem to be at least two concerns that impact performance and cost:

• Wouldn't there be increased latency and reduced performance when accessing and querying table storage data between Europe/Asia/wherever and the North Central data center?

• If there is no way to have the data within the serving DC, wouldn't charges now apply for data transfer between data centers, specifically for every request out of Europe/Asia, plus the cost of sending the data out of the North Central DC?

I am hoping you or someone is able to point out a solution for the above issues, or better still that I am wrong or confused, because needing to access data is not an uncommon scenario :)
BB
May 06, 2014 12:49
Just to ensure that they are all in sync, couldn't you have used David Ebbo's site extension to replicate the site to the other sites (although this may be expensive in the long run), or just let them all fetch from the same Git repo? :)

//Dennis
May 06, 2014 15:14
Scott, how would you handle the data side of things in this scenario? E.g. I'm assuming you are using Azure SQL Server or something. I can't quite fathom how the data replication and access should work in a geo load-balanced environment.
May 06, 2014 17:37
Hi, I have the same question as Beyers. It's great to scale the websites across the globe, but what if the data is dependent and must be in sync? It's too much of a performance hit to go across different data centers for SQL Azure. Would be great to get your thoughts on this.

Great article btw.

Isuru
May 06, 2014 19:08
Hi Scott,

I have the same question as BB and Beyers Cronje.

If I have a SQL Azure database in the same data center as the original site, what would be the best approach to replicate to the other two? I don't think that having the server in Asia query the server in North Central US is going to be that good for performance.
May 06, 2014 22:37
I always see web roles + worker roles vs. websites + webjobs.

My questions are:
When should I use one or the other?
Do I need to use them together, or is there a case where I should use websites with worker roles, for example?

If I put all of the updates in the webjobs/worker roles, and I have one storage account for each website, should I use the secondary storage connection string, or should I replicate the data between the storage accounts manually?
May 07, 2014 0:18
Azure WebSites is super sweet.

As for deploying to multiple sites - you could, instead of pushing straight from your local Git repo, set up deployment from GitHub/Bitbucket(/Dropbox?), so that you only push once and have webhooks take care of publishing the newest and sweetest code to all locations!

Azure WebSites *is* sweet.
May 07, 2014 18:15
Scott, any reason why you didn't use Kudu/SCM extensions to globally sync the code? I learned about it at Build 2014 and leverage the heck out of it. I've found that it reduces the possibility of some "oops" errors when doing manual deployments to multiple nodes.
May 09, 2014 19:10
@Scott - any ideas/insights regarding the questions about data (table storage & other) being synced in the other data centers? The increased cost for data leaving a location and the reduced speed of access across DCs appear to be a problem with global scaling.
BB
May 10, 2014 7:47
Like the guys above, I would love to hear your thoughts re: the persistence layer. In my case, horizontally partitioning the DBs and having multiple instances in each of the regions works well performance- and scalability-wise (for my specific use case). If it didn't, I would probably run a distributed cache instance (memcached is my preference) in each region and pre-emptively populate my queries and domain objects out of process from a single (region-wise) SQL Server cluster. Admittedly, write executions would still be delayed; however, I prioritize read performance (searches, web page views) over data entry, where people typically accept (or are at least more tolerant of) a slightly longer delay. And again, I'd pump messages and run any heavy DB-related activity out of process where permitted. Please show me the light, Scott (kreloses, on a bus from Malaysia to Singapore, saving his sanity with your blog).
May 16, 2014 20:34
Scott,

Just curious what you're using on the backend? SQL Database? Or are you running your own SQL Server VMs? Or are you using something NoSQL like Mongo?

Just curious what route you went for persistence.
May 18, 2014 14:56
Great article showing how simple it is to scale out across data centres, but as others have said, surely the tricky bit is the state management - in my case primarily SQL Azure. One assumes you either sync the data continuously, or rely on a single instance and incur bandwidth expense and a potential bottleneck. I guess the best solution will come down to individual requirements (performance vs. availability vs. cost) and existing application architectures.
June 10, 2014 8:01
I agree, BB - Scott, can you please answer his question? Global Traffic Manager is useless if the back-end data is required to be kept synchronized.

My assumption is that I need to employ some method of data synchronization on the back-end, or try to use front-end caching solutions to keep as much data at the edge as possible, and perhaps leave the profile/order data at one location and keep the fluff up front.

D.
June 12, 2014 16:33
Agreed - how are storage and SQL Azure handled in this? I have a single website which accesses a single DB and a single storage account, and I just want a list of regions with checkboxes and a big fat 'activate 600ms page loads around planet earth now' button. You could always add other planets later.
June 12, 2014 19:44
Options I see:

1) Create a website and SQL DB for each region, then create a sync group for the SQL DBs, then add a CDN that uses your storage account, then do the Traffic Manager stuff above.

2) As above, but keep a single DB and create VNETs between the regions.

In both cases you could also use the new Redis cache on top.

In our situation it looks like there may be some room for reducing costs by choosing smaller website sizes and a locally redundant SQL DB (in the case of multiple DBs + a sync group) now that traffic will be load balanced?

Or pay for the tech support package. BUT a follow-up post on the data side of things would obviously be well received.
July 04, 2014 20:38
You never say how you generated the original CSR or where you got the private .key file.
July 11, 2014 17:44
Great article, but I would like to hear more about your thoughts on a solution to the SQL Azure question. Having fast websites is great, but what about the SQL database?

Thanks,

Karl
