Scott Hanselman

Trying Redis Caching as a Service on Windows Azure

June 25, '14 | Posted in Azure


First, if you already have an MSDN subscription (through your work, whatever) make sure to link your MSDN account and an Azure account, otherwise you're throwing money away. MSDN subscribers get between US$50 and US$150 a month in free Azure time, plus a 33% discount on VMs and 25% off Reserved Websites.

Next, log into the Azure Preview Portal at https://portal.azure.com. Then go to New | Redis Cache to make a new instance. The Redis Cache is in preview today and pricing details are here. Both 250 MB and 1 GB caches are free until July 1, 2014, so you've got a week to party hard for free.


Of course, if you're a Redis expert, you can (and always could) run your own VM with Redis on it. There are two "Security Hardened" Ubuntu VMs with Redis at the MS Open Tech VMDepot that you could start with.

I put one Redis Cache in Northwest US where my podcast's website is.  The new Azure Portal knows that these two resources are associated with each other because I put them in the same resource group.


There's Basic and Standard. Similar to Websites' "basic vs. standard," it comes down to this: Standard you can count on; it has an SLA and replication set up. Basic doesn't. Both have SSL, are dedicated, and include auth. I'd think of Standard as "I'm serious about my cache" and Basic as "I'm messing around."

There are multiple caching services (or Cache as a Service) on Azure.

  • Redis Cache: Built on the open source Redis cache. This is a dedicated service, currently in Preview.
  • Managed Cache Service: Built on AppFabric Cache. This is a dedicated service, currently in General Availability.
  • In-Role Cache: Built on AppFabric Cache. This is a self-hosted cache, available via the Azure SDK.

Having Redis available on Azure is nice since my startup MyEcho uses SignalR and SignalR can use Redis as the backplane for scaleout.

[Image: Redis server managing SignalR state]
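
If you're wiring that up yourself, it's a one-liner at startup. Here's a minimal sketch, assuming the Microsoft.AspNet.SignalR.Redis package; the host, password, and event key here are placeholders:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Point SignalR at Redis as its backplane before mapping hubs.
        // The event key prefixes the pub/sub channel the backplane uses.
        GlobalHost.DependencyResolver.UseRedis(
            "contoso5.redis.cache.windows.net", 6379, "password", "MyEchoApp");
        app.MapSignalR();
    }
}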

Marc Gravell (with a "C") over at StackExchange/StackOverflow has done us all a service with the StackExchange.Redis client for .NET on NuGet. Getting stuff in and out of Redis using .NET is very familiar to anyone who has used a distributed Key Value store before.

  • BONUS: There's also ServiceStack.Redis from https://servicestack.net that includes both the native-feeling IRedisNativeClient and the more .NET-like IRedisClient. ServiceStack also supports Redis 2.8's new SCAN operations for cursoring around large data sets.
Here's what connecting to the cache and working with it looks like with StackExchange.Redis (note the added using directive):

using StackExchange.Redis;

ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

IDatabase cache = connection.GetDatabase();

// Perform cache operations using the cache object...
// Simple put of integral data types into the cache
cache.StringSet("key1", "value");
cache.StringSet("key2", 25);

// Simple get of data types from the cache
string key1 = cache.StringGet("key1");
int key2 = (int)cache.StringGet("key2");

In fact, the ASP.NET team announced just last month the ASP.NET Session State Provider for Redis Preview Release that you may have missed. Also on NuGet (as a -preview) this lets you point the Session State of your existing (perhaps legacy) ASP.NET apps to Redis.
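
Pointing an app at it is just a web.config change. Here's a sketch, assuming the preview package's Microsoft.Web.Redis.RedisSessionStateProvider type name; the host and accessKey values are placeholders:

<sessionState mode="Custom" customProvider="RedisSessionStateStore">
  <providers>
    <!-- host and accessKey are placeholders for your cache's values -->
    <add name="RedisSessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="contoso5.redis.cache.windows.net"
         accessKey="..."
         ssl="true" />
  </providers>
</sessionState>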

After pushing and pulling data out of Redis for a while, you'll notice how nice the new dashboard is. It gives you a great visual sense of what's going on with your cache. You see CPU and Memory Usage, but more importantly Cache Hits and Misses, Gets and Sets, as well as any extraordinary events you need to know about. As a managed service, though, there's no need to sweat the VM (or whatever) that your cache is running on. It's handled.



Perhaps you're interested in Redis but you don't want to run it on Azure, or perhaps even on Linux. You can run Redis via MSOpenTech's Redis on Windows fork. You can install it from NuGet, Chocolatey, or download it directly from the project's GitHub repository. If you do get Redis for Windows (super easy with Chocolatey), you can use the redis-cli.exe at the command line to talk to the Azure Redis Cache as well (of course!).
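
For example, here's a sketch of connecting with redis-cli; the hostname and key are placeholders, and since redis-cli doesn't speak SSL you'd connect over the cache's non-SSL port:

redis-cli -h contoso5.redis.cache.windows.net -p 6379 -a yourAccessKey
redis> SET key1 "value"
OK
redis> GET key1
"value"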

It's easy to run a local Redis server with redis-server.exe, test it out in development, then change your app's Redis connection string when you deploy to Azure.


Sponsor: Many thanks to our friends at Octopus Deploy for sponsoring the feed this week. Did you know that NuGet.org deploys with Octopus? Using NuGet and powerful conventions, Octopus Deploy makes it easy to automate releases of ASP.NET applications and Windows Services. Say goodbye to remote desktop and start automating today!


Cloud Power: How to scale Azure Websites globally with Traffic Manager

May 5, '14 | Posted in Azure

The "cloud" is one of those things that I totally get and totally intellectualize, but it still consistently blows me away. And I work on a cloud, too, so it's a little ironic that I'm still impressed.

I guess part of it is historical context. Today's engineers get mad if a deployment takes 10 minutes or if a scale-out operation has them waiting five. I used to have multi-hour builds, and a scale-out operation involved a drive over to PC Micro Center. Worse yet, it might mean having a Cisco engineer fly in to configure a load balancer. Certainly engineers in the generation before mine could lose hours with a single punch card mistake.

It's the power that impresses me.

And I don't mean CPU power, I mean the power to build, to create, to achieve, in minutes, globally. My that's a lot of comma faults.

Someone told me once that the average middle class person is more powerful than a 15th century king. You eat on a regular basis, can fly across the country in a few hours, you have antibiotics and probably won't die from a scratch.

Cloud power is that. Here's what I did last weekend that blew me away, and how I did it.

Scaling an Azure Website globally in minutes, plus adding SSL

I'm working on a little startup with my friend Greg, and I recently deployed our backend service to a small Azure website in "North Central US." I bought a domain name for $8 and set up a CNAME to point to this new Azure website. Setting up custom DNS takes just minutes, of course.

[Image: CNAME Hub DNS]

Adding SSL to Azure Websites

I want to run my service traffic over SSL, so I headed over to DNSimple where I host my DNS and bought a wildcard SSL for *.mydomain.com for only $100!

[Image: Active SSL certs]

Adding the SSL certificate to Azure is easy: you upload it from the Configure tab of Azure Websites, then bind it to your site.

[Image: SSL bindings]

Most SSL certificates are issued as a *.crt file, but Azure and IIS prefer *.pfx. I just downloaded OpenSSL for Windows and ran:

openssl pkcs12 -export -out mysslcert.pfx -inkey myprivate.key -in myoriginalcert.crt

Then I uploaded mysslcert.pfx to Azure. If you have intermediate certificates, you might need to include those as well.

This gets me a secure connection to my single webserver, but I need multiple ones as my beta testers in Asia and Europe have complained that my service is slow for them.

Adding multiple global Azure Website locations

It's easy to add more websites, so I made two more, spreading them out a bit.

[Image: Multiple locations]

I use Git deployment for my websites, so I added two extra named remotes in Git. That way I can deploy like this:

>git push azure-NorthCentral master
>git push azure-SoutheastAsia master
>git push azure-WestEurope master

At this point, I've got three web sites in three locations but they aren't associated together in any way.

I also added a "Location" configuration name/value pair for each website, so I could display the location at the bottom of the site and confirm that global load balancing is working, just by pulling it out like this:

location = ConfigurationManager.AppSettings["Location"];

I could also potentially glean my location by exploring the Environment variables like WEBSITE_SITE_NAME for my application name, which I made match my site's location.
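
Here's a sketch of that fallback, using the "Location" app setting from above plus the environment variable:

// Prefer the explicit "Location" app setting; fall back to the
// WEBSITE_SITE_NAME environment variable that Azure sets for each site.
string location = ConfigurationManager.AppSettings["Location"]
    ?? Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME");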

Now I bring these all together by setting up a Traffic Manager in Azure.

[Image: Traffic Manager]

I change my DNS CNAME to point to the Traffic Manager, NOT the original website. Then I make sure the traffic manager knows about each of the Azure Website endpoints.

Then I make sure that my main CNAME is set up in my Azure Website, along with the Traffic Manager domain. Here's my DNSimple record:

[Image: my DNSimple DNS records]
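
In zone-file terms, that record amounts to something like this (a sketch using the hostnames from the nslookup output below):

; hub points at Traffic Manager, which resolves to the closest site
hub.mystartup.com.    3600    IN    CNAME    mystartup.trafficmanager.net.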

And here's my Azure website configuration:

[Image: Azure Website configuration]

Important Note: You may be thinking, hang on, I thought there was already load balancing built into Azure Websites? It's important to remember that there's the load balancing that selects which datacenter, and there's the load balancing that selects an actual web server within a datacenter.

Also, you can choose between straight Round Robin, Failover (ordered sites between datacenters), or Performance, for when you have sites in different geographic locations and you want the "closest" one to the user. That's what I chose. It's all automatic, which is nice.

[Image: Azure Traffic Manager]

Since the Traffic Manager is just going to resolve to a specific endpoint and all my endpoints already have a wildcard SSL, it all literally just works.

When I run nslookup against my hub, I get something like this:

>nslookup hub.mystartup.com
Server: ROUTER
Address: 10.71.1.1

Non-authoritative answer:
Name: ssl.mystartup-northcentralus.azurewebsites.net
Address: 23.96.211.345
Aliases: hub.mystartup.com
mystartup.trafficmanager.net
mystartup-northcentralus.azurewebsites.net

As I'm in Oregon, I get the closest data center. I asked friends via Skype in Australia, Germany, and Ireland to test and they each got one of the other data centers.

I can test for myself by using https://www.whatsmydns.net and seeing the different IPs from different locations.

[Image: Global DNS]

This whole operation took about 45 minutes, and about 15 minutes of that was waiting for DNS to propagate.

In less than an hour I went from a small prototype in a data center in Chicago to a service scaled out to datacenters globally, with SSL.

Magical power.



Sponsor: Big thanks to Aspose for sponsoring the blog feed this week. Aspose.Total for .NET has all the APIs you need to create, manipulate and convert Microsoft Office documents and a host of other file formats in your applications. Curious? Start a free trial today.


Building Modern Web Apps with ASP.NET - A new day of free ASP.NET Training for 2014

February 5, '14 | Posted in ASP.NET | Azure
[Image: Scott Hunter and Scott Hanselman talking about What's New in VS2013]

Last year, about this time, a bunch of us sat down in a studio to give a full day of tutorials and discussion on "Building Web Apps with ASP.NET." All those videos are online and are full of good content.

We headed over to the Microsoft Virtual Academy Studios again just this last week for another full day of discussion and training, as well as a glimpse into the possible future of .NET. Between these two days of videos you'll get a real sense of what's possible and real advice on how to build your next web application.

Today we've got 7 all-new segments for you, each recorded live at the MS Studios.

These videos feature folks like Scott Hunter, Levi Broderick, Rowan Miller, Pranav Rastogi, Mads Kristensen, and Louis DeJardin. No marketing folks, just actual developers who work on ASP.NET every day.

1: What's New in Visual Studio 2013 for Web Developers - Learn about the latest features in Visual Studio 2013, including dozens of tips and tricks.

2: Upgrading Applications - Get a deep dive on how to upgrade your older applications to ASP.NET 4.5 and later.

3: ASP.NET Identity - Explore the new ASP.NET Identity system. Learn how to migrate your existing membership data to the new Identity system and how to integrate with other membership systems.

4: Web Essentials and the Client Side - Discover how to build modern client-side applications, more simply and quickly, with a host of new features, tips, and tricks in Web Essentials for Visual Studio.

5: Entity Framework - Have you been using Entity Framework for data access in your web app? In this advanced demo-heavy session, learn the latest features of Entity Framework 6 and get sneak previews of what's coming in version 6.1.

6: The "Katana" Project - Hear the latest on "Project Katana," the Microsoft implementation of the Open Web Interface for .NET (OWIN). It's a glimpse of the future for cloud-optimizing your ASP.NET applications.

7: ASP.NET "Project Helios" - Discover "Project Helios," a prototype representing a re-thinking of the core of ASP.NET. Take a look at the future of web development, with a modular, lightweight OWIN host that runs on Internet Information Services (IIS).

Also be sure to explore the new series "Get Started with Windows Azure today" featuring content from ScottGu himself for a full 90 minutes!


I hope you have as much fun watching them as we did filming them.


Sponsor: Big Thanks to Aspose for sponsoring the blog this week! Aspose.Total for .NET has all the APIs you need to create, manipulate and convert Microsoft Office documents and a host of other file formats in your applications. Curious? Start a free trial today.


Deploying TWO websites to Windows Azure from one Git Repository

February 4, '14 | Posted in Azure

Deploying to Windows Azure is very easy from Git. I use the Azure Cross-Platform Command Line (open source on GitHub, written in Node) that I get from npm via "npm install azure-cli -g" to make the sites.

When you make a new site with the Azure command line, you'll usually do this:

azure site create --location "West US" MyFirstSite --git

And the tool will not only make the site, but also add a git remote for you, something like https://username@MyFirstSite.scm.azurewebsites.net:443/MyFirstSite.git. When you push to that git remote, Azure deploys the site. You can git push sites with PHP, node, ASP.NET, and Python.

[Image: Two deployments in Azure using Git]

You may have multiple remotes with your git repository, of course:

C:\MyCode>git remote show
azure
origin

When you push a folder with code to Azure via Git, unless you're pushing binaries, Azure is going to compile the whole thing for you. It will restore npm modules or NuGet packages, and then build and deploy your app.

If your repository has a lot of .NET projects, you usually only want one project to be the actual deployed website, so you can add a .deployment file to specify which project contains the website you're git deploying:

[config]
project = WebProject/MyFirstSiteWebProject.csproj

However, in lieu of a .deployment file, you can also set an application configuration setting with the Azure Portal to do the same thing.

[Image: Setting Kudu Projects with config options]

Or, of course, set the configuration values for each site using the Azure Command Line:

c:\MyCode>azure site config add Project=WebProject/MyFirstSiteWebProject.csproj [sitename]

What's nice about setting the "Project" setting via site configuration rather than via a .deployment file is that you can now push the same git repository containing two different web sites to two remote Azure web sites. Each Azure website should have a different project setting and will end up deploying the two different sites.
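
For example, assuming the two sites used below (website1 and website2) and two hypothetical project paths, that's one config command per site:

c:\MyCode>azure site config add Project=WebProject/Website1.csproj website1
c:\MyCode>azure site config add Project=WebProject/Website2.csproj website2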

[Image: Git deploying from one repo to two separate Azure Web Sites]

I do this by setting the git remotes manually like this, using the correct git remote URLs I get from the Azure Portal:

C:\MyCode> git remote add azureweb1 https://scott@website1.scm.azurewebsites.net:443/website1.git
C:\MyCode> git remote add azureweb2 https://scott@website2.scm.azurewebsites.net:443/website2.git
C:\MyCode> git remote show
azureweb1
azureweb2
origin
C:\MyCode> git push azureweb1 master

I have a number of solutions with two or more web sites, or in one case, a web site I want separate from my web api, and I deploy them just like this.

Hope this helps!


Sponsor: Big Thanks to Aspose for sponsoring the blog this week! Aspose.Total for .NET has all the APIs you need to create, manipulate and convert Microsoft Office documents and a host of other file formats in your applications. Curious? Start a free trial today.


Introducing Windows Azure WebJobs

January 23, '14 | Posted in Azure

I'm currently running 16 web sites on Windows Azure. I have a few Virtual Machines, but I prefer to run things using "platform as a service" where I don't have to sweat the underlying Virtual Machine. That means, while I know I can run a Virtual Machine and put "cron" jobs on it, I'm less likely to because I don't want to mess with VMs or Worker Roles.

There are a few ways to run stuff on Azure. First, there's IaaS (Infrastructure as a Service), which is VMs. Then there are Cloud Applications (Cloud Services), where you can run anything in an Azure-managed VM. It's still a VM, but you have a lot of choice and can run Worker Roles and background stuff. However, there's a lot of ceremony if you just want to run your small "job" either on a regular basis or via a trigger.


Looking at this differently, platform as a service is like having your hotel room fixed up daily, while running VMs is more like managing a house yourself.

[Image: Azure explained in one image]

As someone who likes to torch a hotel room as much as the next person, this is why I like Azure Web Sites (PAAS). You just deploy, and it's done. The VM is invisible and the site is always up.

However, there hasn't yet been a good solution under web sites for doing regular jobs and batch work in the background. Now Azure Web Sites support a thing called "Azure WebJobs" to solve this problem simply.

Scaling a Command Line application with Azure WebJobs

When I want to do something simple - like resize some images - I'll either write a script or a small .NET application. Things do get complex though when you want to take something simple and do it n times. Scaling a command line app to the cloud often involves a lot of yak shaving.

Let's say I want to take this function that works fine at the command line and run it in the cloud at scale.

public static void SquishNewlyUploadedPNGs(Stream input, Stream output)
{
    var quantizer = new WuQuantizer();
    using (var bitmap = new Bitmap(input))
    {
        using (var quantized = quantizer.QuantizeImage(bitmap))
        {
            quantized.Save(output, ImageFormat.Png);
        }
    }
}

WebJobs aims to make developing, running, and scaling this easier. They are built into Azure Websites and run in the same VM as your Web Sites.

Here are some typical scenarios that would be great for the Windows Azure WebJobs SDK:

  • Image processing or other CPU-intensive work.
  • Queue processing.
  • RSS aggregation.
  • File maintenance, such as aggregating or cleaning up log files. 
  • Other long-running tasks that you want to run in a background thread, such as sending emails.

WebJobs are invoked in two different ways: either they are triggered or they are continuously running. Triggered jobs happen on a schedule or when some event happens, while continuous jobs basically run a while loop.

WebJobs are deployed by copying them to the right place in the file system (or using a designated API that does the same thing); there's a sketch of the layout after this list. The following file types are accepted as runnable scripts that can be used as a job:

  • .exe - .NET assemblies compiled with the WebJobs SDK
  • .cmd, .bat, .exe (using windows cmd)
  • .sh (using bash)
  • .php (using php)
  • .py (using python)
  • .js (using node)
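
Here's a sketch of that file-system convention, assuming the App_Data layout Kudu uses for WebJobs (the job names are hypothetical):

site\wwwroot\App_Data\jobs\triggered\squishpngs\run.cmd
site\wwwroot\App_Data\jobs\continuous\queuewatcher\run.exe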

After you deploy your WebJobs from the portal, you can start and stop jobs, delete them, upload jobs as ZIP files, etc. You've got full control.

A good thing to point out, though, is that Azure WebJobs are more than just scheduled scripts; you can also create WebJobs as .NET projects written in C# or whatever.

Making a WebJob out of a command line app with the Windows Azure WebJobs SDK

WebJobs can effectively take a command-line C# application with a function and turn it into a scalable WebJob. I spoke about this over the last few years in presentations when it was codenamed "SimpleBatch." This lets you write a simple console app to, say, resize an image, then move it up to the cloud and resize millions. Jobs can be triggered by the appearance of new items on an Azure Queue, or by new binary Blobs showing up in Azure Storage.
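
Here's a sketch of what a queue-triggered function might look like, assuming the prerelease SDK's [QueueInput] attribute (naming conventions varied across the preview builds; in the early bits the queue name comes from the parameter name):

public static void ProcessOrders([QueueInput] string orders)
{
    // Called when a new message lands on the "orders" queue.
    Console.WriteLine("Processing: " + orders);
}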

NOTE: You don't have to use the WebJobs SDK with the WebJobs feature of Windows Azure Web Sites. As noted earlier, the WebJobs feature enables you to upload and run any executable or script, whether or not it uses the WebJobs SDK framework.

I wanted to make a Web Job that would losslessly squish PNGs as I upload them to Azure storage. When new PNGs show up, the job should automatically run on these new PNGs. This is easy as a Command Line app using the nQuant open source library as in the code above.

Now I'll add the WebJobs SDK NuGet package (it's prerelease) and the Microsoft.WindowsAzure.Jobs namespace, then add [BlobInput] and [BlobOutput] attributes, then start the JobHost() from Main. That's it.

using Microsoft.WindowsAzure.Jobs;
using nQuant;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            JobHost host = new JobHost();
            host.RunAndBlock();
        }

        public static void SquishNewlyUploadedPNGs(
            [BlobInput("input/{name}")] Stream input,
            [BlobOutput("output/{name}")] Stream output)
        {
            var quantizer = new WuQuantizer();
            using (var bitmap = new Bitmap(input))
            {
                using (var quantized = quantizer.QuantizeImage(bitmap))
                {
                    quantized.Save(output, ImageFormat.Png);
                }
            }
        }
    }
}

CONTEXT: Let's just step back and process this for a second. All I had to do was spin up the JobHost and set up a few attributes. Minimal ceremony for maximum results. My console app is now processing information from Azure blob storage without ever referencing the Azure Blob Storage API!

The function is automatically called when a new blob (in my case, a new PNG) shows up in the input container in storage, and the Stream parameters are automatically "bound" (like Model Binding) for me by the WebJobs SDK.

To deploy, I zip up my app and upload it from the WebJobs section of my existing Azure Website in the Portal.



I'm setting mine to continuous, but it can also run on a detailed schedule.

I need to tell my WebJob which Azure Storage account it's going to use, so from my Azure Web Site, under the Connection Strings section, I set up two strings: one for AzureJobsRuntime (for logging) and one for AzureJobsData (for what I'm accessing).


For what I'm doing they are the same. The connection strings look like this:

DefaultEndpointsProtocol=https;AccountName=hanselstorage;AccountKey=3exLzmagickey

The key here came from Manage Access Keys in my storage account.

In my "Hanselstorage" storage account I made two containers, input and output. You can name yours whatever. You can also process from Queues, etc.


Now, going back to the code, look at the parameters to the Attributes I'm using:

public static void SquishNewlyUploadedPNGs(
    [BlobInput("input/{name}")] Stream input,
    [BlobOutput("output/{name}")] Stream output)

There's the strings "input" and "output" pointing to specific containers in my Storage account. Again, the actual storage account (Hanselstorage) is part of the connection string. That lets you reuse WebJobs in multiple sites, just by changing the connection strings.

There is a link to get to the Azure Web Jobs Dashboard to the right of your job, but the format for the URL to access is this: https://YOURSITE.scm.azurewebsites.net/azurejobs. You'll need to enter the same credentials you've used for Azure deployment.

Once you've uploaded your job, you'll see the registered function(s) in the dashboard.

I've installed the Azure SDK and can access my storage live within Visual Studio. You can also try 3rd party apps like Cloudberry Explorer. Here I've uploaded a file called scottha.png into the input container.


After a few minutes the SDK will process the new blob (Queues are faster, but blobs are checked every 10 minutes), the job will run and either succeed or fail. If your app throws an exception you'll actually see it in the Invocation Details part of the site.


Here's a successful invocation; you can see it worked (it squished) because the number of output bytes is smaller than the number of input bytes.

You can see the full output of what happens in a WebJob within this Dashboard, or check the log files directly via FTP. For me, I can explore my output container in Azure Storage and download or use the now-squished images. Again, this can be used for any large job whether it be processing images, OCR, log file analysis, SQL server cleanup, whatever you can think of.

Azure WebJobs is in preview, so there will be bugs, changing documentation, and updates to the SDK, but the general idea is there and it's solid. I think you'll dig it.



Sponsor: Big thanks to combit for sponsoring the blog feed this week! Enjoy feature-rich report designing: Discover the reporting tool of choice for thousands of developers. List & Label is an award-winning component with a royalty-free report designer. Free trial!

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.