Scott Hanselman

CDNs fail, but your scripts don't have to - fallback from CDN to local jQuery

April 30, 2013 | Posted in ASP.NET | Javascript

CDN issues in the Northeast

There's a great website called http://whoownsmyavailability.com that serves as a reminder to me (and all of us) that external dependencies are, in fact, external. As such, they are calculated risks with tradeoffs. CDNs are great, but for the few minutes or hours a year that they go down, they can be super annoying.

I saw a tweet today declaring that the ASP.NET Content Delivery Network was down. I don't work for the CDN team, but I care about this stuff (too much, according to my last performance review), so I turned to Twitter to figure this out and help diagnose it. The CDN didn't look down from my vantage point.

I searched for things like "ajax cdn," "microsoft cdn," and "asp.net cdn down" and looked at the locations reported by the Twitter users in their profiles. They had locations like CT, VT, DE, NY, ME. These are all abbreviations for states in the northeast of the US. There were also a few tweets from Toronto and Montreal. Then, there was one random tweet from a guy in Los Angeles on the other side of the country. LA doesn't match the pattern that was developing.

I tweeted at the LA guy and asked him if he was really in LA or actually on the east coast.

Bingo. He was VPN'ed into Massachusetts (MA). I had a few folks send me tracerts and sent them off to the CDN team, who fixed the issue in a few minutes. There was apparently a bad machine in the Boston/NYC area with a configuration change specific to a certain Ajax path that had gone undetected by their dashboard (this has been fixed and only affected the Ajax part of the CDN in this local area).

More importantly, how can we as application developers fall back gracefully when an external dependency like a CDN goes down? Just last week I moved all of my Hanselminutes Podcast images over to a CDN. If there were a major issue I could fall back to local images with a code change. However, if this were a mission-critical site, I should not only have a simple configuration switch to fall back to local resources, but I should also test and simulate a CDN going down so I'm prepared when it inevitably happens.

With JavaScript we can detect when our CDN-hosted JavaScript resources like jQuery or jQuery UI aren't loaded successfully and try again to load them from local locations.

Falling back from CDN to local copies of jQuery and JavaScript

The basic idea for CDN fallback is to check for a type or variable that should be present after a script load, and if it's not there, try getting that script locally. Note the important escape characters within the document.write. Here's jQuery:

<script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js"></script>
<script>
if (typeof jQuery == 'undefined') {
document.write(unescape("%3Cscript src='/js/jquery-2.0.0.min.js' type='text/javascript'%3E%3C/script%3E"));
}
</script>

Or, slightly differently. This example uses protocol-relative URLs, checks in a different way, and escapes the document.write differently.

<script src="//ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js"></script>
<script>window.jQuery || document.write('<script src="js/jquery-2.0.0.min.js">\x3C/script>')</script>

If you are loading other plugins you'll want to check for other things, like the presence of specific functions added by your 3rd party library, as in "if (typeof $.foo)" for jQuery plugins.
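For example, here's a minimal sketch of that kind of plugin check (the "foo" plugin name and local path are hypothetical, and document.write is passed in as a function so the logic is easy to test):

```javascript
// Hypothetical fallback for a jQuery plugin: if jQuery or the plugin's
// function isn't present after the CDN <script> tag ran, write out a
// <script> tag pointing at a local copy instead.
// writeFn stands in for document.write so the check itself is testable.
function pluginFallback(globalObj, writeFn) {
  if (typeof globalObj.jQuery === 'undefined' ||
      typeof globalObj.jQuery.fn.foo === 'undefined') {
    writeFn('<script src="/js/jquery.foo.min.js"><\/script>');
  }
}
```

In a real page this would run inline right after the CDN script tag, as `pluginFallback(window, document.write.bind(document))`, before anything that uses the plugin.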

Some folks use a JavaScript loader like yepnope. In this example you check for jQuery as the complete (loading) event fires:

yepnope([{
load: 'http://ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js',
complete: function () {
if (!window.jQuery) {
yepnope('js/jquery-2.0.0.min.js');
}
}
}]);

Even better, RequireJS has a really cool shorthand for fallback URLs which makes me smile:

requirejs.config({
enforceDefine: true,
paths: {
jquery: [
'//ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min',
//If the CDN location fails, load from this location
'js/jquery-2.0.0.min'
]
}
});

//Later
require(['jquery'], function ($) {
});

With RequireJS you can then setup dependencies between modules as well and it will take care of the details. Also check out this video on Using Require.JS in an ASP.NET MVC application with Jonathan Creamer.

Updated ASP.NET Web Forms 4.5 falls back from CDN automatically

For ASP.NET Web Forms developers, I'll bet you didn't know this little gem. Here's another good reason to move your ASP.NET sites to ASP.NET 4.5 - using a CDN and falling back to local files is built into the framework.

(We've got this for ASP.NET MVC also, keep reading!)

Fire up Visual Studio 2012 and make a new ASP.NET 4.5 Web Forms application.

When using a ScriptManager control in Web Forms, you can set EnableCdn="true" and ASP.NET will automatically change the <script> tags from using local scripts to using CDN-served scripts with local fallback checks included. Therefore, this ASP.NET WebForms ScriptManager:

<asp:ScriptManager runat="server" EnableCdn="true">
<Scripts>
<asp:ScriptReference Name="jquery" />
<asp:ScriptReference Name="jquery.ui.combined" />
</Scripts>
</asp:ScriptManager>

...will output script tags that automatically use the CDN and automatically include a local fallback.

<script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.8.2.js" type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
(window.jQuery)||document.write('<script type="text/javascript" src="Scripts/jquery-1.8.2.js"><\/script>');//]]>
</script>

<script src="http://ajax.aspnetcdn.com/ajax/jquery.ui/1.8.24/jquery-ui.js" type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
(!!window.jQuery.ui && !!window.jQuery.ui.version)||document.write('<script type="text/javascript" src="Scripts/jquery-ui-1.8.24.js"><\/script>');//]]>
</script>

What? You want to use your own CDN? Or Google's? Sure, just make a ScriptResourceMapping and put in whatever you want. You can make new ones, replace old ones, put in your success expression (what you check to make sure it worked), as well as your debug path and minified path.

var mapping = ScriptManager.ScriptResourceMapping;
// Map jquery definition to the Google CDN
mapping.AddDefinition("jquery", new ScriptResourceDefinition
{
Path = "~/Scripts/jquery-2.0.0.min.js",
DebugPath = "~/Scripts/jquery-2.0.0.js",
CdnPath = "http://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js",
CdnDebugPath = "https://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.js",
CdnSupportsSecureConnection = true,
LoadSuccessExpression = "window.jQuery"
});

// Map jquery ui definition to the Google CDN
mapping.AddDefinition("jquery.ui.combined", new ScriptResourceDefinition
{
Path = "~/Scripts/jquery-ui-1.10.2.min.js",
DebugPath = "~/Scripts/jquery-ui-1.10.2.js",
CdnPath = "http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.2/jquery-ui.min.js",
CdnDebugPath = "http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.2/jquery-ui.js",
CdnSupportsSecureConnection = true,
LoadSuccessExpression = "window.jQuery && window.jQuery.ui && window.jQuery.ui.version === '1.10.2'"
});

I just do this mapping once, and now any ScriptManager control application-wide gets the update and outputs the correct fallback.

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.js" type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
(window.jQuery)||document.write('<script type="text/javascript" src="Scripts/jquery-2.0.0.js"><\/script>');//]]>
</script>

<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.2/jquery-ui.js" type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
(window.jQuery && window.jQuery.ui && window.jQuery.ui.version === '1.10.2')||document.write('<script type="text/javascript" src="Scripts/jquery-ui-1.10.2.js"><\/script>');//]]>
</script>

If you want to use jQuery 2.0.0 or a newer version than what came with ASP.NET 4.5, you'll want to update your NuGet packages for ScriptManager. These include the config info about the CDN locations. To update (or check your current version against the latest) within Visual Studio, go to Tools | Library Package Manager | Manage NuGet Packages for Solution, and click on Updates on the left.


Updated ASP.NET Web Optimization Framework includes CDN Fallback

If you're using ASP.NET MVC, you can update the included Microsoft.AspNet.Web.Optimization package to the prerelease version (as of this writing) to get CDN fallback as well.

Get Optimization Updates by "including PreRelease"

Note that I'm on the Updates tab within the Manage NuGet Packages dialog and have selected "Include Prerelease."

Now in my BundleConfig I can setup my bundles to include not only the CdnPath but also a CdnFallbackExpression:

public static void RegisterBundles(BundleCollection bundles)
{
bundles.UseCdn = true;
BundleTable.EnableOptimizations = true; //force optimization while debugging

var jquery = new ScriptBundle("~/bundles/jquery", "//ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js").Include(
"~/Scripts/jquery-{version}.js");
jquery.CdnFallbackExpression = "window.jQuery";
bundles.Add(jquery);
//...
}

Regardless of how you do it, remember when you setup Pingdom or other availability alerts that you should be testing your Content Delivery Network as well, from multiple locations. In this case, the CDN failure was extremely localized and relatively short but it could have been worse. A fallback technique like this would have allowed sites (like mine) to easily weather the storm.
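The decision logic for such a check can be sketched in a few lines (the URLs, the 2-second threshold, and the probe interface are all illustrative here; a real monitor like Pingdom does the actual probing for you):

```javascript
// Flag CDN endpoints that look unhealthy from this location.
// probe(url) is expected to return { status, ms } for one HTTP request;
// it's injected so the alerting logic stays independent of any HTTP library.
function checkCdn(urls, probe) {
  return urls.filter(function (url) {
    var result = probe(url);
    // Alert on anything that isn't a fast HTTP 200.
    return result.status !== 200 || result.ms > 2000;
  });
}
```

Run the same check from several regions (or VPN endpoints, as in the story above) and a localized failure like this one shows up quickly.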

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

facebook bluesky subscribe
About   Newsletter
Hosting By
Hosted on Linux using .NET in an Azure App Service

How to enable Google Now for iOS devices (iPhone, iPad) with Google Apps for Business Accounts

April 30, 2013 | Posted in Musings

Google Now for iOS

I've got GAFYD (Google Apps for your Domain) running mail for hanselman.com and managing Google logins for the family. (There's 14 of us.)

Google Now (that's the fancy cards and predictive assistant) for iOS was just released (you can download Google Now here) and integrated into the Google Search app for iPhone and iPad.

If you install it and log in with your Google Apps account, you'll get an error that "your administrator hasn't enabled Google Now for your domain."

You'll need to (or your admin will need to) turn it on for your Google Apps Domain. It just takes a moment.

Note that you're changing this setting under Android but it affects iOS as well, which is why it's so unintuitive.

Google Apps for Business, Education, and Government: Google Now needs to be turned on by an administrator before it can be used.

If you are an administrator, you can enable Google Now for users in your organization by following these steps:

  1. Sign in to your Google Apps control panel.
  2. Go to Settings > Mobile > Org Settings > Android settings.
  3. Click the checkbox next to Enable Google Now to turn on Google Now.
  4. Click Save.

Here's a screenshot describing the flow, as it's not obvious.

Click Settings, Mobile, Enable Google Now

It took about 10 minutes for the setting to propagate. You may also need to force-quit the Google app for it to pick up the new setting. Your mileage may vary, but this IS how you enable Google Now, regardless of device.

I hope this saves you time and frustration. Pass it on.

UPDATE: If you've updated to the new admin console, the setting is at: https://admin.google.com/AdminHome#ServiceSettings/notab=1&service=mobile&subtab=org


Review: The Lenovo X1 Carbon Touch is my new laptop

April 27, 2013 | Posted in Hardware | Reviews

 

I have a new primary laptop and it's the Lenovo X1 Carbon Touch. It's an Intel Core i7 3667U running at 2 GHz. I got 8 GB of RAM and a 240 GB SSD. The integrated graphics are Intel HD Graphics 4000 driving a 14-inch screen. It also has Bluetooth 4.0 (nice!) as well as Intel a/b/g/n WiFi.

The X1 Carbon Touch is super thin

Feel

First, it feels pro. It feels like a Lenovo, and I've always been a fan. You either love them or not. I do. Since my first T60p they've never done me wrong, and this one is no different. If you like Lenovo, you'll like this machine. If you're a discriminating business user who wants power and portability, you'll appreciate this Ultrabook.

The Lenovo X1 Carbon Touch Keyboard

The keyboard initially looks weird and a little "chiclet-y" and I assumed it would be uncomfortable to use and very much unlike the Lenovo keyboards of legend. You're likely familiar with the classic look and feel of ThinkPad keyboards. Once you're competent on a ThinkPad keyboard you expect to be good on any of them.

While it's different, with its ever-so-slightly concave "smile" keys, they have the same travel and quality feel of any Lenovo. I have had no trouble getting used to the keyboard. I'd say now after some weeks I prefer this keyboard to the previous version.

Ultrabook Size and Weight

Stacked from thin to not: Lenovo X1 Carbon Touch, MacBook Pro, Lenovo W520

That's the X1 Carbon Touch on the top there, then a 2011 MacBook Pro, and finally a Lenovo W520 on the bottom.

The W520 is 1.5" thick and weighs 5.75 lbs with the 9 cell battery. While the 1080p screen was nice, carrying this beast all over the world DID tire me out. Add a few more pounds for an AC adapter that weighs a pound itself, plus your phone and accessories, and you had a 10 lb backpack pretty quickly.

The MacBook Pro weighs 5.6 lbs with a native 1440x900 screen. I tried using this as my primary for a few months and while the hardware build quality is top-notch, I found myself pawing at the screen unconsciously. More on this later, but once you really add touch as a complementary input option, you'd be surprised how often your brain assumes every machine has touch.

The X1 Carbon is super thin (slightly less than 3/4 of an inch), light enough (just 3.4 lbs) to hold comfortably with one hand, and faster than the W520. Sold. The major trade-off was 1600x900 resolution (rather than a full 1080p) and the lack of a third USB port, but its light weight is a daily joy. It's not quite half the weight, but it FEELS half as light as the W520 and MacBook Pro. It's only a 14" screen, but I have quickly adapted to it. Plus, I can run a large monitor (or, they say, three with the USB dock when it shows up) without trouble.

Out and About

Seriously, all laptops should be this thin and light. There's just no reason anymore for a 6 to 10 lb laptop and I said as much in my post "My next PC will be an Ultrabook."

I can truly see why MacBook Air folks are so enthusiastic. All Ultrabooks have an "Air" about them. When you can throw your 3lb Ultrabook in a Messenger Bag and it's no heavier than a few magazines, you're much more likely to carry it around. Add in 6(ish) hours of battery and you can comfortably move around before you have to plug in. Even better, somehow this thing charges FAST. Just 30 minutes of charging has topped me up 50-70%. I had a 20% low battery after a flight, plugged in while eating at the airport for a half hour, then ran to the next flight and I was more than 70% and able to work the next flight too. I'm getting >4 hours working hard, and have gotten as much as 6 with low brightness and just browsing or watching movies.

One of the USB ports will provide power to one device so you can charge your phone while the laptop is off. I love laptops with this feature. It saved me just last week while travelling. You can also charge the laptop with a phone connected so everyone gets charged.

The X1 also has a SIM slot for a 3G connection, although I've never met someone who used this. It worked fine with my AT&T 3G SIM but considering that I can tether from almost any device including my phone, plus the wide availability of sharing devices using 4G or LTE, this is a slot on this laptop you'll never fill.

Touch

Let's get real about touch a minute. Here's what I said before:

Don't knock a touchscreen until you've used one. Every laptop should (and will) have a touch screen in a year. Mark my words. This nonsense about how your arm will hurt assumes that you're only using it. A touchscreen is complementary not primary. I use it for pinching, for scrolling web pages, and for launching apps. It's much faster to just touch the icon than to mouse over to click one.

This X1 Carbon isn't a tablet, nor is it trying to be a tablet. It's a fantastic, fast and light Ultrabook with a touch screen. Say what you will about Windows 8 and its fullscreen interface, but I maintain that the addition of a touchscreen is as significant as the addition of a mouse. Similarly, when voice input is 100% reliable, adding voice will be equally significant.

Three great input methods are better than two. I move from keyboard to mouse to touch smoothly.

Type type, mouse, swipe, type type, touch, click.

Sorting slides, moving files, swiping to the previous app, but most of all, scrolling around. Sometimes I use the two-finger scroll down gesture via the touchpad to scroll but often I hold my right hand around the screen and scroll down with my thumb. Often I'll pinch to zoom. It's extremely comfortable.

Reviewers and journalists need to understand that these computers aren't made for them. They are made for my kids and the touch generation. Touch screen MacBooks are inevitable. It will happen. Touchless is next after that.

If you do mobile device development, running these emulators with a touch laptop is a joy. Let me rephrase. Get a freaking touch screen, mobile developers. Touch on your laptop will make you happy every single day.

Learn to integrate touch into your existing keyboard and mousing style and you'll be faster and more effective than ever. If you use just one input method, you are missing out.

Dongles Galore

I also bought the requisite dongles, including a Mini DisplayPort to VGA adapter and a Mini DisplayPort to DVI adapter. If you like wired network access, you'll also need the USB 2.0 Ethernet Adapter. Other than having to carry them around in my bag, dongle life is what it is. I'd rather have a slim laptop and a few adapters than continue to carry the Lenovo W520 I've been carrying.

I have ordered the Lenovo Think Pad USB 3.0 Docking Station but it hasn't arrived yet. I will update this review once it arrives. The docking station adds 5 USB 2.0 ports and an additional USB 3.0 port. It also includes Gigabit Ethernet.

This Docking Station also includes two (2) DVI ports which brings the number of monitors this laptop can run up to four. Well, three external (two DVI, one DisplayPort) and the built in LCD. It runs my 24" LCD over DVI today famously and without any trouble at all. I've also presented with this laptop using the VGA adapter and had exactly ZERO problems. The Display Drivers and adapters are rock solid.

Screen

There's been a lot of discussion about the screen on the X1 Carbon Touch. There's a protective film layer over the screen and it really bothers some people. Some folks have successfully pried it off with some patience. Honestly, I noticed it for a day and then I stopped caring. I've spoken to folks who have said it was irritating enough that they sent the laptop back. Others just don't care. It's a clear, clean, bright screen and I'm happy with it.

X1 Carbon Touch Screen

X1 Carbon Touch Screen

It's not retina, but it's a great clear screen with great brightness and excellent horizontal viewing angles. It's a solid 14". I am surprised at the size of the W520 now that I've adapted to the X1.

Phrasing it differently, the X1 is a great mobile workstation. The W520 is a great workstation that can be moved occasionally.

The Good

It's really fast. I got the i7 processor version and it's fast. The 240gig SSD is lovely and devoid of hiccups. Visual Studio starts in 5 seconds cold, and 2 seconds warm. It runs Hyper-V nicely, and I've also run the x86 Android Emulator full speed as well as the Windows Phone emulator.

If you look at the WEI (Windows Experience Index) you'll be disappointed by the 5.5 Desktop Graphics performance, but I'm starting to think that this score should be thrown out. 2D graphics performance, while measurable, just isn't easily noticeable in day-to-day business use. We care about scrolling around in large documents, Excel, big PDFs, long web pages. The Intel integrated video in the X1 Carbon Touch is more than adequate. It's even pretty good in 3D games, handling games like Torchlight II very nicely if you turn antialiasing down just a smidge.

WEI for the Lenovo X1 Carbon Touch - 7.1, 7.4, 5.5, 6.4, 8.1

Tiny Happy Features

There are some other nice features, small but important, that make this a great business machine.

  • A decent 720p HD integrated Webcam. I've used it with Skype and Lync and it works great. I wish it was angled slightly higher but that's a nit.
  • It has not only hardware volume buttons (which we expect) but also a hardware microphone mute button with an LED indicator which is great for long conference calls.
  • It boots up fast and sleeps very reliably. It reboots only when a Windows update requires it. I can close it and put it in my bag without concern.
  • A hardware Airplane Mode switch. This is not just a "turn off devices hard" button but it's integrated with Windows 8 and turns off all radios with a hardware switch. Also nice for saving batteries.
  • Good quality mics. I don't like doing conference calls or video conferencing with just a laptop's microphone but this one is better than usual.
  • Integrated TPM (Trusted Platform Module) so I can BitLocker my C: drive easily, and I have. I also get DirectAccess and a virtual Smart Card so I don't need to use VPN and am always logged into work. Super convenient.
  • Integrated Fingerprint login. I used to use this all the time on my W520 but for some reason I've been using the Virtual Smart Card lately. Still it's a nice login feature and I've had good experiences with it before.
  • One combination headphone/mic plug. Most good laptops have this now. You can use a good pair of headphones (or your iPhone headphones) and get mic and headphones in one. This detection is integrated with the audio system.
  • Integrated SD card slot.
  • It's SO quiet. I sometimes wish it wasn't silent so I could know what it was doing.

And finally, one piece of software that came with it that I thought would suck but didn't - the Dolby Home Theater software. It actually has some nice presets for movies, VOIP, and music that definitely improve the output (or perception of output) of the speakers.

The Bad

The touchpad is the worst part of this device. Initially I hated it. They've removed the small textured touchpad I love from the W520, with its buttons on the bottom, and replaced it with a new glossy glass touchpad. It's the lack of buttons on the bottom that's killing me. I keep bumping the touchpad while I'm using it and the cursor jumps.

It took me a few days to realize why this was happening, then I realized that I historically cursor with my index finger and rest my thumb on the bottom of the touchpad. With other ThinkPads there are buttons at the bottom that my thumb rested on. With the X1, I was resting on the touchpad itself. This just took a week of conscious thought and it's cool now, but be prepared for that "changeover" time as you teach yourself where to place your fingers while mousing. I'm interested in other X1 Carbon owners' thoughts on the touchpad in the comments.

The Carbon Touch has a much larger touchpad than the W520

I had to fiddle with the touch settings a little as well, as I move fast. I recommend power users turn down the duration you need to press and hold in order to activate a Right Click action. I also turn on the "Touch Feedback" so you can actually see the results of your touch. It's meant for presenters, but it's really nice to get the visual feedback that the system has recorded your touch.

Modify the Touch Settings to optimize your X1 Carbon Touch

The X1 Carbon Touch can also get a little hot. You'll only notice this if you are really a LAP-top person (and I'm not), but even now as I write this I'm running two instances of VS, Photoshop, and a virtual machine in Hyper-V doing Windows Update within a Windows 7 VM. It's not going to burn me, but it is definitely hot.

Finally, I did have one day with a really lousy Wi-Fi driver while I was travelling. The MGM Grand Hotel in Las Vegas had a wireless network that this Intel Wi-Fi card just hated. I was getting lockups and it was generally bad. However, I switched to using a 4G hotspot and updated the driver and never saw the issue again. Moral - Make sure you're using tested and reliable "out of the box" drivers. I am sticking with the drivers from Windows Update for important things and Lenovo System Update for non-essential drivers. I'm also finding the SD card (Ricoh) driver to be a little suspicious so I'm keeping it disabled in Device Manager when I'm not using it.

I recommend you uninstall ALL the random software (there's not too much) that Lenovo puts on it, except Lenovo System Update. I use it only for drivers and small utilities that give you things like on-screen caps lock notifiers.

The only other thing I really wish this laptop had is an extra USB port. There's one USB 3.0 and one USB 2.0 port, and I really needed a third USB port recently while presenting. I used the USB to Ethernet adapter along with my USB Arc Touch Mouse and was stuck. I needed a third port for the presenter remote. This is a small irritant, but I noticed it.

Conclusion

This is a very solid touch Ultrabook that I'm currently using as my main machine. The Lenovo X1 Carbon Touch has replaced my Intel Ultrabook which has been passed on to my wife. My Lenovo W520 is currently my emergency backup machine and is weighing down my bookshelf. I'm taking this device everywhere I go and when I'm not at home it's my primary development machine.




Penny Pinching in the Cloud: How to run a two day Virtual Conference for $10

April 26, 2013 | Posted in Azure | Open Source | SignalR
DotNetConf Logo

We've just finished Day One of "DotNetConf," our community-run, free online conference for developers who love the .NET development platform and open source!

UPDATE: All the Videos from both days of DotNetConf are now online and available for on-demand viewing!

The Conference Platform

It seems funny to call the software our conference runs on a "platform" as that sounds very "enterprisey" and official. In the past we've done aspConf and mvcConf with sponsors who helped pay for things. We used Channel 9 and had a studio and streamed either from Seattle or using Live Meeting.

However, this year we wanted to do it on the cheap and make it more distributed. We wanted speakers from ALL over, in all time zones. How cheap? About US$10ish, we figure. I'll get the complete bill later, but we basically wanted to scale up, do the talks, and scale down.

Video Broadcasting and Screen-sharing

  • This year we are using Google Hangouts with their "Hangouts On Air" feature. A "dotnetconf" Google account invites the presenter to a Hangout and checks the "On Air" box before starting the hangout. Then we use the Hangout Toolbox to dynamically add on-screen graphics and speaker labels. Everyone sets their resolution to 1280x768 and the live stream ends up scaled down to 480p.
  • Once you hit "Start Broadcast" you're given a YouTube link to the live stream. When you hit End Broadcast, the resulting video is ready to go on your YouTube page within minutes. The hangout owner (me or Javier) then clicks "Hide in Broadcast" and fades away. You can see I'm faded away in the below screenshot. I'm there, but only if I need to be. When there's only one active presenter the Hangout turns into a full screen affair, which is what we want.
  • Important Note: Rather than an 8 hour Hangout, we started and stopped as each speaker did their talk. This means that our talks are already discrete on the YouTube page. The YouTube videos can have their start and end times trimmed so the start isn't so rough.

Google Hangouts On Air

The Database

Surprise! There is no database. There is no need for one. We're running a two page site using ASP.NET Web Pages written in WebMatrix. It runs in the Azure cloud but since our dataset (speakers, schedule, the video stream location, etc) isn't changing a lot, we put all the data in XML files. It's data, sure, but it's a poor man's database. Why pay for more than we need?

How do we update the "database" during the talks? Get ready to have an opinion. The data is in Dropbox. (Yes, it could have been SkyDrive or another URL, but we used Dropbox.)

Our Web App pulls the data from Dropbox URLs and caches it. Works pretty nice.

<appSettings>
<add key="url.playerUrl" value="https://dl.dropboxusercontent.com/s/fancypantsguid/VideoStreams.xml" />
<add key="url.scheduleUrl" value="https://dl.dropboxusercontent.com/s/fancypantsguid/Schedule.xml" />
<add key="url.speakerUrl" value="https://dl.dropboxusercontent.com/s/fancypantsguid/Speakers.xml" />
<add key="Microsoft.ServiceBus.ConnectionString" value="Endpoint=sb://[your namespace].servicebus.windows.net;SharedSecretIssuer=owner;SharedSecretValue=[your secret]" />
</appSettings>
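The caching half of that is just a small time-based cache wrapped around the fetch. Here's a minimal sketch of the idea in JavaScript (the real site does this with ASP.NET's server-side caching; function names here are illustrative):

```javascript
// Wrap a fetch function so repeated requests for the same URL within
// ttlMs return the cached value instead of re-downloading the XML.
// The optional `now` parameter exists only so the expiry logic is testable.
function makeCachedFetcher(fetchFn, ttlMs) {
  var cache = {}; // url -> { value, expires }
  return function (url, now) {
    if (now === undefined) { now = Date.now(); }
    var hit = cache[url];
    if (hit && hit.expires > now) { return hit.value; }
    var value = fetchFn(url);
    cache[url] = { value: value, expires: now + ttlMs };
    return value;
  };
}
```

Since the speaker and schedule data barely changes during the event, even a short TTL cuts the Dropbox traffic to almost nothing.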

The code is simple, as code should be. Wanna show the schedule? And yes, it's a TABLE. It's a table of the schedule. Nyah.

@foreach(var session in schedule) {
var confTime = session.Time;
var pstZone = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");
var attendeeTime = TimeZoneInfo.ConvertTimeToUtc(confTime, pstZone);
<tr>
<td>
<p>@confTime.ToShortTimeString() (PDT)</p>
<p>@attendeeTime.ToShortTimeString() (GMT)</p>
</td>
<td>
<div class="speaker-info">
<h4>@session.Title</h4>
<br>
<span class="company-name"><a class="speaker-website" href="/speakers.cshtml?speaker=@session.Twitter">@session.Name</a></span>
<br>
<p>@session.Abstract</p>
</div>
</td>
</tr>
}

Scaling Out

We started on an extra small Azure Website and then switched to two large (and finally two medium, as large was totally overkill) web sites.

We scale up (and hence, pay) only during the conference and turn it down to Small when we're done. No need to spend money if we don't need to.

Scaling DotNetConf to Large

Updating the Site in Real-time with SignalR

Because the YouTube link changes with each Hangout, we had the problem that attendees of the conference would have to hit refresh themselves to get the new URL. There's a number of solutions to this that I'm sure you're already thinking about. We could meta refresh, refresh on a timer, but these aren't on demand. We also wanted to show a few videos during the downtime. One of us preps the next speaker while the other queues up videos to watch.

We realized this was a problem at about 10pm PST last night. Javier and I got on Skype and came up with this late night hack.

What if everyone had SignalR running while they were watching the videos? Then we could push out the next YouTube video from an admin console.

So visualize this. There's the watcher (you), there's an admin (me) and there's the server (the Hub).

The watcher has this on their main page after including the /signalr/hub JavaScript:

$(function () {
    var youtube = $.connection.youTubeHub;
    $.connection.hub.logging = true;

    // The server pushes just the YouTube short code; swap the embed to match.
    youtube.client.updateYouTube = function (message) {
        $("#youtube").attr("src", "http://www.youtube.com/embed/" + message + "?autoplay=1");
    };
    $.connection.hub.start();

    // If the connection drops, retry every 5 seconds.
    $.connection.hub.disconnected(function () {
        setTimeout(function () {
            $.connection.hub.start();
        }, 5000);
    });
});

The watcher is listening, er, watching, for a SignalR message from the server with the YouTube video short code. When we get it, we swap out the iFrame. Simple and it works.

Here's the admin console where we put in the next YouTube code (I'm using Razor in ASP.NET Web Pages in WebMatrix, so this is mixed HTML/JS):

<div id="container">
    <input type="text" id="videoId" name="videoId"><br/>
    <input type="text" id="password" name="password" placeholder="password"><br/>
    <button id="playerUpdate" name="playerUpdate">Update Player</button>
</div>

@section SignalR {
    <script>
        $(function () {
            var youtube = $.connection.youTubeHub;
            $.connection.hub.logging = true;

            $.connection.hub.start().done(function () {
                $('#playerUpdate').click(function () {
                    youtube.server.update($('#videoId').val(), $('#password').val());
                });
            });

            // Same reconnect-on-disconnect retry as the watcher page.
            $.connection.hub.disconnected(function () {
                setTimeout(function () {
                    $.connection.hub.start();
                }, 5000);
            });
        });
    </script>
}

We put in the short code, the password, and hit update. All this must be complex, eh? What does the powerful SignalR backend running in the cloud, backed by Azure and Service Bus, look like? Surely that code must be too complex to show on a simple blog, eh? Strap in, friends.

public class YouTubeHub : Microsoft.AspNet.SignalR.Hub
{
    public void update(string message, string password)
    {
        if (password.ToLowerInvariant() == "itisasecret")
        {
            Clients.All.updateYouTube(message);
            ConfContext.SetPlayerUrl(message);
        }
    }
}

This is either a purist's nightmare or a pragmatist's dream. Either way, we've been running it all day and it works. Between talks we pushed in pre-recorded talks and messages, then finally when the live talk started we pushed that one as well.

We also updated the Dropbox links with the current video stream so that new visitors showing up would get the latest video, as new site visitors wouldn't have been connected when the video "push" message went out.
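That late-arrival problem can be sketched without any framework: remember the last pushed id and render it for fresh page loads. This is a hypothetical sketch, not the site's code (the real site persisted the value via ConfContext and the Dropbox XML):

```javascript
// Hypothetical sketch: keep the last pushed video id so a brand-new visitor
// renders the current stream immediately instead of waiting for a push.
let currentVideoId = 'intro'; // placeholder id shown before any push

function pushVideo(id, clients) {
  currentVideoId = id;                       // remember for late arrivals
  clients.forEach(c => c.updateYouTube(id)); // notify connected watchers
}

function renderEmbedUrl() {
  // a fresh page load reads the stored id rather than waiting for a push
  return 'http://www.youtube.com/embed/' + currentVideoId + '?autoplay=1';
}
```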


What about scale out? We sometimes have two machines in the farm so we need the SignalR "push updated youtube video message" to travel across a scale-out backplane. That took another 10 minutes.

Scaling out with SignalR using the Azure Service Bus

We used the SignalR 1.1 Beta plus Azure Service Bus Topics for scale out and added an Azure Service Bus namespace to our account. Our app startup changed, adding this call to UseServiceBus():

string poo = "Endpoint=sb://dotnetconf-live-bus.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=g57totalsecrets=";   
GlobalHost.DependencyResolver.UseServiceBus(poo,"dotnetconf");
RouteTable.Routes.MapHubs();

Now SignalR uses the Service Bus Topics for "Pub/Sub" to pass notifications between the two web servers. I can push a new video from Web 1 and it is sent to everyone on Web 1 and Web 2 (or Web N) via SignalR's realtime persistent connection.
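Stripped of the transport, the pub/sub pattern the backplane implements looks like this (illustrative only; in the real setup Service Bus Topics do the cross-machine delivery):

```javascript
// Minimal sketch of the pub/sub idea behind a SignalR backplane: each web
// server subscribes to a shared topic, and a publish from any server
// reaches the subscribers on every server.
class Topic {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  publish(msg) { this.subscribers.forEach(fn => fn(msg)); }
}

const backplane = new Topic();
const received = [];

// two "web servers", each relaying to its own connected SignalR clients
backplane.subscribe(msg => received.push('web1:' + msg));
backplane.subscribe(msg => received.push('web2:' + msg));

backplane.publish('newVideoId'); // pushed from Web 1's admin console
```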


We'll delete this Service Bus Topic as soon as we are done. I would hate for the bill to get up into the nickels. ;) Here's example pricing from the Azure site:

432,000 Service Bus messages cost 432,000/10,000 * $0.01 = 43.2 * $0.01 ≈ $0.43 per day.
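As a quick sanity check on that arithmetic (using the 2013-era example rate quoted above, billed per 10,000 messages):

```javascript
// Sanity-check the pricing example: messages billed per 10,000 at $0.01.
function serviceBusCost(messages, perBatch = 10000, ratePerBatch = 0.01) {
  return (messages / perBatch) * ratePerBatch;
}

console.log(serviceBusCost(432000).toFixed(2)); // dollars per day
```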

I'm not sure how many messages we've done, but I can rest assured it won't cost a pile, so that's a relief.

Thank you to the Community!

  • Big thanks to designer Jin Yang who created the dotnetConf logo and design. Tweet @jzy and tell him you think he is special. Thanks to Dave Ward for converting Jzy's design into HTML!
  • Kudos and thanks to Javier Lozano for his coding, his organizing, his brainstorming and his tireless hard work. It was also cool for him to sit with me for hours last night while we hacked on SignalR and the DotNetConf.net site.
  • Thanks to David Fowler for saying "it'll just take 10 minutes to add Service Bus." 
  • Thanks to Eric Hexter and Jon Galloway for their organizational abilities and generous gifts of time on all the *conf events!
  • But mostly, thanks to the speakers who volunteered their time and presented and the community who showed up to watch, interact and play with us!

Sponsor: The Windows Azure Developer Challenge is on.  Complete 5 programming challenges for a chance at spot prizes, Stage prizes and the Grand Prize. Over $16,000 is up for grabs with 65 chances to win!

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

facebook bluesky subscribe
About   Newsletter
Hosting By
Hosted on Linux using .NET in an Azure App Service

Project-less scripted C# with ScriptCS and Roslyn

April 25, 2013 Comment on this post [31] Posted in NuGet | NuGetPOW | Open Source | VS2012
ScriptCS inside of SublimeText2 with the ScriptCS package giving SyntaxHighlighting

Glenn Block is working on something interesting that combines C#, NuGet, Roslyn (the new "compiler as a service") and his love of text editors and scripts. Now, with help from Justin Rusbatch (@jrusbatch) and Filip Wojcieszyn (@filip_woj) they are having all kinds of fun...using C# as a scripting language.

Every few years someone tries to turn C# into a competent scripting language, myself included. Often this has included batch files and MacGyver magic, file associations and hacks. Clearly the .NET community wants something like this, but we are collectively still trying to figure out what it should look like. PowerShell aficionados - and I count myself amongst them - might look at such efforts as a threat or a pale reinvention of PowerShell, but the fact remains that C# at the command line, be it as a script or a REPL, is an attractive concept.

Simply put by example, ScriptCS lets me do this:

C:\temp>copy con hello.csx
Console.WriteLine("Pants");
^Z
1 file(s) copied.

C:\temp>scriptcs hello.csx
Pants

That's Hello World. There's no namespace, no class, just some C# in a .csx file. Roslyn takes care of the compilation and the resulting code and .exe never hits the disk.

Self-hosting Web APIs

So that's interesting, but what about bootstrapping a web server using NancyFX to host a Web API?

Go and clone this repo:

git clone https://github.com/scriptcs/scriptcs-samples.git

Look in the Nancy folder. There's a packages.config. Just like a node.js application has a package.json file listing its dependencies, a .NET app usually has a packages.config that serves the same purpose. In node, you type npm install to restore those packages from the main repository. Here I'll type scriptcs -install...

C:\temp\scriptcs-samples\nancy>scriptcs -install
Installing packages...
Installed: Nancy.Hosting.Self 0.16.1.0
Installed: Nancy.Bootstrappers.Autofac 0.16.1.0
Installed: Autofac 2.6.3.862
Installation successful.

Now, running start.csx fires up an instance of Nancy listening on localhost:1234. There's no IIS, no ASP.NET.

C:\temp\scriptcs-samples\nancy>scriptcs start.csx
Found assembly reference: Autofac.Configuration.dll
Found assembly reference: Autofac.dll
Found assembly reference: Nancy.Bootstrappers.Autofac.dll
Found assembly reference: Nancy.dll
Found assembly reference: Nancy.Hosting.Self.dll
Nancy is running at http://localhost:1234/
Press any key to end

There is also the notion of a "ScriptPack" such that you can Require<T> a library and hide a lot of the bootstrapping and complexity. For example, I could start up WebAPI after installing a Web API package that includes some starter code. Note this is all from the command line. I'm using "copy con file" to get started.

C:\temp\foo>scriptcs -install ScriptCs.WebApi
Installing packages...
Installed: ScriptCs.WebApi
Installation completed successfully.
...snip...
Added ScriptCs.WebApi, Version 0.1.0, .NET 4.5
Packages.config successfully created!

C:\temp\foo>copy con start.csx
public class TestController : ApiController {
    public string Get() {
        return "Hello world!";
    }
}

var webApi = Require<WebApi>();
var server = webApi.CreateServer("http://localhost:8080");
server.OpenAsync().Wait();

Console.WriteLine("Listening...");
Console.ReadKey();
server.CloseAsync().Wait();
^Z
1 file(s) copied.

C:\temp\foo>scriptcs start.csx
Found assembly reference: Newtonsoft.Json.dll
...snip...
Listening...

Pretty slick. Add in a little Live Reload-style action and we could have a very node-ish experience, all from the command line and from within your text editor of choice, except using C#.

Note that this is all using the same CLR and .NET that you've already got, running at full speed. Only the compilation is handled differently to give this script-like feel.

Installing ScriptCS

The easiest way to install and use ScriptCS is to use Chocolatey (a system-wide NuGet-based application/component installer; "Chocolatey NuGet," get it?). And yes, it's Chocolatey spelled incorrectly with an "-ey."

You can use Chocolatey to do things like "cinst 7zip" or "cinst git" but we'll be using it just to get ScriptCS set up. It's also easily removed if it freaks you out; it installs no services and won't change anything major save your PATH.

First paste this into a cmd.exe prompt:

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%systemdrive%\chocolatey\bin

This will launch PowerShell, run https://chocolatey.org/install.ps1 and add Chocolatey to your path.

Then, run

cinst ScriptCS

Which will put ScriptCS in a path like C:\Chocolatey\lib\ScriptCs.0.0.0 while Chocolatey makes it available in your PATH.

Sublime Text or Visual Studio

You can get syntax highlighting for your CSX files inside of Sublime Text 2 with the "ScriptCS" package you can install from package control. If you're using Visual Studio you can get the Roslyn CTP to turn on CSX syntax highlighting.


You can even debug your running ScriptCS projects by opening the ScriptCS.exe as a project. (Did you know you can open an EXE as a project?) Add the .csx script to the command line via Project Properties, drag in the scripts you're working on and debug away.

Debugging requires the Roslyn SDK, although personally, I've been doing just fine with scripts at the command line which requires nothing more than the basic install and a text editor.

It's not clear where ScriptCS is going, but it'll be interesting to see! Go get involved at scriptcs.net. This kind of stuff gets me excited about the prospect of a compiler as a service, and also cements my appreciation of C# as my enabling language of choice. Between C# and JavaScript, you can really get a lot done, pretty much anywhere.

I'll have a video walkthrough on how this works as I explain it to Rob Conery up on TekPub soon! (Here's a referral coupon for 20% off of Tekpub!)

What do you think?


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.