Scott Hanselman

Infinite Scroll WebSites via AutoPagerize - Hacky, but the beginning of something cool

April 6, '10 Comments [20] Posted in ASP.NET
Sponsored By

One of the things I like most about Bing Image search is the "infinite scroll" feature. If you search for an image and start scrolling, it'll just keep going, moving the scroll bar each time and appending new images at the bottom. This concept has been called "infinite scroll," as well as endless pages, autopagerize, and so on. There's even a jQuery plugin called Infinite Scroll if you want to enable something like this on your site programmatically.

oscar winners - Bing Images - Windows Internet Explorer

However, there's also been a quiet revolution on sites, and to some extent, in browsers to make infinite scroll a standard thing. At least, a de facto standard, and you can enable it on your site with minimal effort.

The general idea is that the browser notices that you're scrolling to the end and rather than making you click, it'll fetch the next page via AJAX and append it to the page you're already on. This screenshot from the AutoPagerize for Chrome extension shows it best:

How Autopagerize works
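The core idea described above, notice the user nearing the bottom, fetch the next page, append it, can be sketched in a few lines of JavaScript. This is an illustrative sketch, not any particular plugin's code; the `nearBottom` helper and the threshold value are my own names.

```javascript
// Returns true when the bottom of the viewport is within `threshold` pixels
// of the end of the document. This is the decision logic at the heart of any
// infinite-scroll implementation.
function nearBottom(scrollTop, viewportHeight, documentHeight, threshold) {
  return scrollTop + viewportHeight >= documentHeight - threshold;
}

// In a browser, the wiring would look roughly like this (commented out here
// because it needs a real DOM):
//
// window.addEventListener("scroll", function () {
//   if (nearBottom(window.pageYOffset, window.innerHeight,
//                  document.body.scrollHeight, 400)) {
//     fetchNextPageAndAppend(); // AJAX fetch + DOM append, as described above
//   }
// });
```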

There are a few things needed, and it requires a bit of dancing on your part to make it happen.

Enabling Autopagerize as a Browser of the Web

For the longest time AutoPagerize has been a "Greasemonkey script." Greasemonkey is itself an add-on that enables other add-ons, via simple scripts, to dramatically change the behavior of your browser. I'm not a huge fan myself, as I have some security concerns. The main site that promotes this is a bit dodgy looking, but their extension for Firefox works and they mean well.

Enabling Autopagerize on Firefox

You can use Greasemonkey and the AutoPagerize userscript if you like, but I use the AutoPagerize Firefox extension.

Enabling Autopagerize on Opera

Opera supports "User JavaScript" out of the box, so you can get their oAutoPagerize script, follow some directions, and you're all set. It's a modification of the standard Greasemonkey script, and it will also work with Safari (via GreaseKit) and Chrome, although I recommend the cleaner Chrome extension.

Enabling Autopagerize on Chrome

Chrome has a Chrome Extension called, logically enough, AutoPagerize for Chrome. It has the benefit of a small colored square in the address bar that will show you if the current page is enabled for paging and the current status.

I'm still looking into a reliable way to do this on IE, but you can start with the older GreaseMonkey for IE addon.

Enabling Autopagerize as a Web Site (Blog, etc) Publisher

Here's where it gets insane. Like "horribly gross and this will never scale" insane. There are two ways. If there are children in the room who design for the web, please ask them to leave.

First, you can go to this online database of sites and add your site along with some *cough* regular expressions and XPath expressions that describe where the next page to retrieve is and what to append it to. Wow, Regular Expressions AND XPath? What, no "select * from authors"? And a centralized database. Good times.

Well, my record (and that of most DasBlog sites) looks like:

pageElement: id("blog-posts")

url: ^http://www\.hanselman\.com/

nextLink: //div[@class="previous-posts"]/a

It basically says: you can find the next link at the anchor inside the div with the class "previous-posts", and you can append the fetched content to the element with the id "blog-posts."
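The `url` field in that record is how an AutoPagerize-style script decides whether a rule applies at all: it's a regular expression matched against the current page's address. A small JavaScript sketch (the `ruleApplies` helper is hypothetical, not code from the actual extension):

```javascript
// An AutoPagerize site rule, shaped like the database record above. The
// pageElement and nextLink fields are XPath expressions; the url field is a
// regular expression used as a gate.
const dasBlogRule = {
  url: "^http://www\\.hanselman\\.com/",
  pageElement: 'id("blog-posts")',
  nextLink: '//div[@class="previous-posts"]/a',
};

// Illustrative helper: a rule only fires when its url regex matches the
// address of the page currently being viewed.
function ruleApplies(rule, currentUrl) {
  return new RegExp(rule.url).test(currentUrl);
}
```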

So this is gross.

The second option, and I'd say the more ideal one, is a microformat. I'll actually copy/paste the microformat from the Greasemonkey script itself, as it says it all:

url: '.*',
nextLink: '//a[@rel="next"] | //link[@rel="next"]',
insertBefore: '//*[contains(@class, "autopagerize_insert_before")]',
pageElement: '//*[contains(@class, "autopagerize_page_element")]',

It says: find either an anchor like <a href="..." rel="next"> or a link in the head like <link rel="next" href="...">, then retrieve that page. Take the element with class "autopagerize_page_element" from it and insert it before the element with class "autopagerize_insert_before."
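In other words, a blog template would only need markup along these lines. This is a hypothetical sketch: the two class names and the rel attribute are the actual microformat hooks, but the surrounding structure and URLs are illustrative.

```html
<!-- The content that gets extracted from each fetched page -->
<div class="autopagerize_page_element">
  <!-- this page's posts go here -->
</div>

<!-- Fetched content is inserted just before this element -->
<div class="autopagerize_insert_before">
  <!-- footer, sidebar, etc. -->
</div>

<!-- Tells the script where the next page lives -->
<a rel="next" href="/page/2">Older Posts</a>
```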

If your site/blog just adds a few classes and this rel attribute, it'll automatically be set up to support AutoPagerize. I wanted to set my site up like this, but I hit a wall in the extensibility of DasBlog, the blog engine I run. This would be a small change to DasBlog, but it would mean a new version.

Of course, no browser supports this out of the box yet. Opera does offer a similar feature called "Fast Forward" that extends spacebar scrolling (in all browsers you can just press the spacebar to scroll down a page) such that it will navigate to the next page when you hit the bottom. Per Opera's KB:

Fast Forward tries to analyze a page and looks for links that will take you to the next page, for example after a search with Google with several pages of search results. It looks for certain patterns that indicate a "next" link, or uses "<link rel="next">" if it is defined in the page.

Unfortunately Opera analyzes my page and gets it wrong, selecting, oddly enough, an image as the next page to go to. This would likely be solved if I added a <link rel="next"> to my page's head, although again, I'd have to do this dynamically.

As an aside, notice this comment from Opera on their KB...

Please note that Fast Forward does not use any external services to determine the next page. It only looks at the current page and tries to find things that indicate that there is a "next" page. It does not look it up from an external server or contact any site to get this info.

This means they, too, realize that an external service is folly and the only way for this to work going forward is via microformats. I fervently agree.

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

facebook twitter subscribe
About   Newsletter
Sponsored By
Hosting By
Dedicated Windows Server Hosting by SherWeb

Hanselminutes Podcast 208 - Social Media and the Business of Social - The Wynn Resorts

April 5, '10 Comments [1] Posted in Podcast

Wynn

My two-hundred-and-eighth podcast is up. Scott sits down with Jade Bailey, who manages social media and online services for the Wynn in Las Vegas. How does a worldwide brand use social media to serve its customers while still remaining authentic? Is Twitter a legitimate customer service choice? Does a company need a Facebook page?

Subscribe: Subscribe to my podcast in iTunes

Download: MP3 Full Show

Links from the Show

Also remember that the complete archives are always up, and they have PDF transcripts, a little-known feature that shows up a few weeks after each show.

I want to add a big thanks to Telerik. Without their support, there wouldn't be a Hanselminutes. I hope they, and you, know that. Someone's gotta pay the bandwidth. Thanks also to Carl Franklin for all his support over these last 4 years!

Telerik is our sponsor for this show.

Building quality software is never easy. It requires skills and imagination. We cannot promise to improve your skills, but when it comes to user interface and developer tools, we can provide the building blocks to take your application a step closer to your imagination. Explore the leading UI suites for ASP.NET AJAX, MVC, Silverlight, Windows Forms and WPF. Enjoy developer tools like .NET reporting, ORM, automated testing tools, TFS, and content management solutions. And now you can increase your productivity with JustCode, Telerik's new productivity tool for code analysis and refactoring.

As I've said before, this show comes to you with the audio expertise and stewardship of Carl Franklin. The name comes from Travis Illig, but the goal of the show is simple: avoid wasting the listener's time (and make the commute less boring).

Enjoy. Who knows what'll happen in the next show?


Creating an OData API for StackOverflow including XML and JSON in 30 minutes

March 28, '10 Comments [65] Posted in ASP.NET | OData | Open Source | Source Code

I emailed Jeff Atwood a one-line email last night: "You should make a StackOverflow API using OData." Then I realized that, as Linus says, "Talk is cheap. Show me the code." So I created an initial prototype of a StackOverflow API using OData on an airplane. I allocated the whole 12-hour flight. Unfortunately it took 30 minutes, so I watched movies the rest of the time.

You can follow along and do this yourself if you like.


Before I left for my flight, I downloaded two things.

First, I got Sam Saffron's "So Slow" StackOverflow SQL Server Importer. This is a little spike of Sam's that takes the 3 gigs of XML dump files from StackOverflow's monthly dump and imports them into SQL Server.

Second, I got the StackOverflow Monthly Dump. I downloaded it with uTorrent and unzipped it in preparation for the flight.

Importing into SQL Server

I went into Visual Studio 2010 (although I could have used 2008, I like the Entity Framework improvements in 2010 enough that it made this job easier). I right clicked on the Data Connections node in the Server Explorer and created a database in SQL Express called, ahem, "StackOverflow."

 Create New SQL Server Database

Next, I opened up Sam's RecreateDB.sql file from his project in Visual Studio (I avoid using SQL Server Management Studio when I can) and connected to the ".\SQLEXPRESS" instance, selected the new StackOverflow database and hit "execute."

Recreate DB SQL inside of Visual Studio

One nit about Sam's SQL file: it creates tables that line up nicely with the dump, but it includes no referential integrity. The tables don't know about each other and there's no cardinality set up. I've overwritten the brain cells in my head that used to know how to do that stuff without Google Bing, so I figured I'd deal with it later. You will too.

Next, I opened Sam's SoSlow application and ran it. Lovely little app that works as advertised with a gloriously intuitive user interface. I probably would have named the "Import" button something like "Release the Hounds!" but that's just me.

So Slow ... Stack Overflow database importer

At this point I have a lovely database of a few hundred megs filled with StackOverflow's public data.


Making a Web Project and an Entity Model

Now, from within Visual Studio I selected File | New Project | ASP.NET Web Application. Then I right clicked on the resulting project and selected Add | New Item, then clicked Data, then ADO.NET Entity Data Model.

Add New Item - StackOveflow

What's the deal with that, Hanselman? You know StackOverflow uses LINQ to SQL? Have you finally sold out and are trying to force Entity Framework on us sneakily within this cleverly disguised blog post?

No. I used EF for a few reasons. One, it's fast enough (both at runtime and at design time) in Visual Studio 2010 that I don't notice the difference anymore. Two, I knew that the lack of formal referential integrity was going to be a problem (remember I mentioned that earlier?) and since LINQ to SQL is 1:1 physical/logical and EF offers flexible mapping, I figured it'd be easier with EF. Three, "WCF Data Services" (the data services formerly known as ADO.NET Data Services or "Astoria") maps nicely to EF.

I named it StackOverflowEntities.edmx and selected "Update Model from Database" and selected all the tables just to get started. When the designer opened, I noticed there were no reference lines, just tables in islands by themselves.

The Initial Entity Model

So I was right about there being no relationships between the tables in SQL Server. If I was a smarter person, I'd have hooked up the SQL to include these relationships, but I figured I could add them here as well as a few other things that would make our OData Service more pleasant to use.

I started by looking at Posts and thinking that if I were looking at a Post in this API, I'd want to see its Comments. So, I right-clicked on Post and clicked Add | Association. The dialog took me a second to understand (I'd never seen it before), but then I realized that it was creating an English sentence at the bottom, so I just focused on getting that sentence correct.

In this case, "Post can have * (Many) instances of Comment. Use Post.Comments to access the Comment instances. Comment can have 1 (One) instance of Post. Use Comment.Post to access the Post instance." was exactly what I wanted. I also already had the foreign key properties, so I unchecked that option and clicked OK.

Add Association 

That got me here in the Designer. Note the line with the 1...* and the Comments Navigation Property on Post and the Post Navigation Property on Comment. That all came from that dialog.

Posts relate to Comments

Next, I figured since I didn't have it auto-generate the foreign key properties, I'd need to map them myself. I double clicked on the Association Line. I selected Post as the Principal and mapped its Id to the PostId property in Comments.

Referential Constraint

Having figured this out, I just did the same thing a bunch more times for the obvious stuff, as seen in this diagram where Users have Badges, and Posts have Votes, etc.

A more complete StackOverflow Entity Model with associations completed

Now, let's make a service.

Creating an OData Service

Right-click on the Project in Solution Explorer and select Add | New Item | Web | WCF Data Service. I named mine Service.svc. All you technically need to do to have a full, working OData service is put your context class in between the angle brackets (DataService<YourTypeHere>) and include one line calling config.SetEntitySetAccessRule. Here's my initial minimal class. I added the SetEntitySetPageSize after I tried to get all the posts. ;)

public class Service : DataService<StackOverflowEntities>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

        // Set a reasonable page size
        config.SetEntitySetPageSize("*", 25);

        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Expanding on this class, I added caching and an example Service Operation, as well as WCF Data Services support for JSONP. Note that the Service Operation is just an example, there to show StackOverflow that they CAN have total control. Using OData doesn't mean checking a box and putting your database on the web. It means exposing specific entities with as much or as little granularity as you like. You can intercept queries, make custom behaviors (like the JSONP one), make custom Service Operations (they can include query strings, of course), and much more. OData supports JSON natively and will return JSON when an Accept header requests it, but I added the JSONP support to allow cross-domain use of the service as well as to allow the format parameter in the URL, which many prefer as it's just easier.

namespace StackOveflow
{
    public class Service : DataService<StackOverflowEntities>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

            // This could be "*" and could also be ReadSingle, etc.
            config.SetServiceOperationAccessRule("GetPopularPosts", ServiceOperationRights.AllRead);

            // Set a reasonable page size
            config.SetEntitySetPageSize("*", 25);

            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            // Cache for a minute based on querystring
            HttpContext context = HttpContext.Current;
            HttpCachePolicy c = context.Response.Cache;
            c.VaryByHeaders["Accept"] = true;
            c.VaryByHeaders["Accept-Charset"] = true;
            c.VaryByHeaders["Accept-Encoding"] = true;
            c.VaryByParams["*"] = true;
        }

        [WebGet]
        public IQueryable<Post> GetPopularPosts()
        {
            var popularPosts = (from p in this.CurrentDataSource.Posts
                                orderby p.ViewCount descending
                                select p).Take(20);

            return popularPosts;
        }
    }
}

But what does this get us? So what?

Accessing StackOverflow's Data via OData

Well, if I hit http://mysite/service.svc I see this service. Note the relative HREFs.

Screenshot of an XML document describing an OData service endpoint

If I hit the Posts feed, I get the posts (paged, as I mentioned). Look real close in there. Notice the <link> elements before the content? Notice the relative href="Posts(23)"?

StackOverflow Posts in OData

Remember all those associations I set up before? Now I can see:

But that's just navigation. I can also do queries. Go download the LINQPad Beta for .NET 4. Peep this: click on Add Connection, and put in my little Orcsweb test server.

Disclaimer: This is a test server that Orcsweb may yank at any moment. Note also that you can sign up for your own, find a host at ASP.NET, or host your own OData in the cloud.

I put this in and hit OK.

LINQPad Connection String

Now I'm writing LINQ queries against StackOverflow over the web. No Twitter-style API, JSON or otherwise, can do this. StackOverflow data was meant for OData. The more I mess around with this, the more I realize it's true.


This LINQ query actually turns into this URL. Again, you don't need .NET for this, it's just HTTP:

Posts?$filter=substringof('SQL',Title) or substringof('<sql-server>',Tags)

Try the same thing with an Accept header of application/json, or just add $format=json:

Posts?$filter=substringof('SQL',Title) or substringof('<sql-server>',Tags)&$format=json

It'll automatically return the same data as JSON or Atom, as you like.
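Since these are just URL conventions, any language can build them without a .NET client. Here's an illustrative JavaScript helper (my own sketch, not a real OData client library) that assembles query URLs like the ones above; `buildODataUrl` and its parameter names are assumptions for the example.

```javascript
// Builds an OData query URL from a service root, an entity set name, and a
// map of system query options. Keys like "filter" and "format" become
// $filter and $format; values are URL-encoded.
function buildODataUrl(serviceRoot, entitySet, options) {
  const query = Object.entries(options || {})
    .map(([key, value]) => "$" + key + "=" + encodeURIComponent(value))
    .join("&");
  return serviceRoot + "/" + entitySet + (query ? "?" + query : "");
}
```

For example, `buildODataUrl("http://example.com/service.svc", "Posts", { filter: "substringof('SQL',Title)", format: "json" })` produces a URL of the same shape as the query above (example.com stands in for wherever the service is hosted).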

If you've got Visual Studio, just go bust out a Console App real quick. File | New Console App, then right-click References and hit Add Service Reference. Put in the service URL and hit OK.

Add Service Reference

Try something like this. I put the URIs in comments to show you there's no trickery.

class Program
{
    static void Main(string[] args)
    {
        StackOverflowEntities so = new StackOverflowEntities(new Uri(""));

        var user = from u in so.Users
                   where u.DisplayName.Contains("Hanselman")
                   select u;

        //{$filter=OwnerUserId eq 209}
        var posts = from p in so.Posts
                    where p.OwnerUserId == user.Single().Id
                    select p;

        foreach (Post p in posts)
        {
            // Do something with each post, e.g. dump its title
            Console.WriteLine(p.Title);
        }
    }
}

I could keep going with examples in PHP, JavaScript, etc, but you get the point.


StackOverflow has always been incredibly open and generous with their data. I propose that an OData endpoint would give us much more flexible access to their data than a custom XML and/or JSON API that they'd need to constantly rev.

With a proprietary API, folks will rush to create StackOverflow clients in many languages, but that work is already done with OData, including libraries for iPhone, PHP and Java. There's a growing list of OData SDKs that could all be used to talk to a service like this. I could also load it into Excel using PowerPivot if I like.

Also, this service could totally be extended beyond this simple GET example. You can do complete CRUD with OData, and it's not tied to .NET in any way. TweetDeck for StackOverflow, perhaps?

I propose we encourage StackOverflow to put more than the 30 minutes that I have put into it and make a proper OData service for their data, rather than a custom API. I volunteer to help. If not, we can do it ourselves with their dump data (perhaps weekly if they can step it up?) and a cloud instance.



FavIcons, Internet Zones and Projects from a Trustworthy Source

March 28, '10 Comments [5] Posted in Source Code | Tools | VS2010

You ever download some code or a Visual Studio project from the web then start getting warned that the download might be evil?

When you open a project file that was downloaded from the Internet Zone, you'll get a dialog like this from Visual Studio:

Security Warning - You should only open projects from trustworthy source

Unblock downloaded zip file

If you're wondering "how does it know?": when your browser downloads something from the net, the zone it came from gets marked in an Alternate Data Stream within the NTFS file system.

If you use the Streams utility from Sysinternals at the command line, you can see the streams on a file directly. For example:


Streams v1.56 - Enumerate alternate NTFS data streams
Copyright (C) 1999-2007 Mark Russinovich
Sysinternals -

   :Zone.Identifier:$DATA       24

Even easier, you can right-click on the file and select properties. Notice that "Unblock" button at the bottom of the dialog. Click that, and you're declaring that you trust this file and by removing the alternate stream data you'll not get warned again.

It's not just projects that get these zone IDs; anything downloaded does, including videos, etc. These streams and zone data persist even when you move or rename files, so the system knows where things came from.
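For the curious, the Zone.Identifier stream itself is just a tiny INI-style blob of text, so reading it is trivial. A hypothetical JavaScript sketch follows; the parser and the zone-name table are my own illustration, not a Windows API.

```javascript
// A Zone.Identifier alternate data stream contains text like:
//
//   [ZoneTransfer]
//   ZoneId=3
//
// The ZoneId maps to the standard Internet Explorer security zones.
const ZONE_NAMES = {
  0: "Local machine",
  1: "Local intranet",
  2: "Trusted sites",
  3: "Internet",
  4: "Restricted sites",
};

// Pulls the numeric zone id out of the stream text, or returns null if the
// text doesn't contain one.
function parseZoneId(streamText) {
  const match = /^ZoneId=(\d+)/m.exec(streamText);
  return match ? Number(match[1]) : null;
}
```

So a file downloaded from the web carries `ZoneId=3` (Internet), which is exactly what Visual Studio and Explorer are reacting to, and what the "Unblock" button deletes.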

Some other programs put stuff in Alternate Data Streams, like older versions of eTrust Antivirus. But after downloaded files, the next most common use of Alternate Data Streams is to store the favicon.ico of a .URL file on your computer. You know those tiny icons you see in the address bar as you surf around? Well, if you make a shortcut to a site on your desktop, that .URL file will also include a small version of the favicon so you'll still be able to see it when you're offline.

Favicons in alternate data streams

And here's some of the same files using Streams.exe from the command line:

A Whiteboard and Audio Meeting Capture System.url:
         :favicon:$DATA 894
alastairs's buildprogress at master - GitHub.url:
         :favicon:$DATA 1150
Introducing Versatile DataSources - Peter and the case of the ASP.NET developer.url:
         :favicon:$DATA 1406
johnvpetersen's Nerd-Dinner-with-Fluent-NHibernate at master - GitHub.url:
         :favicon:$DATA 1150
jQuery Templates Proposal - jquery - GitHub.url:
         :favicon:$DATA 1150
metalscroll - Project Hosting on Google Code.url:
         :favicon:$DATA 1150

As with all things internet, of course, be aware and be smart when you download. Don't just Unblock stuff willy-nilly like me. :)


.NET 4 Web Application Startup Time

March 28, '10 Comments [15] Posted in ASP.NET | VS2010

I was chatting with Jonathan Hawkins and some of the folks on the ASP.NET team about performance, and Jonathan mentioned that startup time for large ASP.NET applications is improved in .NET 4. There are some improvements in the CLR and in ASP.NET itself that helped. If you have a giant app, you should do some tests.

The word from the ASP.NET team is that you'll see improvements when the .NET framework (CLR) is warm in memory but the Web App is cold on disk. This is for shared hosting scenarios where the web server is loading and unloading web applications while .NET remains in memory.

[Table omitted in this copy: cold-startup times for several large web apps, expressed as the change as a percentage of NETFX 3.5, plus the median improvement. Environment: NETFX v4.0, Win7-x86, 7200 RPM disk, 4GB RAM, 2 cores.]

If you're interested in one of the reasons, there's a switch in C:\Windows\Microsoft.NET\Framework\v4.0.xxxx\Aspnet.config called shadowCopyVerifyByTimestamp that ASP.NET uses when starting up the CLR. In .NET 4 the CLR optimized how shadow-copied assemblies are loaded by removing an unnecessary file copy when nothing has changed. Hence part of the improvement in cold web app startup.

What's the biggest ASP.NET application that you've got?


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.