Scott Hanselman

Enabling dynamic compression (gzip, deflate) for WCF Data Feeds, OData and other custom services in IIS7

March 30, '11 Comments [10] Posted in ASP.NET | IIS | OData

I'm working on a thing that uses an HttpWebRequest to talk to a backend WCF Data Service and it'd be ideal if the traffic was using HTTP Compression (gzip, deflate, etc).

On the client side, it's easy to just add code like this:

request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

or, more manually:

var request = (HttpWebRequest)WebRequest.Create("http://foofoo");
request.Accept = "application/json"; // Accept is a restricted header; use the property, not Headers[]
request.Headers["Accept-Encoding"] = "gzip, deflate";
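Putting that together, here's a complete little client sketch (the URL is a placeholder, and I'm assuming the service returns JSON):

using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://foofoo/service.svc/Posts");
        request.Accept = "application/json";

        // AutomaticDecompression sends the Accept-Encoding header for you
        // and transparently un-gzips the response stream.
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}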

However, you need to make sure compression is installed and turned on in IIS7 on your server.

Launch your IIS Manager and go to the Compression module.

Compression Button in IIS Manager

There are check boxes, but if it's not installed you may see this yellow alert on the right side.

Compression Alert in IIS Manager

If it's not installed, go to the Server Manager, Roles, Web Server. Under Role Services, check your installed Roles. If Dynamic Compression isn't installed, click Add Roles and install it.

The Dynamic Compression module in IIS manager is installed

You can go back to Compression for your site and ensure Dynamic Compression is checked. At this point, Dynamic Compression should be set up, but you really need to be specific about which mimeTypes will be compressed.

Back in IIS Manager, go to the page for the SERVER, not the SITE. Click on Configuration Editor:

The Configuration Editor in IIS Manager

From the dropdown, select system.webServer/httpCompression:

Selecting the httpCompression node in the Configuration Editor in IIS Manager

Then click on Dynamic Types, and now that you're in the list editor, think about which types you want compressed. By default */* is False, but you could just turn that on. I chose to be a little more picky and added application/atom+xml, application/json, and application/atom+xml;charset=utf-8, as seen below. It's a little gotcha that application/atom+xml and application/atom+xml;charset=utf-8 are separate entries. Feel free to add whatever mimeTypes you like in here.

Adding MimeTypes graphically in IIS Manager

After you've added them and closed the dialog, be sure to click Apply and Restart your IIS Service to load the new module.

GUIs suck! Command Lines Rule!

If you find all this clicking and screenshots offensive, then do it all from the command line using AppCmd for IIS7. That's lovely also.

appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/atom%u002bxml',enabled='True']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/atom%u002bxml;charset=utf-8',enabled='True']" /commit:apphost
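If you want to double-check what got written without opening the file, you can also read the section back; this just dumps the effective configuration:

appcmd.exe list config -section:system.webServer/httpCompression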

Do check your applicationHost.config after running these commands. See how we had to escape the "+" in "atom+xml" above and made it "atom%u002bxml"? Make sure that + got into your applicationHost.config unescaped. It should look like this in the system.webServer/httpCompression section:

<dynamicTypes>
    ...
    <add mimeType="application/json" enabled="true" />
    <add mimeType="application/atom+xml" enabled="true" />
    <add mimeType="application/atom+xml;charset=utf-8" enabled="true" />
</dynamicTypes>

Now, use Fiddler to confirm that compression is turned on by sending a request with the header "Accept-Encoding: gzip, deflate" included. Make sure that Fiddler's "AutoDecode" and Transforms are turned off if you really want to be sure you're looking at the raw stuff.
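If you'd rather confirm it from code than from Fiddler, a quick sketch like this works too (the URL is a placeholder). Note that I set Accept-Encoding manually and skip AutomaticDecompression so the Content-Encoding header comes through untouched:

var request = (HttpWebRequest)WebRequest.Create("http://localhost/service.svc/Posts");
request.Accept = "application/json";
request.Headers["Accept-Encoding"] = "gzip, deflate"; // advertise compression support ourselves

using (var response = (HttpWebResponse)request.GetResponse())
{
    // "gzip" or "deflate" here means dynamic compression is working.
    Console.WriteLine("Content-Encoding: " + response.ContentEncoding);
}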

Confirming in Fiddler that gzip compression is turned on

Turning on Compression is a VERY low effort and VERY high reward thing to do on your servers, presuming they aren't already totally CPU-bound. If you're doing anything with phones or services over low-bandwidth 3G or EDGE networks, it's a total no-brainer. Make sure you know what's compressed on your systems and what's not, and if not, why not.

Be explicit and know what your systems' and sites' HTTP headers are doing. Compression is step 0 in service optimization. I think I mentioned this in 2004. :)

Enjoy!


The Weekly Source Code 56 - Visual Studio 2010 and .NET Framework 4 Training Kit - Code Contracts, Parallel Framework and COM Interop

August 12, '10 Comments [11] Posted in ASP.NET | ASP.NET Ajax | ASP.NET Dynamic Data | ASP.NET MVC | BCL | Learning .NET | LINQ | OData | Open Source | Programming | Source Code | VB | Web Services | Win7 | Windows Client | WPF

Do you like a big pile of source code? Well, there is an imperial buttload of source in the Visual Studio 2010 and .NET Framework 4 Training Kit. It's actually a 178 meg download, which is insane. Perhaps start your download now and get it in the morning when you get up. It's extremely well put together and I say Kudos to the folks that did it. They are better people than I.

I like to explore it while watching TV, and found myself looking through it tonight. I checked my blog, and while I thought I'd shared this with you before, Dear Reader, I hadn't. My bad, because it's pure gold. With C# and VB, natch.

Here's an outline of what's inside. I've heard of folks setting up lunch-time study groups and going through each section.

C# 4
Visual Basic 10
F#
Parallel Extensions
Windows Communication Foundation
Windows Workflow
Windows Presentation Foundation
ASP.NET 4
Windows 7
Entity Framework
ADO.NET Data Services (OData)
Managed Extensibility Framework
Visual Studio Team System
RIA Services
Office Development

I love using this kit in my talks, and used it a lot in my Lap Around .NET 4 talk.

There are Labs, Presentations, Demos, and links to online Videos. It'll walk you step by step through loads of content and is a great starter if you're getting into what's new in .NET 4.

Here's a few of my favorite bits, and they aren't the parts you hear the marketing folks gabbing about.

Code Contracts

Remember the old coding adage to "Assert Your Expectations"? Well, sometimes Debug.Assert is either inappropriate or cumbersome, and what you really need is a method contract. Methods have names and parameters, and those are contracts. Now they can have conditions like "don't even bother calling this method unless userId is greater than or equal to 0, and make sure the result isn't null!"

Code Contracts continues to be revised, with a new version out just last month for both 2008 and 2010. The core types that you need are included in mscorlib with .NET 4.0, but you do need to download the tools to see them inside Visual Studio. If you have VS Pro, you'll get runtime checking, and VS Ultimate gets that plus static checking. If I have static checking and the tools installed, I'll see a nice new tab in Project Properties:

Code Contracts Properties Tab in Visual Studio

I can even get Blue Squigglies for Contract Violations as seen below.

A blue squigglie showing that a contract isn't satisfied

As a nice coincidence, you can go and download Chapter 15 of Jon Skeet's C# in Depth for free which happens to be on Code Contracts.

Here's a basic idea of what it looks like. If you have static analysis, you'll get squiggles on the lines I've highlighted as they are points where the Contract isn't being fulfilled. Otherwise you'll get a runtime ContractException. Code Contracts are a great tool when used in conjunction with Test Driven Development.

using System;
using System.Collections.Generic;
using System.Text;
using System.Diagnostics.Contracts;

namespace ContractsDemo
{
    [ContractVerification(true)]
    class Program
    {
        static void Main(string[] args)
        {
            var password = GetPassword(-1);
            Console.WriteLine(password.Length);
            Console.ReadKey();
        }

        #region Header
        /// <param name="userId">Should be greater than 0</param>
        /// <returns>non-null string</returns>
        #endregion
        static string GetPassword(int userId)
        {
            Contract.Requires(userId >= 0, "userId must be >= 0");
            Contract.Ensures(Contract.Result<string>() != null);

            if (userId == 0)
            {
                // Made some code to log behavior

                // User doesn't exist
                return null;
            }
            else if (userId > 0)
            {
                return "Password";
            }

            return null;
        }
    }
}

COM Interop sucks WAY less in .NET 4

I did a lot of COM Interop back in the day and it sucked. It wasn't fun and you always felt when you were leaving managed code and entering COM. You'd have to use Primary Interop Assemblies or PIAs and they were, well, PIAs. I talked about this a little bit last year in Beta 1, but it changed and got simpler in .NET 4 release.

Here's a nice little sample I use from the kit that gets the Processes on your system and then makes a list with LINQ of the big ones, makes a chart in Excel, then pastes the chart into Word.

If you've used Office Automation from managed code before, notice that you can say Range[] now, and not get_Range(). You can call COM methods like ChartWizard with named parameters, and without including Type.Missing fifteen times. As an aside, notice also the default parameter value on the method.

static void GenerateChart(bool copyToWord = false)
{
    var excel = new Excel.Application();
    excel.Visible = true;
    excel.Workbooks.Add();

    excel.Range["A1"].Value2 = "Process Name";
    excel.Range["B1"].Value2 = "Memory Usage";

    var processes = Process.GetProcesses()
        .OrderByDescending(p => p.WorkingSet64)
        .Take(10);
    int i = 2;
    foreach (var p in processes)
    {
        excel.Range["A" + i].Value2 = p.ProcessName;
        excel.Range["B" + i].Value2 = p.WorkingSet64;
        i++;
    }

    Excel.Range range = excel.Range["A1"];
    Excel.Chart chart = (Excel.Chart)excel.ActiveWorkbook.Charts.Add(
        After: excel.ActiveSheet);

    chart.ChartWizard(Source: range.CurrentRegion,
        Title: "Memory Usage in " + Environment.MachineName);

    chart.ChartStyle = 45;
    chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
        Excel.XlCopyPictureFormat.xlBitmap,
        Excel.XlPictureAppearance.xlScreen);

    if (copyToWord)
    {
        var word = new Word.Application();
        word.Visible = true;
        word.Documents.Add();

        word.Selection.Paste();
    }
}

You can also embed your PIAs in your assemblies rather than carrying them around and the runtime will use Type Equivalence to figure out that your embedded types are the same types it needs and it'll just work. One less thing to deploy.

Parallel Extensions

The #1 reason, IMHO, to look at .NET 4 is the parallelism. I say this not as a Microsoft Shill, but rather as a dude who owns a 6-core (12 with hyper-threading) processor. My most favorite app in the Training Kit is ContosoAutomotive. It's a little WPF app that loads a few hundred thousand cars into a grid. There's an interface, ICarQuery, that a bunch of plugins implement, and the app foreach's over the CarQueries.

This snippet here uses the new System.Threading.Tasks stuff and makes a background task. That's all one statement there, from StartNew() all the way to the bottom. It says, "do this chunk in the background," and it's a wonderfully natural and fluent interface. It also keeps your UI thread painting so your app doesn't freeze up with that "curtain of not responding" that one sees all the time.

private void RunQueries()
{
    this.DisableSearch();
    Task.Factory.StartNew(() =>
    {
        this.BeginTiming();
        foreach (var query in this.CarQueries)
        {
            if (this.searchOperation.Token.IsCancellationRequested)
            {
                return;
            }

            query.Run(this.cars, true);
        }
        this.EndSequentialTiming();
    }, this.searchOperation.Token).ContinueWith(_ => this.EnableSearch());
}

StartNew() also takes a cancellation token that we check, in case someone clicked Cancel midway through, and there's a ContinueWith at the end that re-enables the Search button.
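The kit wires that token up elsewhere; here's a minimal sketch of what searchOperation might look like (the handler names are my assumption, not necessarily the kit's actual code):

private CancellationTokenSource searchOperation;

private void Search_Click(object sender, RoutedEventArgs e)
{
    searchOperation = new CancellationTokenSource();
    this.RunQueries();
}

private void Cancel_Click(object sender, RoutedEventArgs e)
{
    // Signals searchOperation.Token; the loop above sees
    // IsCancellationRequested and bails out early.
    searchOperation.Cancel();
}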

Here's my system with the queries running. This is all in memory, generating and querying random cars.

12% CPU across 12 processors, single threaded

And the app says it took 2.3 seconds. OK, what if I do this in parallel, using all the processors?

2.389 seconds serially

Here's the changed code. Now we have a Parallel.ForEach instead. Mostly looks the same.

private void RunQueriesInParallel()
{
    this.DisableSearch();
    Task.Factory.StartNew(() =>
    {
        try
        {
            this.BeginTiming();
            var options = new ParallelOptions() { CancellationToken = this.searchOperation.Token };
            Parallel.ForEach(this.CarQueries, options, (query) =>
            {
                query.Run(this.cars, true);
            });
            this.EndParallelTiming();
        }
        catch (OperationCanceledException) { /* Do nothing as we cancelled it */ }
    }, this.searchOperation.Token).ContinueWith(_ => this.EnableSearch());
}

This code says "go do this in a background thread, and while you're there, parallelize this as you like." This loop is "embarrassingly parallel." It's a big for loop over 2 million cars in memory. No reason it can't be broken apart and made faster.

Here's the deal, though. It was SO fast that Task Manager didn't update quickly enough to show the work. The work was too easy. You can see it used more CPU and that there was a spike of load across 10 of the 12, but the work wasn't enough to peg the processors.

19% load across 12 processors 

Did it even make a difference? Seems it was 5x faster, going from 2.389s to 0.4699 seconds. That's embarrassingly parallel. The team likes to call it "delightfully parallel"; I prefer "you're-an-idiot-for-not-doing-this-in-parallel parallel," but that was rejected.

0.4699 seconds when run in parallel. A 5x speedup.

Let's try something harder. How about a large analysis of Baby Names: how many Roberts were born in the state of Washington over a 40-year period, from a 500MB database?

Here's the normal single-threaded foreach version in Task Manager:

One processor chilling.

Here's the parallel version using 96% CPU.

6 processes working hard!

And here's the timing. Looks like the difference between 20 seconds and under 4 seconds.

PLINQ Demo

You can try this yourself. Notice the processor slider bar there at the bottom.

ProcessorsToUse.Minimum = 1;
ProcessorsToUse.Maximum = Environment.ProcessorCount;
ProcessorsToUse.Value = Environment.ProcessorCount; // Use all processors.

This sample uses "Parallel LINQ" and here are the two queries. Notice the WithDegreeOfParallelism call.

seqQuery = from n in names
           where n.Name.Equals(queryInfo.Name, StringComparison.InvariantCultureIgnoreCase) &&
                 n.State == queryInfo.State &&
                 n.Year >= yearStart && n.Year <= yearEnd
           orderby n.Year ascending
           select n;

parQuery = from n in names.AsParallel().WithDegreeOfParallelism(ProcessorsToUse.Value)
           where n.Name.Equals(queryInfo.Name, StringComparison.InvariantCultureIgnoreCase) &&
                 n.State == queryInfo.State &&
                 n.Year >= yearStart && n.Year <= yearEnd
           orderby n.Year ascending
           select n;
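Remember that both queries are lazy; neither one runs until something enumerates it. A rough way to time one, reusing the names above (just a sketch):

var sw = Stopwatch.StartNew();
var matches = parQuery.ToList(); // enumeration is what actually executes the query
sw.Stop();
Console.WriteLine("{0} matches in {1:0.0000} seconds", matches.Count, sw.Elapsed.TotalSeconds);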

The .NET 4 Training Kit has Extensibility demos, and Office Demos and SharePoint Demos and Data Access Demos and on and on. It's great fun and it's a classroom in a box. I encourage you to go download it and use it as a teaching tool at your company or school. You could do brown bags, study groups, presentations (there's lots of PPTs), labs and more.

Hope you enjoy it as much as I do.


OData Basics - At the AZGroups "Day of .NET" with ScottGu

May 25, '10 Comments [14] Posted in OData | Speaking

Recently I had the pleasure to speak at the 7th Annual AZGroups.org event in Phoenix, colloquially known as the "Day of ScottGu." Scott talked for about 4 hours or so, then Jeffrey Palermo, then myself. Tough acts to follow! You can view ScottGu's and Jeffrey's talks at http://azgroups.nextslide.com, and mine is here via direct link, and also embedded below.

I spoke on OData and it was a great crowd. We had a blast. I'd encourage you to check out the talks, as there's lots of good information and demos. Thank you to Scott Cate for putting the whole thing together, and be sure to check out Scott Cate's VS Trips and Tricks videos, as he does tiny screencast versions of Sara Ford's VS tips. Is three Scotts enough for you?

Enjoy!


Creating an OData API for StackOverflow including XML and JSON in 30 minutes

March 28, '10 Comments [65] Posted in ASP.NET | OData | Open Source | Source Code

I emailed Jeff Atwood last night a one-line email: "You should make a StackOverflow API using OData." Then I realized that, as Linus says, Talk is Cheap, Show Me the Code. So I created an initial prototype of a StackOverflow API using OData on an airplane. I allocated the whole 12-hour flight. Unfortunately it took 30 minutes, so I watched movies the rest of the time.

You can follow along and do this yourself if you like.

Preparation

Before I left for my flight, I downloaded two things.

First, I got Sam Saffron's "So Slow" StackOverflow SQL Server Importer. This is a little spike of Sam's that takes the 3 gigs of XML dump files from StackOverflow's monthly dump and imports them into SQL Server.

Second, I got the StackOverflow Monthly Dump. I downloaded it with uTorrent and unzipped it in preparation for the flight.

Importing into SQL Server

I went into Visual Studio 2010 (although I could have used 2008, I like the Entity Framework improvements in 2010 enough that it made this job easier). I right clicked on the Data Connections node in the Server Explorer and created a database in SQL Express called, ahem, "StackOverflow."

 Create New SQL Server Database

Next, I opened up Sam's RecreateDB.sql file from his project in Visual Studio (I avoid using SQL Server Management Studio when I can) and connected to the ".\SQLEXPRESS" instance, selected the new StackOverflow database and hit "execute."

Recreate DB SQL inside of Visual Studio

One nit about Sam's SQL file: it creates tables that line up nicely with the dump, but it includes no referential integrity. The tables don't know about each other and there's no cardinality set up. I've overwritten the brain cells in my head that know how to do that stuff without Google Bing, so I figured I'd deal with it later. You will too.

Next, I opened Sam's SoSlow application and ran it. Lovely little app that works as advertised with a gloriously intuitive user interface. I probably would have named the "Import" button something like "Release the Hounds!" but that's just me.

So Slow ... Stack Overflow database importer

At this point I have a lovely database of a few hundred megs filled with StackOverflow's public data.


Making a Web Project and an Entity Model

Now, from within Visual Studio I selected File | New Project | ASP.NET Web Application. Then I right clicked on the resulting project and selected Add | New Item, then clicked Data, then ADO.NET Entity Data Model.

Add New Item - StackOveflow

What's the deal with that, Hanselman? You know StackOverflow uses LINQ to SQL? Have you finally sold out and are trying to force Entity Framework on us sneakily within this cleverly disguised blog post?

No. I used EF for a few reasons. One, it's fast enough (both at runtime and at design time) in Visual Studio 2010 that I don't notice the difference anymore. Two, I knew that the lack of formal referential integrity was going to be a problem (remember I mentioned that earlier?) and since LINQ to SQL is 1:1 physical/logical and EF offers flexible mapping, I figured it'd be easier with EF. Thirdly, "WCF Data Services" (the data services formerly known as ADO.NET Data Services or "Astoria") maps nicely to EF.

I named it StackOverflowEntities.edmx and selected "Update Model from Database" and selected all the tables just to get started. When the designer opened, I noticed there were no reference lines, just tables in islands by themselves.

The Initial Entity Model

So I was right about there being no relationships between the tables in SQL Server. If I was a smarter person, I'd have hooked up the SQL to include these relationships, but I figured I could add them here as well as a few other things that would make our OData Service more pleasant to use.

I started by looking at Posts and thinking that if I was looking at a Post in this API, I'd want to see Comments. So, I right-clicked on a Post and clicked Add | Association. The dialog took me a second to understand (I'd never seen it before), but then I realized that it was creating an English sentence at the bottom, so I just focused on getting that sentence correct.

In this case, "Post can have * (Many) instances of Comment. Use Post.Comments to access the Comment instances. Comment can have 1 (One) instance of Post. Use Comment.Post to access the Post instance." was exactly what I wanted. I also already had the foreign key properties, so I unchecked that box and clicked OK.

Add Association 

That got me here in the Designer. Note the line with the 1...* and the Comments Navigation Property on Post and the Post Navigation Property on Comment. That all came from that dialog.

Posts relate to Comments

Next, I figured since I didn't have it auto-generate the foreign key properties, I'd need to map them myself. I double clicked on the Association Line. I selected Post as the Principal and mapped its Id to the PostId property in Comments.

Referential Constraint

Having figured this out, I just did the same thing a bunch more times for the obvious stuff, as seen in this diagram where Users have Badges, and Posts have Votes, etc.

A more complete StackOverflow Entity Model with associations completed

Now, let's make a service.

Creating an OData Service

Right-click on the Project in Solution Explorer and select Add | New Item | Web | WCF Data Service. I named mine Service.svc. All you technically need to do to have a full, working OData service is add your entities class in between the angle brackets (DataService<YourTypeHere>) and include one line for config.SetEntitySetAccessRule. Here's my initial minimal class. I added the SetEntitySetPageSize after I tried to get all the posts. ;)

public class Service : DataService<StackOverflowEntities>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

        // Set a reasonable paging size
        config.SetEntitySetPageSize("*", 25);

        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Expanding on this class, I added caching and an example Service Operation, as well as WCF Data Services support for JSONP. Note that the Service Operation is just an example there to show StackOverflow that they CAN have total control. Using OData doesn't mean checking a box and putting your database on the web. It means exposing specific entities with as much or as little granularity as you like. You can intercept queries, make custom behaviors (like the JSONP one), make custom Service Operations (they can include query strings, of course), and much more. OData supports JSON natively and will return JSON when an Accept: header is set, but I added the JSONP support to allow cross-domain use of the service, as well as to allow the format parameter in the URL, which is preferred by many as it's just easier.

namespace StackOveflow
{
    [JSONPSupportBehavior]
    public class Service : DataService<StackOverflowEntities>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

            // This could be "*" and could also be ReadSingle, etc, etc.
            config.SetServiceOperationAccessRule("GetPopularPosts", ServiceOperationRights.AllRead);

            // Set a reasonable paging size
            config.SetEntitySetPageSize("*", 25);

            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            base.OnStartProcessingRequest(args);
            // Cache for a minute based on querystring
            HttpCachePolicy c = HttpContext.Current.Response.Cache;
            c.SetCacheability(HttpCacheability.ServerAndPrivate);
            c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60));
            c.VaryByHeaders["Accept"] = true;
            c.VaryByHeaders["Accept-Charset"] = true;
            c.VaryByHeaders["Accept-Encoding"] = true;
            c.VaryByParams["*"] = true;
        }

        [WebGet]
        public IQueryable<Post> GetPopularPosts()
        {
            var popularPosts = (from p in this.CurrentDataSource.Posts
                                orderby p.ViewCount descending
                                select p).Take(20);

            return popularPosts;
        }
    }
}
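Since I mentioned intercepting queries above, here's roughly what that looks like: a minimal sketch, assuming the Posts entity has a ClosedDate property, that hides closed posts from every query against that set. It goes inside the Service class:

// Runs for every request against Posts; the returned predicate
// is ANDed into whatever query the caller sent.
[QueryInterceptor("Posts")]
public Expression<Func<Post, bool>> OnQueryPosts()
{
    return p => p.ClosedDate == null;
}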

But what does this get us? So what?

Accessing StackOverflow's Data via OData

Well, if I hit http://mysite/service.svc I see this service. Note the relative HREFs.

Screenshot of an XML document describing an OData service endpoint

If I hit http://173.46.159.103/service.svc/Posts I get the posts (paged, as I mentioned). Look real close in there. Notice the <link> stuff before the content? Notice the relative href="Posts(23)"?

StackOverflow Posts in OData

Remember all those associations I set up before? Now I can walk them right in the URL, with addresses like Posts(23)/Comments or Users(1)/Badges.

But that's just navigation. I can also do queries. Go download LINQPad Beta for .NET 4. Peep this. Click on Add Connection, and put in my little Orcsweb test server.

Disclaimer: This is a test server that Orcsweb may yank at any moment. Note also, that you can sign up for your own at http://www.vs2010host.com or find a host at ASP.NET or host your own OData in the cloud.

I put this in and hit OK.

LINQPad Connection String

Now I'm writing LINQ queries against StackOverflow over the web. No Twitter-style API, JSON or otherwise, can do this. StackOverflow data was meant for OData. The more I mess around with this, the more I realize it's true.

LINQPad 4

This LINQ query actually turns into this URL. Again, you don't need .NET for this, it's just HTTP:

http://173.46.159.103/service.svc/Posts()?$filter=substringof('SQL',Title) or substringof('<sql-server>',Tags)

Try the same thing with an Accept header of application/json, or just add $format=json:

http://173.46.159.103/service.svc/Posts()?$filter=substringof('SQL',Title) or substringof('<sql-server>',Tags)&$format=json

It'll automatically return the same data as JSON or Atom, as you like.
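From C#, the same content negotiation looks something like this; a little sketch using the raw HTTP API (the test server may be long gone by the time you try it):

var request = (HttpWebRequest)WebRequest.Create(
    "http://173.46.159.103/service.svc/Posts()?$top=5");
request.Accept = "application/json"; // or tack $format=json onto the URL instead

using (var response = request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    Console.WriteLine(reader.ReadToEnd()); // JSON, not Atom, thanks to the Accept header
}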

If you've got Visual Studio, just go bust out a Console App real quick. File | New Console App, then right-click in references and hit Add Service Reference. Put in http://173.46.159.103/service.svc and hit OK.

Add Service Reference

Try something like this. I put the URIs in comments to show you there's no trickery.

class Program
{
    static void Main(string[] args)
    {
        StackOverflowEntities so = new StackOverflowEntities(new Uri("http://173.46.159.103/service.svc"));

        // {http://173.46.159.103/service.svc/Users()?$filter=substringof('Hanselman',DisplayName)}
        var user = from u in so.Users
                   where u.DisplayName.Contains("Hanselman")
                   select u;

        // {http://173.46.159.103/service.svc/Posts()?$filter=OwnerUserId eq 209}
        var posts = from p in so.Posts
                    where p.OwnerUserId == user.Single().Id
                    select p;

        foreach (Post p in posts)
        {
            Console.WriteLine(p.Body);
        }

        Console.ReadLine();
    }
}

I could keep going with examples in PHP, JavaScript, etc, but you get the point.

Conclusion

StackOverflow has always been incredibly open and generous with their data. I propose that an OData endpoint would give us much more flexible access to their data than a custom XML and/or JSON API that they'll need to be constantly rev'ing.

With a proprietary API, folks will rush to create StackOverflow clients in many languages, but that work is already done with OData including libraries for iPhone, PHP and Java. There's a growing list of OData SDKs that could all be used to talk to a service like this. I could load it into Excel using PowerPivot if I like as well.

Also, this service could totally be extended beyond this simple GET example. You can do complete CRUD with OData, and it's not tied to .NET in any way. TweetDeck for StackOverflow, perhaps?

I propose we encourage StackOverflow to put more than the 30 minutes that I have put into it and make a proper OData service for their data, rather than a custom API. I volunteer to help. If not, we can do it ourselves with their dump data (perhaps weekly if they can step it up?) and a cloud instance.

Thoughts?


Mix 10 Rollup Post

March 17, '10 Comments [21] Posted in ASP.NET MVC | Mix | OData | Silverlight | WinPhone

Piles of interesting stuff going on at Mix 10 this week. Here's a link rollup with all the details and downloads that you might care about. (As well as a few blatant plugs for my own sessions.)

First: Watch Day 1 and Day 2 keynotes on demand. See recorded sessions here.

Personally, I presented:

Enjoy! I'll update this post as videos become available. I've still got a panel and a talk left!

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.