Scott Hanselman

Faster Builds with MSBuild using Parallel Builds and Multicore CPUs

April 24, '08 Comments [38] Posted in ASP.NET | Programming | Tools

UPDATE: I've written a follow-up post on how to get MSBuild building using multiple cores from within Visual Studio. You might check that out when you're done here.

Jeff asked the question "Should All Developers Have Manycore CPUs?" this week. There are a number of things in his post I disagree with.

First, "dual-core CPUs protect you from badly written software" is just a specious statement as it's the OS's job to protect you, it shouldn't matter how many cores there are.

Second, "In my opinion, quad-core CPUs are still a waste of electricity unless you're putting them in a server" is also silly. Considering that modern CPUs slow down when not being used, and use minimal electricity when compared to your desk lamp and monitors, I can't see not buying the best (and most) processors) that I can afford. The same goes with memory. Buy as much as you can comfortably afford. No one ever regretted having more memory, a faster CPU and a large hard drive.

Third he says,"there are only a handful of applications that can truly benefit from more than 2 CPU cores" but of course, if you're running a handful of applications, you can benefit even if they are not multi-threaded. Just yesterday I was rendering a DVD, watching Arrested Development, compiling an application, reading email while my system was being backed up by Home Server. This isn't an unreasonable amount of multitasking, IMHO, and this is why I have a quad-proc machine.

That said, how much a machine can multitask is often limited by the bottleneck that sits between the chair and the keyboard. Jeff, of course, must realize this, so I'm just taking issue with his phrasing more than anything.

He does add the disclaimer, which is totally valid: "All I wanted to do here is encourage people to make an informed decision in selecting a CPU" and that can't be a bad thing.

MSBuild

Now, enough picking on Jeff, let's talk about my reality as a .NET Developer and a concrete reason I care about multi-core CPUs. Jeff compiled SharpDevelop using 2 cores and said "I see nothing here that indicates any kind of possible managed code compilation time performance improvement from moving to more than 2 cores."

When I compiled SharpDevelop via "MSBuild SharpDevelop.sln" (which uses one core) it took 11 seconds:

TotalMilliseconds : 11207.7979

Adding the /m:2 parameter to MSBuild yielded a 35% speedup:

TotalMilliseconds : 7190.3041

And adding /m:4 yielded a 59% speedup over a single core:

TotalMilliseconds : 4581.4157
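(The TotalMilliseconds output above suggests these timings came from PowerShell's Measure-Command wrapped around MSBuild. If you'd rather script the timing in C#, here's a rough sketch using Stopwatch and Process; the command line and arguments are my assumptions, and it presumes msbuild.exe is on the PATH.)

using System;
using System.Diagnostics;

class TimeBuild
{
    static void Main()
    {
        // Hypothetical timing harness, roughly equivalent to
        // Measure-Command { msbuild SharpDevelop.sln /m:4 } in PowerShell.
        Stopwatch sw = Stopwatch.StartNew();
        using (Process build = Process.Start("msbuild", "SharpDevelop.sln /m:4"))
        {
            build.WaitForExit();
        }
        sw.Stop();
        Console.WriteLine("TotalMilliseconds : {0}", sw.Elapsed.TotalMilliseconds);
    }
}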

Certainly when doing a command line build, why WOULDN'T I want to use all my CPUs? I can detect how many there are using an Environment Variable that is set automatically:

C:\>echo %NUMBER_OF_PROCESSORS%
4
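From code, you can get at the same number without shelling out. A minimal sketch (my illustration, not from the post):

using System;

class HowManyCores
{
    static void Main()
    {
        // The environment variable that MSBuild and batch files see...
        string fromEnv = Environment.GetEnvironmentVariable("NUMBER_OF_PROCESSORS");

        // ...and the same information straight from the runtime.
        int fromRuntime = Environment.ProcessorCount;

        Console.WriteLine("NUMBER_OF_PROCESSORS={0}, Environment.ProcessorCount={1}",
            fromEnv, fromRuntime);
    }
}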

But if I just pass /m to MSBuild, like this:

MSBuild /m

it will automatically use all the cores on the system, creating that many MSBuild processes in a pool, as seen in this Task Manager screenshot:

Four MSBuild Processes

The MSBuild team calls these "nodes" because they cooperate and act as a pool, building projects as fast as they can to the point of being I/O bound. You'll notice that their PIDs (Process IDs) won't change while they live in memory. This means they are recycled, saving startup time over running MSBuild over and over (which you wouldn't want to do, but I've seen it in the wild).

You might wonder: why not just use one multithreaded MSBuild process? Because each building project wants its own current directory (and custom tasks may expect this), and each PROCESS can have only one current directory, no matter how many threads exist.
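If you want to see that for yourself, here's a tiny sketch (my illustration, not MSBuild code) showing that the current directory is shared by every thread in a process:

using System;
using System.Threading;

class CurrentDirectoryIsProcessWide
{
    static void Main()
    {
        Thread other = new Thread(delegate()
        {
            // Changing the current directory on one thread...
            Environment.CurrentDirectory = @"C:\Windows";
        });
        other.Start();
        other.Join();

        // ...changes it for the whole process. This prints C:\Windows,
        // which is why parallel MSBuild uses processes, not threads.
        Console.WriteLine(Environment.CurrentDirectory);
    }
}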

When you run MSBuild on an SLN (solution file), which is NOT an MSBuild file, MSBuild will create a "sln.cache" file that IS an MSBuild file.

Some folks like to custom craft their MSBuild files and others like to use the auto-generated one. Regardless, when you're calling an MSBuild task, one of the options that gets set is (from an auto-generated file):

<MSBuild Condition="@(BuildLevel1) != ''" Projects="@(BuildLevel1)" Properties="Configuration=%(Configuration); Platform=%(Platform); ...snip..." BuildInParallel="true" UnloadProjectsOnCompletion="$(UnloadProjectsOnCompletion)" UseResultsCache="$(UseResultsCache)"> ...

When you indicate BuildInParallel you're asking for parallelism in building your projects. It doesn't cause task-level parallelism, as that would require a task dependency tree, and you could get some tricky problems as copies, etc., happened simultaneously.

However, projects DO often have dependencies on each other, and the SLN file captures that. If you're using a Visual Studio solution and you've used Project References, you've already given the system enough information to know which projects to build first and which to wait on.

More Granularity (if needed)

If you are custom-crafting your MSBuild files, you could turn off parallelism on just certain MSBuild tasks by adding:

BuildInParallel="$(BuildInParallel)"

to specific MSBuild Tasks and then just those sub-projects wouldn't build in parallel if you passed in a property from the command line:

MSBuild /m:4 /p:BuildInParallel=false

But this is an edge case as far as I'm concerned.

How does BuildInParallel relate to the MSBuild /m Switch?

Certainly, if you've got a lot of projects that are mostly independent of each other, you'll get more speed up than if your solution's dependency graph is just a queue of one project depending on another all the way down the line.

In conclusion, BuildInParallel allows the MSBuild task to process the list of projects which were passed to it in a parallel fashion, while /m tells MSBuild how many processes it is allowed to start.

If you have multiple cores, you should be using this feature on big builds from the command line and on your build servers.

Thanks to Chris Mann and Dan Mosley for their help and corrections.



Books: We need more So What, Now What and What For? and less just What

April 24, '08 Comments [23] Posted in ASP.NET | Musings

Have you ever looked at a word you use and see every day, like the word "What," and suddenly start doubting yourself and wonder if it's spelled correctly? There you go. What. Is THAT how it's spelled? That doesn't look right. Why would such a simple thing suddenly be called into question?

Anyway, there's a glut of technical books out there, and I'm starting to realize that I take their style and their existence for granted. They usually describe what some product is and what it does.

I really enjoy books, blogs, and writing that spend less time on the What and more time on the So What?, Now What?, and What For? I'd like to see more books that put technologies into a larger context and fewer that dwell on just the What.

It'd be interesting to take any book title and add one of these phrases to the end, like "Professional ASP.NET 2.0, So What?" or "Ruby on Rails, Now What?" or "SQL Server, What For?"

What For?

The Ruby Way does a good job explaining What Ruby is Good For. It answers the Why, to me at least, very well. It's the best book I've found that doesn't just teach Ruby syntax, but also Ruby idioms; it's mastering idiomatic structures that brings fluency in any language, human or computer.

Programming Pearls - I used this book while teaching my C# class at OIT even though it had nothing to do with C#, because it's just that good. It's a series of collected articles from Communications of the ACM. It helped me understand what a number of things were for; I better understand computer problem solving and what math is for (its relationship to programming isn't always clear to everyone).

How to be a Programmer (free) by Robert Read - This is a fun, short read that is general enough that it makes sense in most languages. If you want a CS101 (or 201) practical refresher, this is a great book that answers mostly "how to" questions, but ultimately explains what a number of techniques are for.

So What?

Dissecting a C# Application: Inside SharpDevelop is a fantastic book and one of my favorites because it's a story that's full of So What? answers. They talk about what they tried, what failed, what didn't, and how they moved forward. The reviews on Amazon aren't all favorable; one reviewer says this is "an extremely difficult book from which to learn," but if you learn, as I do, from reading source with explanations of design decisions, then this (slightly old, but still useful) book is worth your time.

Programming WPF by Chris Sells and Ian Griffiths (his blog is a technical joy) is great because the authors are always trying to answer these "so what" questions. Chris is like this in real life. "So What?" is, I think, his number one question as he takes little for granted. If you read Ian's blog, you know his knowledge is deep to the point of being obscene (that's a good thing) and when you put these two guys together you get a great book that answers, for me, WPF, So What?

Now What?

OK, after I've learned a technology, Now What? What's the next step in my learning and what should I do?

The Pragmatic Programmer by Andrew Hunt and Dave Thomas is a book that answers a lot of Now What questions for me. It shows a complete, pragmatic attitude towards the software development lifecycle. I wouldn't say it's complete in an Enterprise sense, but good for medium-size projects, and most of the general kinds of things you're going to bump into.

In May, my friend and long-time compatriot Patrick Cauldwell will release Code Leader: Using People, Tools, and Processes to Build Successful Software which I read, enjoyed and happily wrote a foreword for. This is a book based on Patrick's "This I believe: The Developer Edition" post of last year. He takes many different tools and processes and answers the Now What? question.

Making Things Happen by Scott Berkun is an update to The Art of Project Management and I now have both editions. Scott answers lots of the Now What questions in a comfortable, conversational tone with a deep focus on getting things done in software development. He helps with planning and execution.

Your Turn

This is a woefully short list. Perhaps I'm missing something, but other than actually doing it (and failing and trying again) there aren't a lot of books that describe how to build large, complete applications that support the entire software development lifecycle.

What books have you read that answer these So What, Now What, and What For questions?



ALT.NET Geek Code: Should you care about these ALT.NET guys?

April 23, '08 Comments [19] Posted in Programming

I was at the ALT.NET Open Spaces Conference in Seattle last week. I LOVE Open Spaces conferences; my first was Foo Camp, and I've been to two ALT.NET conferences. When Open Spaces is done right, it's a brilliant, self-organizing thing of beauty. If you want to hire an Open Spaces facilitator for your own conference, you might consider Steven "Doc" List. He did a fantastic job keeping us on track and keeping the energy level both high and positive.

These are the four Open Spaces principles:

  • Whoever comes are the right people
  • Whatever happens is the only thing that could have
  • Whenever it starts is the right time
  • When it's over, it's over

If it sounds a little metaphysical, it is, but it works. Anyone can convene a space/talk and anyone can participate. It's less about personality and power and more about people and discussion, and it matches the ALT.NET way of thinking perfectly.

Dave Laribee coined the term ALT.NET last year. I did a podcast on ALT.NET with Dave last month. A few weeks before this year's conference even began, Jeremy Miller blogged about the need for ALT.NET. In a nutshell he said:

Specifically, I’d like to see ALT.NET start to fill what I see as a void for information and leadership on:

  • OOP fundamentals. I think we as a community would achieve far more benefits on our projects from a stronger base of OOP design knowledge than we do for learning new API’s like WCF or LINQ. Learning the fundamentals will make us far better able to choose and use these new API’s.
  • Development processes and practices. Microsoft builds and sells tools, and they naturally see tools as the solution to most problems. We the community should be completely in charge of determining our best practices because we are the ones out there building software. Go to a .Net centric development conference. Outside of some token Agile track, you will not see very much about configuration management, project management, requirements, testing, or the all important people issues. A lot of Linq to XYZ talks, but not much about running successful software projects.
  • Alternative tools. Some of the tools Microsoft makes are great, but some are inferior to non-Microsoft options. Some of these alternative tools open up entirely new possibilities in how we can work.  I’d like to see the entire .Net community to simply broaden its horizons to be more accepting of tools originating from outside of Redmond.

I think he's right on. Not everything MSFT does is the "last word," Microsoft can do better on prescriptive guidance and process, and many .NET programmers could stand to refresh their knowledge of Computer Science 101 fundamentals, myself included.

Also, ALT.NET isn't about Microsoft vs. The World, as Ayende pointed out last year:

Saying things like "An ALT.NET developer would be using Castle Windsor before Enterprise Library's ObjectBuilder," or "An ALT.NET developer was using NHibernate before the Entity Framework," is giving the wrong impression. It gives the impression that you are either with us (good) or against us (bad), and that you must follow Our (notice the royalty speak) way and no other.

The other objection is that it focuses on tools and not on a mindset. The way I see it, this is much more about keeping your head open to new approaches and ideas, regardless of where they come from. In short, I really like the ideas and concepts that Dave presents; I don't want the idea to turn into "a .NET developer that seeks to use non-Microsoft technologies." I would much rather it be "a developer that seeks to find the best tools and practices, and judges them on merit."

I think some folks at Microsoft perceive the ALT.NET crowd as being loud, small, divisive, or all of the above. Deep passion and belief can sometimes be perceived as loud or argumentative. I think a better way to put it would be "pragmatic." More and more MSFTies, ScottGu included, get this. I enjoy being a part of this group.

ALT.NET is about picking processes and tools that work for you and make you happy, picking the best one or a number of them, and using them all together, so I took a moment this afternoon and whipped up an idea that a bunch of us at the conference had late one night: The ALT.NET Geek Code.

Here's mine (note the DIV tag and CLASS in the source. Any designers want to whip up a nice CSS box, with a small ALT.NET Logo? Thanks to Simone for the nice CSS and layout and Adam Kinney for the Silverlight Badge!)

[Embedded Silverlight badge showing my ALT.NET Geek Code]

What's it mean? Well, it's a code that describes the processes and tools that I like to use to develop (as of this moment...these things tend to evolve.) You can get your own ALT.NET Geek Code and post it on your blog if you like. Perhaps we'll be able to use Google to scrape the Interwebs and do a distributed ALT.NET Tools Survey someday. ;)

Should you care about ALT.NET? Even if you aren't into the cult of personality, you can definitely learn something from getting involved in the conversation.

UPDATE: I also want to point out that I'm using the term "Guys" in this post's title in the gender-neutral sense. I am acutely aware of the need for more women in this field, so much so that I convened a talk at this year's ALT.NET Open Spaces Conference called "What's with all these White Guys," which was attended by both of the women who came to the conference this year.



The Weekly Source Code 24 - Extensibility Edition - PlugIns, Providers, Attributes, AddIns and Modules in .NET

April 18, '08 Comments [17] Posted in ASP.NET | ASP.NET MVC | Learning .NET | Programming | Source Code | VB

I've been getting more and more interested in how folks extend their applications using plugins and such. In my ongoing quest to read source code to be a better developer, Dear Reader, I present to you the twenty-fourth in an infinite number of posts of "The Weekly Source Code."

There are a lot of ways to "extend" an application or framework - probably dozens of patterns. You can learn about the Gang of Four patterns in C# over here.

I was looking at three chunks of code this week that extend things or are extensible. The first was xUnit.NET, the new unit testing framework on the block (until I finally go File | New Project and bang out "HanselTest2000" ;) ), and the second was some source that Miguel Castro gave out at the March 2008 CINNUG meeting, called Sexy Extensibility Patterns. Miguel has a code generation/data mapping tool called CodeBreeze that uses these patterns; it was on DNRTV a while back. The third was the plugin design of Windows Live Writer and the WLW SDK.

This post isn't trying to be an exhaustive list of anything; it's just some cool code that's got me thinking about interesting ways to extend stuff. I like these three examples because each has more than one way to extend it.

Extending software means adding functionality that wasn't there to start with, and there are MANY ways to do it. For example, adding a script engine like VBA or PowerShell and hosting script would be one way; making a public scripting-friendly API is a twist on that theme. Hosting AddIns or plugins by deriving from base classes, implementing interfaces, or sourcing events (using System.AddIn, for example) is another. Using a Dependency Injection container is a more advanced and powerful way to extend applications.

xUnit.NET

xUnit.NET is a .NET unit testing framework from Brad Wilson and Jim Newkirk (formerly of NUnit fame). Initial reaction to their framework was a resounding "meh" as folks asked, "Seriously, do we NEED another unit testing framework?" but they soldiered on, and just like the little Mock Framework that could, they're starting to get the respect they deserve. It's got an MSBuild task, ReSharper and TestDriven.NET test runner support, and most importantly, the framework has some interesting extensibility points.

The xUnit.NET source code is exceedingly tidy. I mean that as a total compliment, like when you visit someone's house and find yourself asking "who is your decorator?" while simultaneously realizing that they are just THAT tidy and organized.

ASIDE: Not enough people use "Solution Folders" in Visual Studio. Seriously, folks, just right-click and "Add | New Solution Folder," start dragging things around and bask in the tidiness.

They've separated any controversial (my word) static extension methods into separate projects, xunitext and xunitext35, the latter of which includes .NET 3.5-specific features. So, certainly, extension methods are a coarse extensibility point for developers "downstream" from your framework.

They use attributes as a MAJOR way to extend their framework. For example, say you want something to happen before and/or after a test.

using System;
using System.Diagnostics;
using System.Reflection;
// (plus the xUnit namespace that defines BeforeAfterTestAttribute)

namespace XunitExt
{
    [AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
    public class TraceAttribute : BeforeAfterTestAttribute
    {
        public override void Before(MethodInfo methodUnderTest)
        {
            Trace.WriteLine(String.Format("Before : {0}.{1}", methodUnderTest.DeclaringType.FullName, methodUnderTest.Name));
        }

        public override void After(MethodInfo methodUnderTest)
        {
            Trace.WriteLine(String.Format("After : {0}.{1}", methodUnderTest.DeclaringType.FullName, methodUnderTest.Name));
        }
    }
}

You just derive a new attribute from BeforeAfterTestAttribute, and put it on a test. The [Fact] attribute is a standard one. They use it instead of [Test]:

[Fact]
[Trace]
public void CanSearchForSubstrings()
{
    Assert.Contains("wor", "Hello, world!");
}

Now, my code in Before() and After() above will execute for this test.

If you've got your own test framework but you want to move toward using xUnit.NET, they've got a [RunWith] attribute, where you can make your own test runner and say, for example, "run this test with NUnit." You can use it two ways: either as [RunWith(typeof(MyRunner))] or with a custom attribute like:

public class RunWithNUnitAttribute : RunWithAttribute
{
    public RunWithNUnitAttribute()
        : base(typeof(NUnitCommand)) {}
}

...where NUnitCommand implements a very clean interface called ITestClassCommand, which "describes the ability to execute all the tests in a test class."

public interface ITestClassCommand
{
    object ObjectUnderTest { get; }
    ITypeInfo TypeUnderTest { get; set; }
    int ChooseNextTest(ICollection<IMethodInfo> testsLeftToRun);
    Exception ClassFinish();
    Exception ClassStart();
    IEnumerable<ITestCommand> EnumerateTestCommands(IMethodInfo testMethod);
    IEnumerable<IMethodInfo> EnumerateTestMethods();
    bool IsTestMethod(IMethodInfo testMethod);
}

I think this is a very elegant interface. The framework "eats its own dogfood" also, which means that the xUnit.NET guys have factored things appropriately (as they should have) such that they are using their own extensibility points. Using your own framework to build your stuff is the only way you'll know if you've got a great framework and it's the best way to find out what's NOT working.

The NUnitCommand implementation then uses the NUnit SDK/API to run tests. This allows you to move over to xUnit.NET from NUnit or another test framework a little at a time, while running all the tests under the same runner.

Another way to extend your code is through the use of existing well-known interfaces like IComparer<T>. They're a test framework, so they're full of asserts like Assert.Equal and Assert.Contains. Assert.InRange has an overload, seen below, that can take an IComparer as an optional parameter.

public void InRange<T>(T actual,
                       T low,
                       T high)
{
    Assert.InRange(actual, low, high);
}

public void InRange<T>(T actual,
                       T low,
                       T high,
                       IComparer<T> comparer)
{
    Assert.InRange(actual, low, high, comparer);
}
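For example, here's a hedged sketch of plugging in your own comparer. The LengthComparer class and test are hypothetical, but Assert.InRange with a comparer is the overload shown above:

using System.Collections.Generic;
using Xunit;

// Hypothetical comparer: orders strings by length instead of alphabetically.
public class LengthComparer : IComparer<string>
{
    public int Compare(string x, string y)
    {
        return x.Length.CompareTo(y.Length);
    }
}

public class InRangeExample
{
    [Fact]
    public void WordLengthIsBetweenThreeAndTenCharacters()
    {
        // "In range" now means "in length range" because of the custom comparer.
        Assert.InRange("world", "abc", "abcdefghij", new LengthComparer());
    }
}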

This might seem obvious to some, but it's thoughtfully obvious and a clean extensibility point. And thoughtfully obvious is easy to say and hard to do. I'm thinking that Moq and xUnit.NET just may be the new peas and carrots - they go together nicely and have similar goals.

Sexy Extensibility Patterns

You know someone likes their job and coding when they describe a pattern as "sexy" and not just "handsome" or "sassy." Miguel includes his slide deck and code in both C# and VB up on the CINNUG website.

Miguel calls out three main extensibility types that he uses (from his presentation):

  • Providers - Allow abstraction for data and behavior
  • Plugins - Adding new behavior
  • Modules - Centralize plug-in functionality and enforce manageable standards

Providers are based on the Strategy design pattern and look like this. You'll see stuff like this all throughout ASP.NET 2.0. A controller or context class needs a way (hence "strategy") to do something. Classically, an instance of a class that provides the needed strategy is passed in, but nowadays it could come from an IoC framework, or just a config file.

They're often very simple and they're easy to write. Make an interface that provides something, usually data:

namespace Core
{
    public interface IDataProvider
    {
        string GetSource();
        string GetData(string source);
        void LogData(string data);
    }
}

Then use it. This is a simple example, but usually this would be buried in a ProviderFactory that would hide the config and activation, and even hold the new instance:

public void ProcessData()
{
    string s_Provider = ConfigurationManager.AppSettings["dataProvider"];

    Object o = Activator.CreateInstance(Type.GetType(s_Provider));

    IDataProvider o_Provider = o as IDataProvider;
    string s_Source = o_Provider.GetSource();

    string s_Data = o_Provider.GetData(s_Source);

    if (s_Data != "")
    {
        o_Provider.LogData(s_Data);
    }
}

Aside: Personally I find the quasi-Hungarian naming in Miguel's source off-putting, but I know him well enough to say that. ;)
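That ProviderFactory I mentioned above might look something like this minimal sketch; the class name and the cached-instance approach are my assumptions, not Miguel's code:

using System;
using System.Configuration;

public static class DataProviderFactory
{
    private static IDataProvider s_instance;

    public static IDataProvider GetProvider()
    {
        if (s_instance == null)
        {
            // Hide the config lookup and Activator call behind the factory,
            // and hold on to the new instance so callers don't re-create it.
            string typeName = ConfigurationManager.AppSettings["dataProvider"];
            s_instance = (IDataProvider)Activator.CreateInstance(Type.GetType(typeName));
        }
        return s_instance;
    }
}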

So, Providers provide stuff, and Plug-Ins add functionality, like:

public class ArchiveProcessing : IPostLogPlugin
{
    void IPostLogPlugin.PerformProcess(string data)
    {
        // take data and archive it
    }
}

You might have a bunch of plugins added to your config file and then call them synchronously, in this example, perhaps as a postProcessing step to a provider:

object section = ConfigurationManager.GetSection("postProcessing");
List<PluginInfo> o_Plugins = section as List<PluginInfo>;
foreach (PluginInfo o_PluginInfo in o_Plugins)
{
    object o = Activator.CreateInstance(Type.GetType(o_PluginInfo.PluginType));
    IPostLogPlugin o_Plugin = o as IPostLogPlugin;
    o_Plugin.PerformProcess(s_Data);
}
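The PluginInfo class and the "postProcessing" section handler aren't shown in the excerpt; here's a minimal sketch of what they might look like (both are my assumptions about the sample, not Miguel's actual code):

using System.Collections.Generic;
using System.Configuration;
using System.Xml;

// The little bit of state the config section carries for each plugin.
public class PluginInfo
{
    public string PluginType { get; set; }
}

// A section handler that turns <postProcessing><plugin type="..."/></postProcessing>
// entries into the List<PluginInfo> that the loop above casts to.
public class PostProcessingSectionHandler : IConfigurationSectionHandler
{
    public object Create(object parent, object configContext, XmlNode section)
    {
        List<PluginInfo> plugins = new List<PluginInfo>();
        foreach (XmlNode node in section.SelectNodes("plugin"))
        {
            PluginInfo info = new PluginInfo();
            info.PluginType = node.Attributes["type"].Value;
            plugins.Add(info);
        }
        return plugins;
    }
}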

Modules, in Miguel's parlance, are like filters that get involved in a process at various points (like HttpModules) and change functionality. First you define some events:

public delegate void AcmeModuleDelegate<T>(T e);

public class ModuleEvents
{
    public AcmeModuleDelegate<CheckDataSourceEventArgs> CheckDataSource { get; set; }
    public AcmeModuleDelegate<PreProcessDataEventArgs> PreProcessData { get; set; }
    public AcmeModuleDelegate<PostProcessDataEventArgs> PostProcessData { get; set; }
}

You might add a bunch of modules to your application and have them listen in at these three or more "events." Then you need to ask yourself: do these things fire in order, and more importantly, can a module cancel the process? To make that work, each module would have to consciously respect the cancel boolean, and that's not really enforceable.

A module is passed a ModuleEvents in this example so it can hook up to the shared delegate. This is a multicast delegate: if I call it, EVERYONE gets called:

public interface IAcmeModule
{
    void Initialize(ModuleEvents events);
}

public class ArchiveModule : IAcmeModule
{
    void IAcmeModule.Initialize(ModuleEvents events)
    {
        events.PostProcessData += events_PostProcessData;
    }

    void events_PostProcessData(PostProcessDataEventArgs e)
    {
        // perform archive functionality with processed data
    }
}

You'd invoke the multicast delegate like this:

CheckDataSourceEventArgs o_EventArgs = new CheckDataSourceEventArgs(s_Source);
o_FilterEvents.CheckDataSource.Invoke(o_EventArgs);

Or, you could spin through the delegates and invoke them yourself, checking the cancel flag and stopping the whole thing yourself, as sketched below. Miguel's got a nice sample app and PPT with lots more code that illustrates this point well.
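Here's a hedged sketch of that manual approach; it assumes CheckDataSourceEventArgs exposes a Cancel property, which is my invention for illustration:

CheckDataSourceEventArgs o_EventArgs = new CheckDataSourceEventArgs(s_Source);
foreach (Delegate d in o_FilterEvents.CheckDataSource.GetInvocationList())
{
    // Invoke one subscriber at a time instead of the whole chain at once.
    ((AcmeModuleDelegate<CheckDataSourceEventArgs>)d).Invoke(o_EventArgs);

    if (o_EventArgs.Cancel) // hypothetical Cancel flag
    {
        break; // a module asked to stop the pipeline
    }
}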

Then you can take all these concepts and put them together into a single extensible app with providers, plugins, and modules that all work together. I like to keep all my interfaces separated in another assembly and version them slowly, only when the contract changes.

Windows Live Writer

Windows Live Writer is what I use to post to my blog. I'm typing in it right now. If you're using an admin web page to post to your blog, stop. Go download it now, I'll wait here. Ok, it's got three kinds of extensibility (from MSDN):

  • Application API, for launching Writer to create new posts or "Blog This" items for links, snippets, images, and feed items.
  • Content Source Plugin API, for extending the capabilities of Writer to insert, edit, and publish new types of content.
  • Provider Customization API, for customizing both the capabilities of Writer as well as adding new capabilities to the Writer user interface.

Let's handle the last bullet first: the Provider Customization API. Extensibility can also mean extending an application without using code at all. Check out how to extend the Windows Live Writer UI using only an XML file for DasBlog. No code was required. Sometimes a thoughtful XML file provides all the extensibility points you need.

The Content Source Plugin API is a slick thing, letting you add your own "Insert" commands directly to WLW. There are 85 plugins in the gallery at the time of this writing.

This time last year I took Travis's CueCat and made a CueCat Windows Live Writer plugin so I could more quickly post my Monthly Reading List (which, tragically, ended up being yearly out of laziness. Note to self: Self, post a monthly reading list.)

Anyway, I wrote an article for Coding4Fun and blogged about it as well. The plugin looked like this: [screenshot of the CueCat plugin dialog]

Windows Live Writer has a cool model for plugins. There's the potential for adding lots of functionality: you're adding a link and picture to the main UI, an undefined number of forms, plus you'll need storage, and you'll want to insert HTML into the main editor window. This could have been a complex plugin model, but Joe Cheng and the others on the team made it, IMHO, very easy.

How does the plugin model for WLW enable all this? The code for a plugin speaks volumes:

using System;
using System.Collections.Generic;
using System.Windows.Forms;
using WindowsLive.Writer.Api;

namespace AmazonLiveWriterPlugin
{
    [WriterPlugin("605EEA63-B74B-4e9d-A290-F5E9E8229FC1", "Amazon Links with CueCat",
        ImagePath = "Images.CueCat.png",
        PublisherUrl = "http://www.hanselman.com",
        Description = "Amazon Links with a CueCat.")]
    [InsertableContentSource("Amazon Links")]
    public class Plugin : ContentSource
    {
        public override DialogResult CreateContent(IWin32Window dialogOwner, ref string newContent)
        {
            using (InsertForm form = new InsertForm())
            {
                form.AmazonAssociatesID = this.Options[AMAZONASSOCIATESID];
                form.AmazonWebServicesID = this.Options[AMAZONWEBSERVICESID];
                DialogResult result = form.ShowDialog();
                if (result == DialogResult.OK)
                {
                    this.Options[AMAZONASSOCIATESID] = form.AmazonAssociatesID;
                    this.Options[AMAZONWEBSERVICESID] = form.AmazonWebServicesID;
                    Product p = Decoder.Decode(form.CueCatData);
                    AmazonBook book = AmazonBookPopulator.CreateAmazonProduct(p, form.AmazonWebServicesID);
                    string associatesId = form.AmazonAssociatesID.Trim();
                    string builtAmazonUrl = "...removed for cleanliness...";
                    newContent = string.Format(builtAmazonUrl, book.ID, associatesId, book.Title, book.Author);
                }
                return result;
            }
        }
    }
}

Their plugin model uses a combination of things. There are attributes on the class that give details on where images are (embedded as resources), text, URLs, and a GUID for uniqueness. There's a base class that makes a "Hello World" plugin a one-line affair. And there's a clean way to show any WinForm and tell WLW what the result was. The HTML is returned by reference in newContent, while the return value is a standard DialogResult. If you need storage for state, they pass a simple dictionary-like interface to your plugin when you're initialized. As a plugin, you don't need to sweat storage or how you appear in the list of plugins; you just focus on your main dialog.
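For contrast with the CueCat plugin above, the minimal "Hello World" case looks roughly like this. The GUID, names, and HTML are placeholders of mine, but the attributes and base class mirror the real plugin above:

using System.Windows.Forms;
using WindowsLive.Writer.Api;

namespace HelloWriterPlugin
{
    [WriterPlugin("11111111-2222-3333-4444-555555555555", "Hello World")]
    [InsertableContentSource("Hello World")]
    public class Plugin : ContentSource
    {
        public override DialogResult CreateContent(IWin32Window dialogOwner, ref string newContent)
        {
            // No dialog needed; just hand Writer the HTML to insert.
            newContent = "<b>Hello from a Writer plugin!</b>";
            return DialogResult.OK;
        }
    }
}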

Do you have any cool examples of elegant, cool, convenient, clever extensibility mechanisms?


Hanselminutes Podcast 108 - Exploring Distributed Source Control with Git

April 16, '08 Comments [18] Posted in Podcast

My one-hundred-and-eighth podcast is up. In this episode I sit down with Robby, Gary, and Andy from Planet Argon, a local Rails shop in Portland, OR, and talk about their experience moving from Subversion to Git for their source control.


Subscribe: Subscribe to Hanselminutes | Subscribe to my Podcast in iTunes

If you have trouble downloading, or your download is slow, do try the torrent with µTorrent or another BitTorrent downloader.

Do also remember the complete archives are always up and they have PDF transcripts, a little-known feature that shows up a few weeks after each show.

Telerik is our sponsor for this show.

Check out their UI Suite of controls for ASP.NET. It's very hardcore stuff. One of the things I appreciate about Telerik is their commitment to completeness. For example, they have a page about their right-to-left support, while some vendors have zero support or don't bother testing. They're also committed to XHTML compliance and publish their roadmap. It's nice when your controls vendor is that transparent.

As I've said before, this show comes to you with the audio expertise and stewardship of Carl Franklin. The name comes from Travis Illig, but the goal of the show is simple: avoid wasting the listener's time (and make the commute less boring).

Enjoy. Who knows what'll happen in the next show?



Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.