Scott Hanselman

Managed Snobism

March 13, 2007 Comment on this post [29] Posted in Learning .NET | Musings | Programming

There was a great comment on a recent post here last week where I was trying to get a Managed Plugin working with an application that insisted on its plugins being C++ DLLs with specific virtual methods implemented.

Here's the comment with my emphasis

Great article. It would be nice to see a little less "managed snobism". Personally, I don't need to use up 100MB of my memory with a framework just to let me generate 32x32 bitmaps. So I'm grateful that the managed route is not the default. Remember, 90% of functions in .NET are just wrappers around the underlying API functions, so in effect, all they do is slow you down, while giving you convenience.

This reminded me of an article I did a few years ago when folks were still asking silly questions like "Is your application Pure .NET?" The article was called The Myth of .NET Purity and was published up on MSDN under an article series called ".NET in the Real World." To this day I'm still surprised that they let me publish it.

The (interestingly anonymous) commenter says: "in effect, all they do is slow you down, while giving you convenience." Well, sure. Everyone knows this quote:

Any problem in Computer Science can be solved with another layer of indirection.

It's a great quote. As an aside, the quote is attributed to nearly every smart Computer Scientist, including David Wheeler, Butler Lampson, and Steven M. Bellovin. Lampson says it was Wheeler, but it was one of these three guys.

But a game developer at Sun adds a clever touché to the old adage:

The two software problems that can never be solved by adding another layer of indirection are that of providing adequate performance or minimal resource usage. - Jeff Kesselman

And he's right. Of course, .NET is a (most excellent layer of) Managed Spackle over the Win32 API. But it's really GOOD spackle. It's so good that we get collectively frustrated when a new API (SideShow, AzMan) doesn't have a good initial managed API (SideShow does now). A nice, clean managed API adds a fantastic amount of convenience in exchange for a very reasonable performance hit.

The performance hit - which I haven't personally measured - is no doubt less than even the most trivial of network calls. How much overhead is added? Not much.

Approximate overhead for a platform invoke call: 10 machine instructions (on an x86 processor)

Approximate overhead for a COM interop call: 50 machine instructions (on an x86 processor)

Gosh, that isn't much. Sure, there are always scenarios we conceive of that could add up, but that's what profiling on a case-by-case basis is for.
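For the curious, that ~10-instruction figure covers just the transition stub the CLR generates for a platform invoke call. A minimal sketch of what such a call looks like in C#, using the real Win32 GetTickCount from kernel32 (Windows-only, naturally):

```csharp
using System;
using System.Runtime.InteropServices;

class PInvokeSketch
{
    // Declares a managed entry point for the native Win32 GetTickCount.
    // The CLR emits a small marshalling stub for this call -- that stub
    // is where the quoted ~10 instructions of overhead live. With no
    // arguments and a blittable return type, there is no marshalling
    // work beyond the stub itself.
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount();

    static void Main()
    {
        // Milliseconds since the system started.
        Console.WriteLine(GetTickCount());
    }
}
```

Arguments that need real marshalling (strings, structs) add work on top of that fixed stub cost, which is why profiling your own case still matters.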

If .NET Purity is a myth, and the whole thing is just there to make our lives easier, then this is an easy trade off. I just remember that I can code in C#/VB.NET, for a small cost. I get speed of dev, and I give up speed of execution. I can code in C++, and give up speed of dev (a smidge) and gain (possibly) speed of execution. I can code in ASM and give up lots of productivity in exchange for my immortal soul and a really fast program. Or I can go to heaven, pursue beauty if I like, and give up so much performance to cause a scandal.

But it's not a simple trade off. Certainly not at the method or even component level. William Caputo makes a similar point with emphasis mine:

...this calculation is done unconsciously by those programmers who hear "we're trading efficiency for productivity." It's why they are reluctant to take a serious look at higher-level languages. A one-time productivity hit to get faster run-time performance certainly seems like a good trade-off, but the flaw in the argument is that productivity measurement is not reset with each task. It's cumulative. Unlike a programming assignment ("Implement Quick Sort Please"), productivity is measured across an entire solution (whether a build script or a trading system) -- and not just the first writing of the code, but throughout its useful lifetime (the vast majority of coding time is spent changing, or maintaining existing code).

In the real world, it's not "write once, run forever", it's "write a bit, run a bit, change a bit, run a bit", and so on. I am not saying that run-time efficiency isn't important. It is. The best way to compare run-time efficiency and programmer productivity is not at the micro level, but at the macro level.

Yes, .NET adds overhead. Certainly not enough to worry about for business apps, given the productivity gains. We're not writing device drivers here. In my original example where I want to write managed plugins for the Optimus Keyboard, since the max frames per second on the keyboard is 3fps, performance isn't a concern (nor would it be even if I needed 30fps).

If being a Managed Code Snob is wrong, I don't wanna be right.

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

March 13, 2007 15:44
Cut right to the chase!

This meets my argumentation if I try to convince a potential customer of the choice for .NET. "No, it's not really slower for your intended purposes - and we will gain a huge speed-up on development!"
March 13, 2007 16:56
Trading efficiency for expressiveness is a good bet.

Reg Braithwaite has a post about a related idea:

Factoring (or decomposing) your program lets you separate its concerns, making your program better (maintainable, understandable, flexible). Languages with more ways to factor your program, to separate its concerns, are more powerful (more expressive) than those with fewer.
March 13, 2007 17:08
Idealism has no reign in the capitalist society that governs the business world. Inevitably, it's the business world that powers our homes and helps feed our families; in most cases it comes down to the bottom line.

Here's a great illustration. A handyman has various tools: you wouldn't use a screwdriver to drive a nail into the wall, right? Likewise, you wouldn't use a hammer to wipe your mirrors clean, right? Every situation has its perfect toolset.

And we just happen to be business app developers, so bring on the coding efficiencies...bring on .NET.
March 13, 2007 17:14
I think the performance overhead for managed code is overrated, just look at the XNA video at channel9 where they show a 3D car game running on .NET in the highest HD resolution 1080p. Coming from C++ I have had a prejudice against java/.NET performance but having spent the last 2 years working with .NET/C# I am more and more impressed with the performance of .NET.

March 13, 2007 17:21
Sounds kinda like Unmanaged Snobism...
March 13, 2007 17:27
Rico Mariani and Raymond Chen had a (small) C# vs. C++ competition back in '05. Without serious optimization of the C++ code, C# was the winner. You can read about it at .
March 13, 2007 18:32

Reminds me of a conversation I had not a week ago with a friend of mine (he's got about 3 years experience) who I recently turned on to O/R mapping (specifically, WilsonORMapper); he kept bringing up performance as an issue and it was driving me insane because he isn't writing an application where it was going to be an issue (he's working on a WinForms app that has SQL Express as its database, deployed to one machine, with the possibility that other instances of the app might access a DB on another machine on the network). I asked him what made him think performance would be an issue and he didn't seem to have a good reason, I don't know what experiences have led him to believe it was going to be an issue.

Anyway, one other reason to write 'pure .NET' when possible is hope that you might run the same app, hopefully with few changes, against Mono; the more you mix non-.NET code with .NET code, the harder it would be to do so. Mono is continuing to mature and I would hate to lock myself out of the possibility of having my code work with it.
March 13, 2007 18:39
The maintenance issue is one I've tried to reinforce with people I mentored or managed by telling them our job as software developers is to write as little code as possible.
March 13, 2007 21:46
.NET is for chickenshit programmers.
March 13, 2007 21:57
Like naysayers of any topic, anyone who detracts .NET in a violent manner, or a dismissive manner, is clearly unfamiliar with the fundamentals, let alone actual day to day use of it.

I started out programming in assembly 25 years ago. And there were times when I would still drop to it for specific things from C, and later C++, but those few things got increasingly rare. With .NET I wouldn't even dream of dropping to C or C++ to get something done with less resources or more quickly, because there are actually cases where managed code is *faster* than unmanaged C++ code (mainly to do with resource allocation and object retrieval times).

.NET is a tool. And it is a poor craftsman who blames their tools...
March 13, 2007 23:15
I don't know. I still have time to go make a sandwich every time I compile and run my ASP.NET app...even besides all the tricks that Scott Gu et al blog about.
March 13, 2007 23:56
Great post. I'm a recovering C++ programmer, been coding since the 8-bit days, and the last bit of large-scale C++ I was involved in was porting our application to 64-bit Windows. Far from finding .NET (or more generally managed code) to be a dumbing down (as I know some C++ devs view it) - rather I find it gives me additional space to think.

You're absolutely right, the performance of a system should include the time it takes to enhance it between versions - programmer productivity most certainly should be included in the perf. calculation.
March 14, 2007 0:57
Sandwich maker...

Feed the hamster in your PC and you'll compile faster.

March 14, 2007 13:06
What is it that makes huge populations of developers think they're working on a Ferrari when their app is really just a Pinto?

"I'm writing a web app that pulls data from a database and puts it on a web page. I never use 'foreach' because I heard it's slower than explicitly iterating with a for loop."

Wake up!
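For what it's worth, the two loops that commenter's hypothetical developer is agonizing over look like this -- a quick sketch; over an array, the JIT compiles them to essentially the same machine code:

```csharp
using System;

class LoopSketch
{
    static void Main()
    {
        int[] data = { 1, 2, 3, 4 };
        int sum1 = 0, sum2 = 0;

        // The "slow" version the commenter's developer was avoiding:
        foreach (int n in data)
            sum1 += n;

        // The hand-rolled version; for arrays the C# compiler lowers
        // foreach to an indexed loop anyway, so the JIT output is
        // essentially identical.
        for (int i = 0; i < data.Length; i++)
            sum2 += data[i];

        Console.WriteLine(sum1 == sum2); // True
    }
}
```

If a micro-optimization like this ever matters to your app, a profiler will tell you; guessing won't.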
March 14, 2007 18:02
In the real world, where we're trying to solve real business problems for customers, .NET has proven an invaluable layer of indirection/abstraction that has contributed to enhancing the average developer's productivity.

Not having to worry about pulling in arcane headers and libraries to do something as simple as constructing a stack or queue or god forbid do some memory streaming without crashing the app because you made an error clearing out some pointers is a B-L-E-S-S-I-N-G.

All this alpha-male chest thumping about "real" programmers doing it in C or C++ is such hogwash. If you really want to thump your chest, do it in assembly and be just as efficient and productive and performant - dare ya.

Until then, I'm with the .NET Snobs. Scott: Get some t-shirts made up!
March 14, 2007 21:05
If we're looking at programmer productivity in the macro sense, then we have to consider the cost of rewriting code in the event that the language vendor decides to curtail support of the language.

Eventually Microsoft's language and framework designers will hold C# and .NET in the same regard as they now hold Classic VB. I'm all in favor of developing new programming paradigms, but when a vendor suddenly withdraws support for the current paradigm in order to accelerate migration to the new paradigm, that has a significant effect on productivity.

I developed 200K+ lines of VB code and 100K lines of C++ code over an 8 year period. Now I am porting that all of that VB code to C++. I'd much rather be porting it to C#. But I fear that MS will someday do to C# what they have done to VB. I can't afford to rewrite that code again if MS decides to withdraw support for C#.

BTW, I just read that Visual FoxPro is going to be supported until 2015. If they can support FoxPro, which is much older, then they could certainly support Classic VB.

As I said, I'd rather be using C#. Programming and debugging a large system in C++ is painfully slow in comparison. But at least I have a reasonable degree of certainty that the language won't be abandoned any time soon.
March 14, 2007 21:19
When was the last time, with the machines we're using now, that you noticed 5 machine instructions vs. 10? I know that's throwing hardware at the problem, but the advantages .NET gives you over C/C++/ASM are HUGE, and the code is tested and compiled to run almost at the native level (albeit via IL). Agreed, 100MB of memory overhead to load the framework is a lot, but since most programs are going to use it (especially under Vista), I'd say you won't have to worry about that much longer. :-)
March 14, 2007 22:19
Looks like a slide from your Dirty Soap presentation, no? :)
March 15, 2007 6:40
Scott: Is there a way to read your archives in reverse chronological order?
March 15, 2007 23:35
This performance thing is quite an interesting notion, as this isn't a question of whether the application will finish the task, but whether it will finish in a timeframe that is comfortable to our perception of time.

I haven't ever had a performance issue in .NET, and when I have benchmarked anything I have always been surprised at how fast things run (my thinking on this is that the compilation from IL to native is very good). Yet when you look at all of programming and where people are having performance issues today (like on the web), .NET is outside of that; in a lot of ways it actually solves performance issues, and the rest can be left up to Moore's Law to fix.
March 16, 2007 17:09
There are two performance aspects of .NET. The first is the framework/IDE. They both really help developer productivity. (.NET also provides a better solution to language neutrality than COM does.) The thing is, though, that there's no reason they have to be tied to IL. Delphi's got a pretty good framework and IDE, and it's primarily native code.
The second is tied to IL and the JITter. I'm still not convinced of the quality of the code generated by the JITter, and it's hard to get away from. With native code, it's relatively inexpensive to drop down to assembly for a few functions, and I've seen enough people get decent performance boosts by doing so that I'd like to preserve the option. Yes, you can do something similar in .net, but it's harder, and the cost of thunking to native code makes the payoff a lot less. Also, despite the garbage collector, I still find myself managing objects a lot to get decent performance. (Hello IDisposable!) There's also the problem with plugins using different versions of the framework. It's bad enough that MS has said that you shouldn't use .net for Office or shell plug-ins.
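The "managing objects" the commenter mentions usually comes down to the standard IDisposable pattern: even with a garbage collector, releasing expensive resources deterministically, rather than waiting for finalization, is where the performance work goes. A minimal sketch (the NativeBuffer type is hypothetical, invented for illustration):

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper around a chunk of unmanaged memory. The GC
// tracks the wrapper object but not the native allocation behind it,
// which is why deterministic cleanup matters.
class NativeBuffer : IDisposable
{
    IntPtr _handle = Marshal.AllocHGlobal(1024);
    bool _disposed;

    public void Dispose()
    {
        if (_disposed) return;     // safe to call Dispose more than once
        Marshal.FreeHGlobal(_handle);
        _disposed = true;
        GC.SuppressFinalize(this); // no need for the finalizer queue now
    }
}

class Demo
{
    static void Main()
    {
        using (var buf = new NativeBuffer())
        {
            // work with buf; the native memory is freed at the end of
            // this block, not whenever the GC happens to run
        }
    }
}
```

The `using` block gives you C++-style deterministic destruction for the resources that actually matter, while the GC handles everything else.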
March 16, 2007 21:54
Mike - There's actually explicit support for managed plugins in Office, and they are now the preferred method of extending Office (using CLR 2.0 only, of course). They're called VSTO.
March 16, 2007 22:32
William Caputo is right on the money about productivity. Many developers I know forget about the cumulative effect that goes into productivity, since most often code gets changed and fixed all the time to accommodate new features. Thanks Scott for posting this. Now I can preach his insight to my fellow devs.
March 16, 2007 22:46
The 3 fundamental computer instructions: AND, NOT, GOTO
Who could ever need more?
March 18, 2007 16:43

A commenter said "I think the performance overhead for managed code is overrated, just look at the XNA video at channel9 where they show a 3D car game running on .NET in the highest HD resolution 1080p."

Actually, the CPU does nothing here. Most of the time is spent rendering the scene, which is done by the graphics card. You could write this game in Javascript (an interpreted language) and end up with the same speed.

If you really want to compare .NET and no .NET, try doing something real, such as image processing.

March 18, 2007 16:45
Scott said "

Approximate overhead for a platform invoke call: 10 machine instructions (on an x86 processor)

Approximate overhead for a COM interop call: 50 machine instructions (on an x86 processor)"

This is obviously not true. Marshalling structures alone goes way way over thousand CPU instructions. And you have to add to this all the stack walking in the CLR.
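Stef's point is that the fixed stub cost is separate from per-argument marshalling work, which can indeed dwarf it. A hedged sketch of what the marshaller does for a non-blittable struct, using Marshal.StructureToPtr directly (the LogRecord type is invented for illustration):

```csharp
using System;
using System.Runtime.InteropServices;

// A struct that is NOT blittable: the string field forces the
// marshaller to do a real field-by-field copy with an ANSI
// conversion, rather than a cheap memory blit.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
struct LogRecord
{
    public int Id;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
    public string Message;
}

class MarshalSketch
{
    static void Main()
    {
        var rec = new LogRecord { Id = 7, Message = "hello" };
        int size = Marshal.SizeOf(typeof(LogRecord));
        IntPtr buf = Marshal.AllocHGlobal(size);
        try
        {
            // This copy (per-field, including the string conversion)
            // is the marshalling work Stef is pointing at; it is what
            // a P/Invoke with struct arguments does behind the stub.
            Marshal.StructureToPtr(rec, buf, false);
        }
        finally
        {
            Marshal.FreeHGlobal(buf);
        }
    }
}
```

Blittable arguments (ints, floats, structs of them) avoid this copy entirely, which is why the per-call numbers above can hold for simple signatures and blow up for complex ones.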

March 19, 2007 20:38
Stef - Those overhead numbers come direct from an MSDN article...certainly MSDN isn't perfect. Do you have a whitepaper or information on why it's "over a thousand" CPU instructions?
March 20, 2007 20:55
"If you really want to compare .NET and no .NET, try to do something that does something real, such as image processing."

Good point, compare Paint Shop Pro to Paint.NET. I haven't noticed any difference in performance when using them.
April 03, 2007 20:01
Having worked with and taught Java, VB, and C# for several years I can say that business users don't care one bit what's under the hood - they just want it to work the way they envision it should work, and they want it to look good along the way.

Java isn't there yet with GUI stuff - as humans we're just too adept at noticing discrepancies between the real thing and an attempt to copy the real thing. Look at the way we judge each other's personal appearances - a nose (like mine) extended just a few millimeters too far, or a chin arched just a couple millimeters in either direction, and we notice it right away. Same thing with software user interfaces.

Getting back on track with my comment... C# (or VB.NET) is spot on the mark with the ability to easily create attractive UIs in a short period of time. And business users don't care how many processing instructions are going on behind the scenes - they just like that it looks great and didn't take forever to build.

And as a consultant I like that my project manager thinks I'm a genius because I can whip out a nice looking user interface in a very short period of time, using .NET.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.