Scott Hanselman

Back To Basics: You aren't smarter than the compiler. (plus fun with Microbenchmarks)

June 26, 2012 Posted in Back to Basics

Microbenchmarks are evil. Ya, I said it. Folks spend hours in tight loops measuring things, trying to find the "best way" to do something, and forget that while they're chasing a 5ms difference between two techniques they've missed the 300ms database call or the looming N+1 selects issue that has their ORM quietly making even more database calls.

My friend Sam Saffron says we should "take a global approach to optimizations." Sam cautions us to avoid trying to be too clever.

// You think you're slick. You're not.
// Faster than .Count()? Stop being clever.
// (And if stuff isn't actually an ICollection<int>, the cast returns null and this throws.)
var count = (stuff as ICollection<int>).Count;

All that said, let's argue microbenchmarks, shall we? ;)

I did a blog post a few months back called "Back to Basics: Moving beyond for, if and switch" and, as with all blog posts where one makes a few declarative statements (or shows ANY code at all, for that matter), it inspired some spirited comments. The best of them was from Chris Rigter in defense of LINQ.

I started the post by showing this little bit of counting code:

var biggerThan10 = new List<int>();
for (int i = 0; i < array.Length; i++)
{
    if (array[i] > 10)
        biggerThan10.Add(array[i]);
}

and then changed it into LINQ, which can be either of these one-liners:

var a = from x in array where x > 10 select x; 
var b = array.Where(x => x > 10);
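Both of those one-liners are lazy, by the way. Nothing is actually filtered until something enumerates the result, which matters when you start timing things. A two-line illustration of my own:

var b = array.Where(x => x > 10); // no work has happened yet
var list = b.ToList();            // the filtering runs here, on enumeration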

A few questions came up, like this one from Teusje:

"does rewriting code to one line make your code faster or slower or is it not worth talking about these nanoseconds?"

The short answer is, measure it. The longer answer is measure it yourself. You have the power to profile your code. If you don't know what's happening, profile it. There's some interesting discussion on benchmarking small code samples over on this StackOverflow question.
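If you want a quick-and-dirty way to do exactly that, a tiny Stopwatch harness is all it takes. Here's a minimal sketch of my own (the MicroBench name and shape are mine, not from any library): run the code once so the JIT isn't in your timing, then average a few runs.

using System;
using System.Diagnostics;

public static class MicroBench
{
    // Runs the action once to warm up (JIT, caches), then averages
    // the elapsed time over several timed iterations.
    public static TimeSpan Measure(Action action, int iterations = 5)
    {
        action(); // warm-up run, not timed
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            action();
        }
        sw.Stop();
        return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
    }
}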

Now, with all kinds of code like this, folks go and do microbenchmarks. This usually means doing something trivial a million times in a tight loop. That's lots of fun and I'm going to do JUST that very thing right now with Chris's good work, but it's important to remember that your code is usually NOT doing something trivial a million times in a tight loop. Unless it is.

Knaģis says:

"Unfortunately LINQ has now created a whole generation of coders who completely ignores any perception of writing performant code. for/if are compiled into nice machine code, whereas .Where() creates instances of enumerator class and then iterates through that instance using MoveNext method...Please, please do not advocate for using LINQ to produce shorter, nicer to read etc. code unless it is accompanied by warning that it affects performance"

I think that LINQ above could probably be replaced with "datagrids" or "pants" or "google" or any number of conveniences but I get the point. Some code is shown in the comments where LINQ appears to be 10x slower. I can't reproduce his result.

Let's take Chris's comment and deconstruct it. First, we take an Enumerable.Range, turn it into an array, and spin through it.

var enumerable = Enumerable.Range(0, 9999999);
var sw = new Stopwatch();
int c = 0;

// approach 1
sw.Start();
var array = enumerable.ToArray();
for (int i = 0; i < array.Length; i++)
{
    if (array[i] > 10)
        c++;
}
sw.Stop();
Console.WriteLine("Enumerable.ToArray()");
c.Dump();                      // Dump() is LINQPad's output helper
sw.ElapsedMilliseconds.Dump();

The "ToArray()" part takes 123ms and the for loop takes 9ms on my system. Arrays are super fast.

Starting from the enumerable itself (not the array!) we can try the Count() one-liner:

// approach 2
Console.WriteLine("Enumerable.Count()");
sw.Restart();
c = enumerable.Count(x => x > 10);
sw.Stop();
c.Dump();
sw.ElapsedMilliseconds.Dump();

It takes 86ms.

I can try it easily in Parallel over 12 processors but it's not a large enough sample nor is it doing enough work to justify the overhead.

// approach 3
Console.WriteLine("Enumerable.AsParallel() (12 procs)");
sw.Restart();
c = enumerable.AsParallel().Where(x => x > 10).Count();
sw.Stop();
c.Dump();
sw.ElapsedMilliseconds.Dump();

It adds overhead and takes 129ms. However, you see how easy it was to try a naïve parallel loop in this case. Now you know how to try it (and measure it!) in your own tests.
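To see AsParallel() actually pay for its overhead, give each element real work to do. A hedged sketch follows; the Expensive predicate is my invention, purely to simulate a costly per-element test:

// An artificially expensive predicate so the parallel version
// has enough per-element work to amortize its coordination cost.
static bool Expensive(int x)
{
    double d = x;
    for (int i = 0; i < 200; i++)
        d = Math.Sqrt(d + i);
    return d > 10;
}

// With heavy per-element work, this should now beat the serial loop.
c = enumerable.AsParallel().Count(Expensive);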

Next, let's do something stupid and tell LINQ that everything is an object so we are forced to do a bunch of extra work. You'd be surprised (or maybe you wouldn't) how often you find code like this in production. This is an example of coercing types back and forth and as you can see, you'll pay the price if you're not paying attention. It always seems like a good idea at the time, doesn't it?

//Approach 4 - Type Checking?
Console.WriteLine("Enumerable.OfType(object) ");
var objectEnum = enumerable.OfType<object>().Concat(new[] { "Hello" });
c = 0;          // reset the count accumulated by the earlier approaches
sw.Restart();   // Restart rather than Start, so earlier elapsed time isn't counted
var objectArray = objectEnum.ToArray();
for (int i = 0; i < objectArray.Length; i++)
{
    int outVal;
    var isInt = int.TryParse(objectArray[i].ToString(), out outVal);
    if (isInt && Convert.ToInt32(objectArray[i]) > 10)
        c++;
}
sw.Stop();
c.Dump();
sw.ElapsedMilliseconds.Dump();

That whole thing cost over 4 seconds. 4146ms in fact. Avoid conversions. Tell the compiler as much as you can up front so it can be more efficient, right?

What if we enumerate over the types with a little hint of extra information?

// approach 5
Console.WriteLine("Enumerable.OfType(int) ");
sw.Restart();
c = enumerable.OfType<int>().Count(x => x > 10);
sw.Stop();
c.Dump();
sw.ElapsedMilliseconds.Dump();

Nope, the type check wasn't necessary in this case. It took 230ms and added overhead. What if this were parallel?

// approach 6
Console.WriteLine("Enumerable.AsParallel().OfType(int) ");
sw.Restart();
c = enumerable.AsParallel().OfType<int>().Where(x => x > 10).Count();
sw.Stop();
c.Dump();
sw.ElapsedMilliseconds.Dump();

That's 208ms, consistently. Slightly faster, but ultimately I shouldn't be doing unnecessary work.

In this simple example of looping over something simple, my best bet turned out to be either the Array (super fast if it was an Array to start) or a simple Count() with LINQ. I measured, so I would know what was happening, but in this case the simplest thing also performed the best.

What's the moral of this story? Measure and profile and make a good judgment. Microbenchmarks are fun and ALWAYS good for an argument, but ultimately they exist only so you can know your options, try a few, and pick the one that does the least work. More often than not (not always, but usually) the compiler creators aren't idiots, and more often than not the simplest syntax will be the best one for you.

Network access, database access, unnecessary serializations, unneeded marshaling, boxing and unboxing, type coercion - these things all take up time. Avoid doing them, and when you do them, don't just know why you're doing them, but also that you are doing them.

Is it fair to say "LINQ is evil and makes things slow?" No, it's fair to say that code in general can be unintuitive if you don't know what's going on. There can be subtle side-effects whose time can get multiplied inside of a loop. This includes type checking, type conversion, boxing, threads and more.

The Rule of Scale: The less you do, the more you can do of it.

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

June 26, 2012 22:58
Not to mention, appropriate use of LINQ to concatenate multiple queries together can produce your result by making only one pass through the data, with very little memory overhead - no intermediate buffers between stages, etc.
June 26, 2012 23:08
Also, rather than making blanket statements, let us also consider that "a whole generation" of software developers went ahead of us and lamented high-level languages.

Two things to note:
1) While it is not by any means an excuse to code poorly, most software devs aren't working on code that needs to be performant. Period. The cycles are there idle on the box and I would favor readability and maintainability over something that shaves 10% off of a small fraction of a second.

2) When there are areas of the software being built that need to eke performance out of the hardware, I would intentionally put developers in that dev seat who understand performance. It's also a good opportunity to pair up and mentor someone who otherwise, ahem, degrades your project with Linq statements.

Finally, I just spoke with a dev last week who was bashing Linq performance, based on his experience with scaling Linq to SQL several years ago. Things have changed, home slices.

It's important to keep an open mind, consider advances in the functionality and perhaps, for some, lose the ego and admit that in many cases we're, indeed, not as smart as the compiler.
June 26, 2012 23:40
Hey Scott, here are a couple of utility extension methods I use ALL the time for quickly doing these types of tests

public static class ActionExtensions
{
    public static TimeSpan AverageThis(this Action p, int numberOfTimesToRun)
    {
        var spans = new TimeSpan[numberOfTimesToRun];
        for (int i = 0; i < numberOfTimesToRun; i++)
        {
            spans[i] = TimeThis(p).Elapsed;
        }

        double vAverage = (from timeSpan in spans
                           select timeSpan).Average(pSpan => pSpan.TotalSeconds);
        return TimeSpan.FromSeconds(vAverage);
    }

    public static Stopwatch TimeThis(this Action p)
    {
        Stopwatch vStopWatch = Stopwatch.StartNew();
        p();
        vStopWatch.Stop();
        return vStopWatch;
    }
}

public class Demo
{
    public void TimeThisDemo()
    {
        long elapsedMilliseconds = new Action(() => Thread.Sleep(200)).TimeThis().ElapsedMilliseconds;
    }

    public void AverageThisDemo()
    {
        double elapsedMilliseconds = new Action(() => Thread.Sleep(200)).AverageThis(1000).TotalSeconds;
    }
}

June 26, 2012 23:41
Also - any way for me to put my code in so that it gets formatted properly on your blog?
June 26, 2012 23:52
I only came out of university last year and have been working on a legacy product since - upgrading and modernising the back end so it is maintainable. For this project, we are aiming for a solid product that can scale and is maintainable and readable (you wouldn't believe the amount of bad, redundant, or copy-pasted code there is in this product).

In this case, I will nearly always favour the small lambda/linq statement over a 5-10 line for loop performing the same job.

I trust the compiler, but like Scott said, it's all about the judgement.
June 27, 2012 0:58
What really scares me is Entity Framework. I'm seeing a lot of new developers who can't write proper SQL anymore and wonder why the application performs slowly.
June 27, 2012 1:18
Is there any advantage to declaring the Stopwatch to be of type var?

var sw = new Stopwatch();
June 27, 2012 1:25
The tests you are doing seem less about the compiler and more about the Linq-to-objects implementation. There's no compiler magic going on in Linq (besides the optional SQL-like DSL, but that has nothing to do with performance). I agree with your conclusion but I don't think the blog title really matches what you were communicating.

The first code example is interesting because Count() actually does optimize for ICollection.Count already.
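For reference, that fast path looks roughly like this - a paraphrase of what Enumerable.Count() does, not the actual BCL source:

using System;
using System.Collections.Generic;

public static class MyEnumerable
{
    public static int Count<TSource>(this IEnumerable<TSource> source)
    {
        // Fast path: collections already know their count.
        var collection = source as ICollection<TSource>;
        if (collection != null)
            return collection.Count;

        // Slow path: walk the whole sequence.
        int count = 0;
        using (var enumerator = source.GetEnumerator())
        {
            while (enumerator.MoveNext())
                count++;
        }
        return count;
    }
}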
June 27, 2012 1:30
@Slapout: var is just compiler magic for using type inference. The sw variable is of type Stopwatch. Using var just makes for less redundancy in that line of code.
June 27, 2012 1:31
I really like this post; it makes a really important point. So often I hear something along the lines of 'you shouldn't use x technology because it makes things slow', but with no empirical evidence to back up the claim. Inevitably, what you find when you look into it is that if you write bad or poorly-thought-out code in any framework, it will perform poorly.

@GiuseppeFalce EF is a great example of this: poorly written EF queries undoubtedly perform terribly, but with well-structured queries EF can actually perform really well. This is no different from writing efficient or inefficient SQL statements. (In the interest of providing evidence for my claim, here's my data: http://blog.staticvoid.co.nz/2012/03/entity-framework-comparative.html)
June 27, 2012 1:31
Eric - Yes, put them in a PRE tag.

dcstraw - I'm using the term "compiler" in the broadest sense but your point is taken.
June 27, 2012 3:15
I find that it's smart to always write LESS code. This is true for the following reasons:

1. Easier to read. The shorter the better. Nobody wants your code equivalent of War and Peace.

2. Fewer bugs. There is a ton of evidence that bugs are a function of lines of code. A new or poor developer writes maybe a bug every 5-10 lines of code; a good or great developer maybe every 10, 20, or even 50. Writing less code means writing fewer bugs.

3. Libraries. Most problems are solved. Don't re-solve them yourself. I can't tell you how many people don't know about File.ReadAllText() or Path.Combine() and keep rewriting it themselves. Libraries are constantly improved and are tested by millions. Use them.

4. Runs faster. Short code not only takes fewer CPU cycles to run, but is also more likely to fit in the CPU cache.

For this reason, .Count() should be used unless it is seen to be a performance bottleneck in the application. Turns out it's nearly the fastest anyway.
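A quick illustration of point 3, with hypothetical dir and file values - the hand-rolled path concatenation people keep rewriting versus the library call:

string dir = @"C:\logs";   // hypothetical values
string file = "today.txt";

// Hand-rolled: fragile around separators and trailing slashes.
string path = dir + "\\" + file;

// The library call handles the separator for you.
string path2 = System.IO.Path.Combine(dir, file);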
June 27, 2012 3:27
I do wish that there were more excellent profilers out there that don't cost an arm and a leg. Although, I've been pretty happy with the free version of the Eqatec Profiler for most things.

However, recently I've been working on a parser/lexer which will be used as a library in a few projects. In this case the code itself is so fast that profiling it will hurt performance too much in certain areas and lead you down the wrong path. So in fact I've had to resort to custom Stopwatch timings much like what you're doing here.

I would think that anyone creating libraries that tend to be CPU-bound is going to run into the same situation at some point.
June 27, 2012 4:27
Linq bad for performance? I don't care (provided it's true, which blanket statements like this rarely are), it's good for my productivity and maintainability, which is way more important in my book than a couple of ms.
Knagis, whoever you are, go program in assembly (it's insanely fast), while I do serious work.
June 27, 2012 6:08
"The compiler creators aren't idiots" - very true. When you look at the optimizations the compilers pull off, it's way beyond what most any mortal could do by hand. Still, there are disconcerting cases like local variables making .NET code slower, which besides reminding us that the compiler is not perfect, put that much more "fun" into microbenchmarking - and probably in some cases crafting real-world performant code.
Ed
June 27, 2012 7:08
You should be concentrating on writing *readable* code.

Wasting time on unneeded efficiency is bad but so is writing lots of poor code and thinking you are adding value.
June 27, 2012 12:06
As always this is a good subject for debate. I think there are several underlying causes that make people do these micro benchmarks.

First of all, doing this while writing code is premature optimization, period. Write your feature first and then, as you do performance tests on it, see what happens. It could already be sufficiently fast.

Second of all, I think this usually comes from a lack of understanding of what is actually happening with your code. When Linq was first introduced and I started using Linq to objects, I made some bad queries myself, which indeed were very, very slow. The question is, do you take the time to figure out why some code is slow?

I did write the 'old school' equivalent of the Linq statement that gave me problems, debugged the h#$% out of my Linq statement, and found out there was some other underlying problem I could fix by restructuring my statement.

Another thing that I miss with a lot of developers is cost awareness. Trust me, any company that can choose between spending $1000 more on hardware or having you 'optimize' your code for a week will go for the better hardware. It tends to be cheaper to spend money on hardware than it is to spend money on a developer.

So my point is: don't optimize early, know what your code is actually doing, and always be aware of costs.
June 27, 2012 12:52
I would never prematurely optimize any code; after all, it's far cheaper to profile when the product is finished, if you even experience performance issues in the first place.

I don't think LINQ is as dangerous in this context as L2S or L2E, in which case you have more than enough rope to hang 10 men; that's where the real problems will lie.

Micro-benchmarking seems like premature optimization to me, which, correct me if I'm wrong, is a bad thing...?
June 27, 2012 13:05
This is something very interesting


void Main()
{
    var enumerable = Enumerable.Range(0, 9999999);
    var array = enumerable.ToArray();
    var sw = new Stopwatch();
    int c = 0;

    // approach 1
    sw.Start();
    for (int i = 0; i < array.Length; i++)
    {
        if (array[i] > 10)
            c++;
    }
    sw.Stop();
    Console.WriteLine("Enumerable.ToArray()");
    sw.ElapsedMilliseconds.Dump();

    // approach 2
    Console.WriteLine("Enumerable.Count()");
    sw.Restart();
    c = array.Count(x => x > 10);
    sw.Stop();
    sw.ElapsedMilliseconds.Dump();

    // approach 3
    Console.WriteLine("Enumerable.AsParallel() (8 procs)");
    sw.Restart();
    c = array.AsParallel().Count(x => x > 10);
    sw.Stop();
    sw.ElapsedMilliseconds.Dump();
}


On my machine


Enumerable.ToArray()
42
Enumerable.Count()
133
Enumerable.AsParallel() (8 procs)
51
June 27, 2012 13:21
Hi all,

You cheat with your example! :-)

Array is faster than LINQ. Considering your code:

var enumerable = Enumerable.Range(0, 9999999);
var array = enumerable.ToArray();

the second line takes 100ms....
If you do this:

var array = Enumerable.Range(0, 9999999).ToArray();

the first example takes 33ms... the second (LINQ) takes 172ms.

QED...

I wouldn't expect LINQ to be faster than FOR....

IBERSERK
June 27, 2012 16:36
Overall I find it's not LINQ that's the problem, but people who don't know LINQ using it anyway. As pointed out, the compiler can bang through arrays and perform tasks quickly; things get slow when it has to convert one data type to another, because there are a ton of checks involved, or when people use the wrong function for the task at hand.

One of my favorites was a person who used LINQ to count the items in an array containing a certain substring. He then used that count in a for loop to go through the array, pull out the values with the substring, and store them in another array. He complained it seemed slow. I took his 6 lines of code, made it 1 really short line, let LINQ do all the work, removed the for loop altogether, and it zipped along (roughly the shape sketched below). So I had to explain to the boss that LINQ isn't bad or slow; you just need to know how to use it.

Since LINQ is so powerful and easy, people tend not to use it properly. This is especially true when it comes to LINQ to SQL. There is so much flexibility that people can do whatever they want without really knowing what they are doing, and they tend to do things that make their code much slower, not because the compiler or L2SQL is inefficient, but because they told L2SQL to do it in an ineffective manner, and the compiler says "hey, you're the coder; if you want it to run this way, be my guest." So until the compiler starts telling coders they are dumb and that it should be done another way, things can appear to be compiler issues, but really, 99% of the time the issue is between the chair and the keyboard.
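The before/after probably looked something like this (my reconstruction with hypothetical source and needle variables, not the actual code):

string[] source = { "alpha", "beta", "alphabet" }; // hypothetical data
string needle = "alpha";

// Before: two passes - count first, then loop again to collect.
int count = source.Count(s => s.Contains(needle));
var matches = new string[count];
int j = 0;
for (int i = 0; i < source.Length; i++)
{
    if (source[i].Contains(needle))
        matches[j++] = source[i];
}

// After: one pass, one line.
var matchesLinq = source.Where(s => s.Contains(needle)).ToArray();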
June 27, 2012 16:48
In terms of CPU performance, the only time I've had to swap out a LINQ implementation for a hand-rolled replacement was when doing an Except(). Replacing the LINQ call with a hand-rolled implementation knocked off a noticeable overhead.

The other thing to watch out for is memory pressure. It's easy to see a tight loop performing fine and think nothing of its impact on the GC, especially if you're using a lot of projections. It takes a while for the GC to collect, and you won't notice it 'in the small' when profiling.
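A typical way that creeps in (my illustration, not actual project code) is projecting inside a hot path, where every pass allocates iterators plus an object per element:

// Hypothetical: orders is a collection, Process does the real work.
// Each pass allocates a Where iterator, a Select iterator, and one
// anonymous object per matching element - cheap once, but it keeps
// the GC busy when this runs thousands of times.
foreach (var line in orders.Where(o => o.Total > 10)
                           .Select(o => new { o.Id, o.Total }))
{
    Process(line.Id, line.Total);
}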
June 28, 2012 0:59
@Eric Ziko: A couple of comments;

First, I don't think you gain anything by returning the Stopwatch instance from your "TimeThis" method instead of the elapsed time.

Second, in your "AverageThis" method, the line: from timeSpan in spans select timeSpan has no effect; you can replace it with spans and get the same result. (The only time this construct is useful is if you're returning data to the caller, and you don't want them to be able to cast it back to the original array or list.)

Third, by averaging the TotalSeconds of the elapsed times, you're reducing the resolution of the stopwatch. It would probably be better to average the Ticks instead.

Finally, there's no need to create an array to store the elapsed time for each result; you can simply sum the times and then divide by the number of iterations:


public static class ActionExtensions
{
    public static TimeSpan TimeThis(this Action p)
    {
        var stopwatch = Stopwatch.StartNew();
        p();
        stopwatch.Stop();
        return stopwatch.Elapsed;
    }

    public static TimeSpan AverageThis(this Action p, int numberOfTimesToRun)
    {
        long averageTicks = Enumerable.Range(0, numberOfTimesToRun)
            .Sum(_ => TimeThis(p).Ticks) / numberOfTimesToRun;

        return TimeSpan.FromTicks(averageTicks);
    }
}
June 28, 2012 6:25
I think that in this blog Scott has accurately demonstrated what I was trying to say with my original brief comment and code sample.

I wasn't trying to be either pro or anti LINQ, rather I was for making an informed decision based on (pseudo) scientific observations.

It also only takes a tiny change in requirements or constraints to swing these figures in favour of one method or another, which is why I am all for observation and measurement prior to decision making. Over time your experiences will lead you to make judgement calls one way or another, which is a good thing. However, you have to be careful not to be blinded by these judgements and be open to re-assessing the facts that led you to making them.
June 28, 2012 12:28
Heh heh. There's an issue going around my office about performance at the moment - do we use reflection at run time or not. I sent this article around and everyone is managing to use it to support their own, conflicting, point of view.
June 30, 2012 10:30
This debacle should have never happened in the first place. There is really no reason for LINQ to be in any way slower than hand-coded loops. The FtLinq library shows how LINQ expressions can be used to recompile LINQ-to-Enumerable expressions into imperative bits of code on the fly with the respective sweet performance benefit while keeping the code clean and understandable (kind of like Linq2SQL can compile queries for later use). Microsoft should have done something similar with the very first release of LINQ.
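The enabling trick is the one expression trees already give you: build the query as data, compile it once, and reuse the delegate. A minimal sketch of the idea (this is not FtLinq's actual API):

using System;
using System.Linq.Expressions;

class CompiledPredicateSketch
{
    static void Main()
    {
        // The predicate as data (an expression tree)...
        Expression<Func<int, bool>> expr = x => x > 10;

        // ...compiled once to a reusable delegate, so the compile
        // cost is paid a single time rather than per call.
        Func<int, bool> predicate = expr.Compile();

        Console.WriteLine(predicate(42)); // True
    }
}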
July 02, 2012 19:50
I second Paul Reid's comment and want to add another point to his list.

5. "Less-code" code can ALWAYS easily be replaced with "more-code" code. The opposite is not true.

It's easier to replace LINQ with a loop than a loop with LINQ.
July 03, 2012 4:56
I think maintainability is almost more important than performance these days. If it's easier to read, it's easier to maintain, and that in itself will cause fewer performance issues in the long run.
July 03, 2012 18:04
Absolutely. Always measure. Compiling lambda expressions and invoking a delegate using DynamicInvoke are among the slowest methods of code invocation on .NET. But does that mean I would not use them? It depends! When I know what the cost is, I can make an informed decision.

I actually do the above fluently here which might be interesting to you.

http://byterot.blogspot.co.uk/2012/05/performance-comparison-of-object.html
July 08, 2012 20:23
Hi Scott,

I tried running your example, but I don't have the assembly for the int.Dump() extension method. Can you provide this? Is it a Microsoft library?

Thanks,
Mir
July 08, 2012 21:00
2 more questions:

(1) Why are you concatenating "Hello" to enumerable.OfType<object>() in Approach#4? Is this because the compiler will know it is type int if you don't add in another type?
(2) I realize this isn't the point of your post, but I am trying to make sense of the code sample as a whole. In approach #4, why don't you evaluate (outval > 10) once you've already parsed the int? Was that an oversight, or am I missing something here?

Thanks,
Mir
July 10, 2012 2:20
@Mir - the .Dump() extension method is available inside LINQPad. Check out the LINQPad FAQ
July 11, 2012 17:58
Thanks, Walter!
July 11, 2012 21:50
Most of the time I ignore these issues. LINQ allows me to express my intent.

If there is a performance problem with the running application, I'll profile it. After that, I'll concentrate on tuning the bottlenecks. I won't waste my time on in-memory code that's executed once.

In my experience bottlenecks tend to be in data access code, server round-trips, or other I/O.

Don't prematurely optimize. Don't micro-optimize.
July 20, 2012 19:45
I have been spending this last week optimizing our code, which was full of .Where() calls inside loops. That was fine when they tested with 100 items. Then a customer uses 5000 items and you're suddenly having performance problems.

And yes, the developer was saying that "Linq is slow" while they're just not thinking about what's happening behind the scenes.

As for micro-optimizations, I used a custom Int.Parse just yesterday. It was about twice as fast, so it was worth it, as it's used hundreds of thousands of times in our code.
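The speedup in that kind of replacement usually comes from skipping what int.Parse has to handle: signs, whitespace, culture, and overflow. A sketch under exactly those assumptions (digits-only input; not the commenter's actual code):

// Parses a non-negative, digits-only string. No sign, whitespace,
// culture, or overflow handling - which is exactly why it's faster.
static int FastParse(string s)
{
    int result = 0;
    for (int i = 0; i < s.Length; i++)
        result = result * 10 + (s[i] - '0');
    return result;
}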
October 15, 2012 20:08
Well said.

Just note the 6th code snippet:

.Where(..).Count()

could be just

.Count(..)

Cheers,

Bruno

