Scott Hanselman

Benchmarking .NET code

February 25, '16 Comments [28] Posted in Open Source
You've got a fast car...photo by Robert Scoble used under CC

A while back I did a post called Proper benchmarking to diagnose and solve a .NET serialization bottleneck. I also had Matt Warren on my podcast and we did an episode called Performance as a Feature.

Today Matt is working with Andrey Akinshin on an open source library called BenchmarkDotNet. It's becoming a very full-featured .NET benchmarking library being used by a number of great projects. It's even been used by Ben Adams of "Kestrel" benchmarking fame.

You basically mark your benchmark methods with attributes, much like unit tests. For example:

using System.Security.Cryptography;
using BenchmarkDotNet.Attributes;

public class Md5VsSha256
{
    private readonly SHA256 sha256 = SHA256.Create();
    private readonly MD5 md5 = MD5.Create();
    private readonly byte[] data = new byte[10000]; // the input to hash

    [Benchmark]
    public byte[] Sha256()
    {
        return sha256.ComputeHash(data);
    }

    [Benchmark]
    public byte[] Md5()
    {
        return md5.ComputeHash(data);
    }
}
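To execute the benchmarks you hand the class to the runner. Here's a minimal sketch, assuming the [Benchmark] methods live in a class named Md5VsSha256 (a name I'm using for illustration; substitute your own):

```csharp
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // Runs every [Benchmark] method in the class and prints the summary table
        BenchmarkRunner.Run<Md5VsSha256>();
    }
}
```

This assumes the BenchmarkDotNet NuGet package is installed; run a Release build outside the debugger or the numbers will be meaningless.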

The result is lovely output like this, in a markdown table that you can even paste into a GitHub issue if you like.

BenchmarkDotNet makes a table of the Method, Median and StdDev

Basically it's doing the boring bits of benchmarking that you (and I) would likely get wrong anyway. There are a ton of samples covering frameworks and CLR internals that you can explore.

Finally, it includes a ton of features that make writing benchmarks easier, including csv/markdown/text output, parameterized benchmarks, and diagnostics. Plus, it can now tell you how much memory each benchmark allocates; see Matt's recent blog post for more info on this (it's implemented using ETW events, like PerfView).
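To give a sense of what a parameterized benchmark looks like, here's a hedged sketch (the class, method, and values are my own invention): the [Params] attribute makes the runner execute the benchmark once per listed value and adds a column to the summary table so you can compare scaling:

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;

public class StringJoinBench
{
    // The runner executes the benchmark once for each of these values
    [Params(10, 100, 1000)]
    public int Count;

    [Benchmark]
    public string Join()
    {
        return string.Join(",", Enumerable.Range(0, Count));
    }
}
```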

There's some amazing benchmarking going on in the community. ASP.NET Core recently hit 1.15 MILLION requests per second.

That's pushing over 12.6 Gbps. Folks are seeing nice performance improvements with ASP.NET Core (formerly ASP.NET 5 RC1) even just from upgrading.

It's going to be a great year! Be sure to explore the ASP.NET Benchmarks on GitHub at https://github.com/aspnet/benchmarks as we move up the TechEmpower Benchmarks!

What are YOU using to benchmark your code?


Sponsor: Thanks to my friends at Redgate for sponsoring the blog this week! Have you got SQL fingers? Try SQL Prompt and you'll be able to write, refactor, and reformat SQL effortlessly in SSMS and Visual Studio. Find out more with a free trial!

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

Thursday, 25 February 2016 07:35:50 UTC
More good news from Hanselman :)
Thursday, 25 February 2016 07:43:28 UTC
Great Article and love where .Net is heading.
Just to point out a spelling mistake: it should be "called BenchmarkDotNet" and not "celled BenchmarkDotNet". A link to the GitHub repository would be nice here too.
Thursday, 25 February 2016 08:05:31 UTC
Ahmad - Sorry! Not sure how I missed that! Updated
Scott Hanselman
Thursday, 25 February 2016 08:09:50 UTC
> What are YOU using to benchmark your code?

https://github.com/Orcomp/NUnitBenchmarker

I hope benchmarking will become as ubiquitous as unit testing.

What I would really like to see is an open source initiative to properly define interfaces with accompanying unit tests and performance tests making it easier for people to compare their implementations on a level playing field.

Json serialization comes to mind, as well as various tree implementations.

Being able to compare the time complexity of different algorithms or data structures is also important. Some data structures can perform very well for small collections but quickly deteriorate as the collection grows. Understanding how various data structures affect the GC as collection sizes grow matters too.
Thursday, 25 February 2016 08:46:35 UTC
As a contributor to BenchmarkDotNet, I just wanted to mention that it will most probably support DNX451 and DNXCORE50 in the next few weeks. So stay tuned!
Adam Sitnik
Thursday, 25 February 2016 08:51:08 UTC
Excellent article. Quick correction: "ASP.NET Core (formerly ASP.NET RC1)" is missing the 5
Stuart Lang
Thursday, 25 February 2016 09:13:33 UTC
Awesome News!
Thursday, 25 February 2016 09:25:16 UTC
Andrey's last name is AkiNshin, not Akishin.
Thursday, 25 February 2016 10:38:32 UTC
His surname is AkiNshin, not Akishin :)
lalka
Thursday, 25 February 2016 10:49:02 UTC
Nice article.
Thursday, 25 February 2016 11:47:49 UTC
Currently playing with NBench (https://github.com/petabridge/NBench).

Roijtek
Thursday, 25 February 2016 13:49:44 UTC
The AKKA.Net guys from Petabridge have a similar utility called NBench. Here's an intro blog:

Introducing NBench - an Automated Performance Testing Framework for .NET Applications
https://petabridge.com/blog/introduction-to-nbench/

The motivation behind its birth was interesting. They've integrated the tool into the build pipeline to ensure pull requests and patches don't negatively affect performance.



Ameer Deen
Thursday, 25 February 2016 14:30:58 UTC
Sundial

But that project is morphing a bit and will probably be added to a suite that I'm making. Plus I need to find time to optimize the graph generation for memory usage, etc.
James Craig
Thursday, 25 February 2016 14:55:31 UTC
I didn't know that library. Thanks!
Thursday, 25 February 2016 17:56:30 UTC
A nice step in the right direction. Now all it needs is the # of allocations. That definitely helps show that allocating unnecessary memory has a dramatic effect on your benchmarks.
Corey
Thursday, 25 February 2016 19:04:30 UTC
I have been using the Load Test Toolbox to find perf problems in web requests.

https://github.com/stevedesmond-ca/LoadTestToolbox

It puts out "nice-ish" charts too

mike j
Friday, 26 February 2016 01:50:00 UTC
This is awesome! I was writing my own data structure library for .Net and was curious how my code was performing compared to .Net's System.Collections.Generic versions. I used the Stopwatch class to benchmark them, but this looks more robust and feature-rich. :)
Gopal Adhikari
Friday, 26 February 2016 13:01:30 UTC
@Corey

With regards to # and amount of memory allocations, BenchmarkDotNet already has that see http://mattwarren.github.io/2016/02/17/adventures-in-benchmarking-memory-allocations/
Saturday, 27 February 2016 15:14:36 UTC
I use almost entirely ETW to truly benchmark and profile code at the same time. The library looks nice for micro-benchmarks. What I miss from the summary table is first-call effects. What if your method is only called once during startup, but the JIT time is astronomical due to a JIT bug? If something is called often, its performance is easy to measure. The most annoying aspect of waiting for an application is startup; that's where all the pesky one-time init effects hide, and they can become quite complex to measure.
Sunday, 28 February 2016 17:13:00 UTC
12 Gbps is like a dream for me :) Liked your article very much
Wednesday, 02 March 2016 11:16:54 UTC
@Alois Kruis

BenchmarkDotNet has a "SingleRun" mode that should give you what you need. See this sample for how it can be used
Wednesday, 02 March 2016 11:18:25 UTC
@Alois Kraus - apologies for spelling your name wrong in my last comment
Wednesday, 02 March 2016 13:52:48 UTC
Very informative... Thanks
Surya Prakash
Sunday, 06 March 2016 01:56:36 UTC
We benchmark our code all the time. For serialization we have done this:
SERBENCH here are some charts:
Typical Person Serialization. What is interesting is that most benchmarks provided by software makers are very skewed, i.e. serializers depend greatly on payload type, as our tests indicated. Some things taken for granted, like "Protobuf is the fastest", are not always true; in some test cases you see unexpected results. That is why it is important to execute the same suite of test cases against different vendors. Sometimes benchmarking something is more work than writing the component; for example, the SpinLock class has many surprises.

Another thing that 90% of people I talk to disregard: SpeedStep. Try disabling it via the control panel and don't be surprised if you get a 25% perf boost EVEN IF you ran your tests for minutes. SpeedStep does magic tricks: it lowers your CPU clock even if the CPU is swamped, but not on all cores. Running tests from within VS (with the debugger attached) can slow things down by a good 45-70%. So I would say:
a. Test different work patterns
b. Build with Optimize
c. Run standalone (Ctrl+F5)
d. Disable "Power Saver"/SpeedStep
e. Run for a reasonable time (not 2 seconds)
f. For multi-threaded perf don't forget to set GC server mode, otherwise your "new" will be 10-40% slower than it could be
g. Profilers sometimes lie! They guarantee nothing. Great tools, but sometimes they don't show what's happening and may be a source of Heisenbugs
Wednesday, 30 March 2016 15:35:34 UTC
Hopefully, this approach will become an industry standard one day.
Wednesday, 20 April 2016 08:14:41 UTC
I saw matt give a talk on this last night and this is a great piece of software. One day performance testing will be as routine as unit testing is today.
Comments are closed.

Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.