Scott Hanselman

Hanselminutes Podcast 103 - Quetzal Bradley on Testing after Unit Tests and the Myth of Code Coverage

March 08, 2008 Posted in Podcast

My one-hundred-and-third podcast is up. On the recommendation of Chris Sells, I gave Quetzal (ket-zal) Bradley a call to talk about Code Coverage. Quetzal is a Developer in the Connected Systems Division and has some interesting ideas on testing after unit testing and code coverage. Think 100% Code Coverage is enough?

Subscribe: Subscribe to Hanselminutes | Subscribe to my Podcast in iTunes

If you have trouble downloading, or your download is slow, do try the torrent with µtorrent or another BitTorrent Downloader.

Do also remember the complete archives are always up and they have PDF Transcripts, a little-known feature that shows up a few weeks after each show.

Telerik is our sponsor for this show.

Check out their UI Suite of controls for ASP.NET. It's very hardcore stuff. One of the things I appreciate about Telerik is their commitment to completeness. For example, they have a page about their Right-to-Left support while some vendors have zero support, or don't bother testing. They also are committed to XHTML compliance and publish their roadmap. It's nice when your controls vendor is very transparent.

As I've said before, this show comes to you with the audio expertise and stewardship of Carl Franklin. The name comes from Travis Illig, but the goal of the show is simple: avoid wasting the listener's time (and make the commute less boring).

Enjoy. Who knows what'll happen in the next show?

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

March 13, 2008 0:10
This episode is too funny...just so crazy and out there. I love the fact that you seek out interesting people from very different backgrounds/cultures; that's why I keep listening to your show (I am not a .NET guy). And what Bradley said about time travel was like something straight out of the movie Deja Vu - I wonder if he helped out with the script?
April 01, 2008 0:41
I know this is unfashionable, and maybe just plain wrong.... but personally I have an aversion to debug assertions. Some reasons I don't like them:

Why use them instead of exceptions?
If they're being used to improve performance of the release build, then I'd challenge first of all whether this isn't just a form of premature optimization - and I'd guess there's usually no data to prove that the equivalent exception-checking code would actually have anything other than a trivial performance impact.

Next, a question... why would you want an assertion *not* to fire in the release build? Of course exception code is still present in the release build, but assertions aren't. Since an assertion should only check for conditions which indicate an unexpected state, and given performance isn't an issue, why wouldn't you want the system to fail fast once it's known to be in that state? (As Rumsfeld would put it... 'We have a known unknown!')
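
To make that concrete, here's a rough sketch of the two choices (Account and AccountService are names I've just invented for illustration). Debug.Assert is marked [Conditional("DEBUG")], so the first check disappears from a release build entirely, while the second is present in every build:

using System;
using System.Diagnostics;

public class Account { public decimal Balance; }

public class AccountService
{
    // Variant 1: the assertion. Debug.Assert is marked
    // [Conditional("DEBUG")], so the call is stripped from release
    // builds and the unexpected state goes unnoticed in production.
    public void CreditAsserted(Account account, decimal amount)
    {
        Debug.Assert(amount >= 0, "amount must be non-negative");
        account.Balance += amount;
    }

    // Variant 2: the exception. The check survives in every build,
    // so an unexpected negative amount fails fast in release too.
    public void CreditChecked(Account account, decimal amount)
    {
        if (amount < 0)
            throw new ArgumentOutOfRangeException("amount");
        account.Balance += amount;
    }
}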

One statement which was made in this podcast was something along the lines of 'nothing forces a bug to be fixed quicker than a failing assertion'.

I'd challenge this as a differentiator between assertions and exceptions. If exception handling is used correctly, code should only catch exceptions which it can handle and which do not indicate unexpected conditions. Unexpected (and therefore irrecoverable) exceptions should not be caught and should be allowed to propagate, thus causing the system to fail fast. If the application is coded correctly, the bottom of the call stack will have an exception handler which performs appropriate error handling prior to termination - usually logging or displaying the error.
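
Roughly what I mean, as a sketch (the file name and messages are invented): the expected IOException is caught and handled at the call site, while anything unexpected propagates to a single last-chance handler at the bottom of the stack:

using System;
using System.IO;

public static class Program
{
    public static void Main()
    {
        // The bottom of the call stack: a last-chance handler that
        // logs before the process terminates on an unexpected error.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            Console.Error.WriteLine("Fatal: " + e.ExceptionObject);

        try
        {
            Console.WriteLine(File.ReadAllText("settings.config"));
        }
        catch (IOException ex)
        {
            // Expected and recoverable, so it is caught and handled.
            Console.WriteLine("No config, using defaults: " + ex.Message);
        }

        // Anything unexpected (NullReferenceException, etc.) is not
        // caught here; it propagates and the system fails fast.
    }
}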

I also find that assertions are overused; in particular, I've seen code which seems to wrap most method calls with assertions for post-conditions.

I can understand this (somewhat) if the code being called is third-party code, but if it's internal code then shouldn't this assertion instead be in a unit test? Testing post-conditions with an assertion inside code whose primary purpose has nothing to do with the assertion (its purpose isn't to test the called method) is, I would argue, hit-and-miss.

In addition, if you place post-condition tests in unit-test code, they have the advantage that they can be maintained centrally along with the unit being tested. Developers of client code can see which post-conditions are already checked by looking at the unit tests. Littering client code with post-condition tests results in redundant, replicated tests and cluttered code which (IMO) is difficult to read.
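
Something like this NUnit-style sketch (the Normalizer class is invented) is what I have in mind - the post-condition lives once, next to the unit, instead of at every call site:

using NUnit.Framework;

public static class Normalizer
{
    // A hypothetical unit under test.
    public static string Clean(string input)
    {
        return input == null ? "" : input.Trim();
    }
}

[TestFixture]
public class NormalizerTest
{
    // The post-condition "never returns null" is checked once, here,
    // instead of being asserted after every call in client code.
    [Test]
    public void ShouldNeverReturnNull()
    {
        Assert.IsNotNull(Normalizer.Clean(null));
        Assert.IsNotNull(Normalizer.Clean("  abc  "));
    }
}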

Another particular usage of assertions which I personally discourage is the redundant assertion - a frequent example is asserting, as a post-condition of a call, that a reference is non-null, when the very next line de-references that reference.... I've seen this quite often, and my reaction is: why do this? The de-reference will result in a NullReferenceException anyway. It's redundant code which I'd argue obfuscates the algorithm.
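
The pattern I mean looks something like this (Customer is invented for illustration):

using System.Diagnostics;

public class Customer
{
    public void Save() { }
}

public class Example
{
    public void Update(Customer customer)
    {
        // Redundant: if customer were null, the very next line would
        // throw a NullReferenceException at the same spot anyway.
        Debug.Assert(customer != null);
        customer.Save();
    }
}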

One use I can see for assertions is to assert a post-condition for a routine, within the routine itself - particularly if the routine has a complex algorithm. Even here, though, wouldn't a unit test be a better home for this?
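
As a sketch of what I mean (the sort routine is just an example I've made up):

using System.Diagnostics;

public static class Sorting
{
    public static void InsertionSort(int[] items)
    {
        for (int i = 1; i < items.Length; i++)
        {
            int value = items[i];
            int j = i - 1;
            while (j >= 0 && items[j] > value)
            {
                items[j + 1] = items[j];
                j--;
            }
            items[j + 1] = value;
        }

        // The post-condition asserted within the routine itself.
        for (int i = 1; i < items.Length; i++)
            Debug.Assert(items[i - 1] <= items[i], "output is not sorted");
    }
}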

One thing I have wondered about a little is whether an IDE option to 'hide' assertions, or perhaps to colour them in a lighter shade than the surrounding code, would help limit their effect on readability by making them less obtrusive.

I'd love to hear what you think on this Scott.

Regards,
Phil
April 02, 2008 11:30
A lot of good points in this talk, the most important point being that code coverage does not say that your code is properly tested - it only indicates which code isn't tested. So if you are writing test code just to increase code coverage, you are doing something wrong.
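To illustrate (a made-up example, NUnit-style): the single test below gives this method 100% line coverage, yet says nothing about the division-by-zero input:
using NUnit.Framework;
public static class Pricing
{
    // A single statement, so any one test yields 100% line coverage.
    public static decimal UnitPrice(decimal total, int quantity)
    {
        return total / quantity;
    }
}
[TestFixture]
public class PricingTest
{
    // Full coverage, yet the quantity == 0 case
    // (a DivideByZeroException) is never exercised.
    [Test]
    public void ShouldComputeUnitPrice()
    {
        Assert.AreEqual(5m, Pricing.UnitPrice(10m, 2));
    }
}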
I find that Behaviour-Driven Development (BDD) brings some interesting ideas about how to write the tests. Instead of testing methods, you aim to describe the behaviour of your class or component. You describe the behaviour by writing tests whose names start with "should". The most common example of this is the description of the behaviour of a stack (I have removed test attributes for brevity):
public class StackTest
{
public void ShouldReturnLastPushedElementOnPop() {...}
public void ShouldThrowExceptionWhenPoppingAnEmptyStack() {...}
....
}
An added benefit of using this approach is that the test names serve as a description of your component or class. You can actually read the names and get an idea of how the class or component behaves without looking at the actual test code.
If you feel you have described the behaviour of your component or class properly, and there is still code that is not covered by tests, you should consider whether or not the code is actually needed.
There are some open source frameworks around that are helpful for BDD, like .NetSpec, NSpec and NSpecify.

