[Article] Achieving And Recognizing Testable Software Designs – Part I

Recently, I had the pleasure of speaking at the Microsoft Dev/IT Pro Days conference in Belgium. The organizers approached me to ask whether I would do a session on “Designing for testability” as one of the three talks I was to give there.

The topic was not something I had spoken about before, but definitely something I had thought about, considered and wrestled with many times on many projects and occasions.


I set out first to determine what, in my eyes, the definition of a “testable system” might be. I came to the realization that a system’s testability is not measured in a vacuum; it has to be “mirrored” through external, testing-related factors. For example, how easy would it be to write quality unit tests against such a system? And to answer that question, one has to ask what “quality unit tests” really means in this context. In this article we’ll try to define what a testable system design really means, and explore some basic design rules to make sure we can keep that testability in the system from the beginning.


Here’s my current definition of a testable system:


“For each logical part of the system, a unit test can be written relatively easily and quickly that satisfies all of the following PC-COF rules at the same time:

- Partial runs are possible
- Consistent results on every test run
- Configuration is unneeded before run
- Order of tests does not matter
- Fast run time”
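To make the rules concrete, here is a minimal sketch (in Python, with hypothetical names like PriceCalculator and RateProvider that are mine, not from the article) of one common way to satisfy them: hide an external dependency behind an abstraction and inject a stub in the test, so the test needs no configuration, gives consistent results, runs fast, and does not depend on the order of other tests.

```python
class RateProvider:
    """Abstraction over an external source of exchange rates
    (in production this might hit a web service or a config file)."""
    def get_rate(self, currency: str) -> float:
        raise NotImplementedError

class PriceCalculator:
    def __init__(self, rates: RateProvider):
        # The dependency is injected rather than hard-coded,
        # so a test can substitute its own implementation.
        self._rates = rates

    def in_currency(self, amount_usd: float, currency: str) -> float:
        return amount_usd * self._rates.get_rate(currency)

# In a unit test, a stub replaces the real provider: no network,
# no setup files, and a deterministic result on every run.
class StubRates(RateProvider):
    def get_rate(self, currency: str) -> float:
        return 2.0  # fixed rate keeps the test result consistent

def test_conversion_uses_provided_rate():
    calc = PriceCalculator(StubRates())
    assert calc.in_currency(10.0, "EUR") == 20.0
```

Because the test touches no shared state and no external resources, it can run alone (partial runs), in any order, and in milliseconds.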

Read the rest of the article here to learn about each of the five rules outlined above.