[Article] Achieving And Recognizing Testable Software Designs – Part I
Recently, I had the pleasure of speaking at a Microsoft Dev/IT Pro Days conference in Belgium. The organizers approached me and asked if I would do a session on "Designing for Testability" as one of the three talks I was to give there.
The topic was not something I had spoken about before, but it was definitely something I had thought about and wrestled with many times, on many projects.
I set out first to determine what, in my eyes, the definition of a "testable system" might be. I came to the realization that a system's testability is not measured in a vacuum; it has to be mirrored through external, testing-related factors. For example, how easy would it be to write quality unit tests against such a system? And to answer that question, one has to ask what "quality unit tests" really means in this context. In this article we'll try to define what a testable system design really means, and explore some basic design rules that help keep that testability in the system from the beginning.
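To make that concrete, here is a minimal sketch of one such design rule: letting a class receive its dependencies through its constructor, so that a test can substitute a fake. The class and interface names are illustrative, not from the article, and Java is assumed purely for the example.

```java
// A minimal sketch of one common design-for-testability rule:
// depend on an abstraction that a test can replace with a fake.
// All names here are illustrative, not from the article.

interface ExtensionManager {
    boolean isValid(String fileName);
}

// A production implementation might read its rules from a config file.
class FileExtensionManager implements ExtensionManager {
    public boolean isValid(String fileName) {
        return fileName != null && fileName.endsWith(".sln");
    }
}

// The class under test receives its dependency through the constructor,
// so a unit test never needs to touch the file system or configuration.
class LogAnalyzer {
    private final ExtensionManager manager;

    LogAnalyzer(ExtensionManager manager) {
        this.manager = manager;
    }

    boolean isValidLogFileName(String fileName) {
        return manager.isValid(fileName);
    }
}
```

A unit test can now construct LogAnalyzer with a stub that always answers true or false, which keeps the test fast, deterministic, and free of any external setup.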
Here's my current definition of a testable system (a test sketch illustrating these rules follows the list):
"For each logical part of the system, a unit test can be written relatively easily and quickly that satisfies all the following PC-COF rules at the same time:
Partial runs are possible
Consistent results on every test run
Configuration is unneeded before run
Order of tests does not matter
Fast run time"
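To show what these rules look like in practice, here is a sketch of a unit test that satisfies them, building on the hypothetical LogAnalyzer design above and assuming JUnit 5 as the test framework:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// A sketch of a test that satisfies the PC-COF rules, assuming the
// illustrative LogAnalyzer/ExtensionManager design above (my names,
// not the article's) and JUnit 5 as the test framework.
class LogAnalyzerTest {

    // A hand-rolled stub keeps the test free of files, databases, and
    // configuration ("Configuration is unneeded"), and makes its result
    // deterministic ("Consistent results").
    private static class AlwaysValidStub implements ExtensionManager {
        public boolean isValid(String fileName) {
            return true;
        }
    }

    @Test
    void validFileNameIsAccepted() {
        // Everything the test needs is built right here, so it can run
        // on its own ("Partial runs"), in any position ("Order does not
        // matter"), and it finishes quickly ("Fast run time").
        LogAnalyzer analyzer = new LogAnalyzer(new AlwaysValidStub());

        assertTrue(analyzer.isValidLogFileName("project.sln"));
    }
}
```

Because the test builds all of its collaborators inline and touches nothing slower than memory, it meets all five rules at once.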
Read the rest of the article here, where each of the five rules is discussed in detail.