Short and long game thinking, tests driving design and CRAP metrics
Kent Beck recently posted To Test or Not to Test? That's a Good Question on the complex "theory versus practice" issue of always automating tests, in which he states, "Then a cult of [agile] dogmatism sprang up around testing: if you can conceivably write a test you must". By classifying projects into long game and short game, he argues that ROI becomes a major factor in whether a test stays manual. He says, "Not writing the test for the second defect gave me time to try a new feature", but several commenters pointed out that this is a technical-debt tradeoff, and Guilherme Chapiewski noted he had done the same thing with a Proof of Concept that went live, only to have to rewrite major chunks of it later.
It is interesting that this ROI discussion echoes the experiences of the pre-agile functional automation community. Back in November 2001 (Wow! Long time ago!!), I posted some considerations for not automating to the Agile Testing list. While many of those came from a context of separate development and test-automation teams, with the automaters using expensive test tools, the risks of incomplete automation and insufficient ROI dominate. The benefits of having the same people develop both the code and the tests are great, and were beyond my experience when I wrote that post.
I think the ROI issue for code-based tests will go away over time. Much of the creation of code-based tests is mechanical. Just as programming languages replaced assembler and took care of fiddly details (which registers to use, low-level comparisons, etc.), and build utilities replaced simple text-file include statements, I think it will soon be standard practice for tool-created unit testing to handle mocking, dependency injection and assert-based testing. Mocking was originally very manual; then tools were developed. Dependency injection was very manual; then tools were developed. For assert-based testing, we've already seen Agitar's tools, zentest and now pex, amongst others. I think these tools will become standard, just as coverage tools are now standard in IDEs when they were originally luxuries costing tens of thousands of dollars. Another variation of this is tools like Celerity, recently blogged about by Jeffrey Frederick. Celerity is a fast way to run GUI web tests, but it could be produced by mechanical translation rather than by hand: some meta-language could generate Celerity tests and selected browser tests in a single step.
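To make the manual-versus-tooled contrast concrete, here is a minimal Python sketch (my own illustration, not from the post): the same hypothetical collaborator is stubbed first with a hand-rolled fake, then with the standard-library mocking tool doing that mechanical work for us.

```python
from unittest import mock


class PaymentGateway:  # hypothetical collaborator with an expensive real implementation
    def charge(self, amount):
        raise NotImplementedError("talks to a real payment service")


def checkout(gateway, amount):
    # Code under test: depends on the gateway, not on any concrete class.
    return "ok" if gateway.charge(amount) else "declined"


# 1. The manual way: hand-write a fake class for every test scenario.
class FakeGateway:
    def charge(self, amount):
        return True


assert checkout(FakeGateway(), 10) == "ok"

# 2. The tooled way: the mocking library generates the stand-in,
#    records calls, and lets us verify the interaction afterwards.
gateway = mock.Mock()
gateway.charge.return_value = True
assert checkout(gateway, 10) == "ok"
gateway.charge.assert_called_once_with(10)
```

The second form is exactly the kind of mechanical work the paragraph above describes: what once required writing a fake class per scenario is now a two-line tool invocation.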
Mechanically generated tests are cheap to produce and overcome the ROI issues. However, they only reflect the current code; the benefit of test design infusing the coding approach is missing. If tests are not being automated, for whatever reason, some analysis of the refactoring risk should be done, at least to know where and what the error-prone code is. One way of doing this is the Agitar-created CRAP metric, which Bob Martin recently blogged about as a way to keep design clean. While I currently believe all code should be created test first wherever possible, techniques like the CRAP metric can highlight the complicated bits for refactoring where possible. While it may be a great intellectual challenge, there is no need to refactor a complex industry-standard algorithm. [Aside: is there an inherent advantage to doing test-first design all the time? Perhaps, just as renaissance masters painted and sculpted only the hands and faces and left the rest to their workshop staff, we only need to focus on core functions for test first and do the rest test last?]
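For readers who haven't seen it, the CRAP (Change Risk Anti-Patterns) score as published by Agitar combines a method's cyclomatic complexity with its test coverage, punishing complex, untested code sharply. A minimal Python sketch of the published formula:

```python
def crap(comp, cov):
    """CRAP score for a method.

    comp: cyclomatic complexity of the method.
    cov:  test coverage of the method, as a percentage (0-100).
    """
    return comp ** 2 * (1 - cov / 100.0) ** 3 + comp


# Fully covered code scores only its complexity...
assert crap(5, 100) == 5
# ...while the same complexity with zero coverage is penalised heavily.
assert crap(5, 0) == 30
```

Note how the cubic coverage term means a simple, well-tested method stays cheap to change, while complexity with no tests dominates the score, which is exactly why the metric is useful for flagging the "complicated bits" mentioned above.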
As Kent says, "By insisting that I always write tests I learned that I can test pretty much anything given enough time." Time is often a rare commodity, so Kent argues compromises are often needed in short-game projects. As Ron Jeffries said in a comment on Kent's post, "My long experience suggests that there is a sort of knee in the curve of impact for short-game-focused decisions. Make too many and suddenly reliability and the ability to progress drop substantially." I hope that advances in the mechanical generation of tests don't push us into a short-game perspective, crowding out hand-crafted tests as a way to drive design. At the same time, metrics that can be run as part of the build to highlight areas for refactoring are proving valuable on all projects (and I'm looking forward to state coverage). By any measure, these are interesting times we live in. Long live long game thinking!