Test Case Review

For years I wondered why we disagree with people about exploratory testing. Recently (working closely with them) I’ve realized part of their thinking: to review a test case, you have to document it. You have to document it in advance of test execution; otherwise you will run unapproved test cases. You can’t review test cases in exploratory testing because there are no test cases written in advance of execution.

I wanted to argue with them in this blog post, but it appears I’ve created a fairly complete guide (at least I believe so) to test case review: its process, goals, and options.

Alternatives
Nothing is only black or white, not even in black-box testing :) … There are more than two tests (one positive, one negative) you could run for each use case. In just the same way, there are more alternatives than just two: completely ad hoc testing and testing documented in every detail in advance of execution.

Each test can be either documented or not, and reviewed or not. And those are independent options. Moreover, you could both document and review tests either in advance of execution or afterwards – all combinations make sense in certain contexts. If you know a bit about sets and subsets, you will be able to count at least 7 types of test. Well… maybe you could even count more of them, because we may have a test case that was documented but never executed for lack of time, and tests that we completely missed.
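The counting argument above can be sketched as a small enumeration. The dimension labels below are my own, not the author’s: I treat “documented” and “reviewed” as independent options, each with three values (never, before execution, after execution). That gives 9 raw combinations; the post counts “at least 7”, presumably because a couple of combinations are not meaningful in practice.

```python
from itertools import product

# Two independent options per test, each with three possible timings.
# The value names are illustrative labels, not the author's terminology.
DOCUMENTED = ("never", "before execution", "after execution")
REVIEWED = ("never", "before execution", "after execution")

combinations = list(product(DOCUMENTED, REVIEWED))
for doc, rev in combinations:
    print(f"documented: {doc:17}  reviewed: {rev}")

print(f"{len(combinations)} raw combinations in total")
```

Which combinations you rule out (for example, reviewing a description that was never written down) is itself a judgment call, which is why the post hedges with “at least 7”.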

Once upon a time I was musing about why we write test cases. Well, mostly arguing against writing them… Back then I was unsure how to respond to this comment: “The main benefit of test cases written in advance is that they can be reviewed by test colleagues and developers. Errors in code and test cases can be prevented, and new test ideas will be generated.”

Are test cases written in advance the only way to achieve the benefits described? I don’t think so. For example, I agree that errors in code may be prevented if developers know what we are going to test, but we don’t need to document tests to let them know. We could simply tell them, couldn’t we? This is one-to-one communication (one tester and one developer), and in this case face-to-face communication is much faster, isn’t it? Or another example: new test ideas being generated. What’s wrong if they are generated afterwards (as opposed to in advance)? What’s wrong if we execute the most complicated ideas only after the simple ideas are tested? I think that is the correct order anyway…
Moreover, I don’t even think that reviewing tests in advance requires having them documented in advance. I’m used to discussing test coverage with testers face-to-face before execution – that is when I review their test ideas and suggest more tests.

Types of test case review
A “reviewed test case” is an ambiguous term anyway. I know at least two different ways/goals of reviewing test cases (and “reviewed” could mean either or both of them achieved):
a) to review (and improve/extend) the test idea/coverage
b) to review (and make less ambiguous and less prone to interpretation) the test description
See more details in my blog post about the review process.
If we know that a test case written by one person will (or may) be executed by a different person, we may want to make sure that the test description is good enough and not ambiguous. This relates to one of the top 5 issues described by Cem Kaner: writing immense documents that describe all of the details of the testing effort. We review whether all the details (we could imagine) are described. It may turn out that the author has reworked the initial tests several times but never added a single new test idea, only improved the descriptions of the initial ones. On the contrary, if the test designer and the test executor are the same person, we may even ask them to do a walkthrough to help the reviewer understand the test cases and suggest more ideas, but never to improve the descriptions so that anyone but the author could understand them.

I remember that at one time we had two levels of review: first I reviewed the test case style without any knowledge of the unit under test (type b), and only then did the development lead review the test coverage. I don’t think it was a complete waste, but it took quite an effort… and a lot of calendar time for each test case, from the moment it was first designed until it was ready for execution.

Test Case review in advance
Well, if you do a type b review (review the description, not the idea), then I completely agree that a review in advance is most reasonable (before you hand the test case over to another person for execution). For example, you don’t want the other person to run unintended tests instead of the intended ones, do you? On the other hand, you could also do it this way: ask the person to execute the tests and document the tests executed, and then have the original author review the test records to check whether what was executed is what he or she intended. The benefit is this: you will learn how big the deviation is and draw conclusions about the author’s ability to write unambiguous tests (and he or she could learn from the mistakes).
But do we need to review test ideas/coverage in advance? How much does it cost you to remove a wrong test? Nothing – you just don’t run it, right? How much does it cost to add one more test? Does it change depending on when you discover that you lack it – before test execution or during it? The only issue I can see with not having well-documented tests in advance is the difficulty of planning test execution. …

Documenting test cases in advance
Even in projects where I’m the only tester and there is no developer or manager willing to review my test cases, I still write down some test ideas in advance. They are not 100% scripts, and I never follow them to the letter. What do I do? I do exploratory testing, and when I think I’m done, I read my initial test ideas – and almost always I find that I missed at least one idea during the ET sessions (I’ve seen other testers blogging about the same experience). However, if I compare the number of test ideas initially logged to the number of tests I run in such a project, the originally documented ideas amount to something like 10–50% of all executed tests, depending on the project type.
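The habit described above – jotting down ideas first, exploring freely, then re-reading the list – can be sketched as a simple set comparison. All test names below are hypothetical examples of my own, invented purely for illustration:

```python
# Hypothetical data: ideas written down before the ET session,
# and tests actually logged during execution.
planned_ideas = {"login with expired password", "upload 0-byte file",
                 "paste unicode into search", "resize window mid-upload"}
executed_tests = {"upload 0-byte file", "paste unicode into search",
                  "drag-and-drop upload", "search with SQL fragment"}

missed = planned_ideas - executed_tests   # ideas to revisit in the next session
extra = executed_tests - planned_ideas    # ideas generated while exploring

print("missed ideas:", sorted(missed))
print("new ideas found while testing:", sorted(extra))
print(f"planned ideas as a share of all executed tests: "
      f"{len(planned_ideas & executed_tests) / len(executed_tests):.0%}")
```

The point of the exercise is the `missed` set: re-reading the original list after the session almost always turns up at least one idea the session skipped, exactly as described above.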
I’m afraid that if I were required to document all tests in advance, I would end up missing most of the defects I am able to detect otherwise. So perhaps I’m just a bad test designer – although developers tend to ask me to “do the QA testing and find the bugs that I know should be there but can’t find myself, because there is too much to test…”, and they know that I know how to choose what not to test.

Document, review, in-advance: nice to have, probably
When I buy a new car, I know I want certain features, and more of them are nice-to-have. But even nice-to-have can mean two things. For example, cruise control is nice-to-have. Yet that has changed for me over the past two years (my new car has it, and I’m used to it now). Suppose, for example, someone told me: you could buy the car with or without cruise control, and if you choose the one without, the car will be 1.5% cheaper. Two years ago I would have taken the one without; today – the one with cruise control.
I remember we had a spreadsheet for test estimation in our company. I never used it, but I remember that test cases written in advance were estimated to take 5 times more effort than test cases created with a prototype (actually an alpha build) at hand. 5 times is not 1.5% – it is a 500% cost increase. Our project managers know it is nice-to-have, but there is no way they are going to afford it.

Is the review the goal of a review?
Let me summarize. We can either document tests in advance of execution, document them post-factum (in run logs), or not document them at all. The only reason not to document them that I can imagine is to save time for other activities. Documenting them (or at least a part of them) post-factum is a powerful tool well known as Exploratory Testing. It has its strengths and weaknesses. The same goes for review: you can review documented tests or undocumented ideas, do it before or after execution, or skip the review altogether. I’ve seen projects that tried to review all tests. They all ended up running out of testing resources and poorly tested as a result. On the other hand, I’ve seen a lot of additional test ideas generated and bugs found as a result of a test review process.
Let me reuse the old agile idea of “just enough” test documentation and just enough reviews. You must find your own “just enough” and keep adjusting it all the time – there is no “best practice” for how big the just enough is!