The background of this blog post is complex enough that I don't want to waste my time explaining it, or yours reading it. Anyway, I wanted to organize the meta-thinking about testing that I do in my current project.
V&V, which stands for validation and verification. We look for gaps and inadequacies between the different project documents and the code. We care about traceability and the like. Too often, items promised early in a project simply get lost along the way. Note: on agile projects following TDD, I believe this value is dubious.
Make sure that it works not only on developers' machines. We test in a clean environment, and we test with different software configurations. It is simply something that developers don't like doing. Besides, with experience we become brilliant at guessing which features may be broken in which configurations, and we limit our effort to those.
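To make that last point concrete, here is a minimal sketch of the idea of narrowing a configuration matrix to the combinations we guess are risky. All names here (the configuration axes, the `feature_works` check, the "risky" pairs) are hypothetical stand-ins, not anything from a real project:

```python
from itertools import product

# Hypothetical configuration axes; a real project would take these
# from its supported-platforms list.
browsers = ["firefox", "chrome"]
locales = ["en_US", "de_DE"]

def feature_works(browser, locale):
    # Stand-in for an actual manual or automated check. As a toy
    # rule, assume the feature breaks only in one combination.
    return not (browser == "firefox" and locale == "de_DE")

full_matrix = list(product(browsers, locales))  # 4 combinations

# Experience-based guess: limit effort to the combinations most
# likely to break, instead of running the full matrix.
risky = [("firefox", "de_DE"), ("chrome", "en_US")]
results = {combo: feature_works(*combo) for combo in risky}
print(results)
```

The interesting part is not the code but the ratio: two checks instead of four here, and the gap grows quickly as axes are added.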
We ask "what if" questions. We are brilliant at considering: technically challenging cases that developers miss implementing support for; system-logic challenging cases that analysts miss; and business-value challenging cases that customers miss. No, we are not better than them at guessing challenges; we simply guess different types of challenges. And by the way, we ask a lot of questions up front, but even more while testing (Exploratory Testing).
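A small sketch of what those "what if" questions look like in practice. The function under test, `apply_discount`, is entirely hypothetical; the point is the kinds of cases a tester probes beyond the happy path:

```python
# Hypothetical function under test: the developer implemented the
# happy path; the tester asks the "what if" questions.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# Technically challenging cases: boundaries.
assert apply_discount(100.0, 0) == 100.0    # what if there is no discount?
assert apply_discount(100.0, 100) == 0.0    # what if everything is free?

# Business-value challenging case: what if a discount is negative,
# silently raising the price?
try:
    apply_discount(100.0, -10)
except ValueError:
    pass
else:
    raise AssertionError("negative discount was accepted")

print("what-if cases pass")
```

Each assertion encodes one question; whether the answers match the business's intent is exactly the feedback a tester brings back.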
Not only do we find bugs (as a result of the above), we provide feedback: usability feedback, missing features, etc. We try to guess how the customer will perceive the final product. We can't predict it perfectly, but we can provide feedback faster than the customer, and (based on our experience at this kind of guessing) we can spot now the things the customer would only discover weeks or months after go-live.
We investigate failure reasons. At least where I work, developers do quite good unit testing, so if a button does not work, it means it does not work on my machine, with my data, in my scenario. Most probably it worked on the developer's machine. Not to mention non-repeatable bugs (ones that happen only seldom even on my machine). So it is my job to figure out the difference between the cases where it happens and the cases where it does not. That is sometimes quite a challenge, and I love challenges.
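One way I think about that investigation is as a diff between the context of a failing run and a passing run. This is a minimal sketch with invented example data (the keys and values are illustrative, not from any real bug report):

```python
# Record the context of one failing and one passing run of the same
# scenario, then diff them to see what actually varies.
failing_run = {"os": "Windows", "locale": "de_DE", "row_count": 0}
passing_run = {"os": "Windows", "locale": "en_US", "row_count": 12}

differences = {
    key: {"passing": passing_run[key], "failing": failing_run[key]}
    for key in passing_run
    if passing_run[key] != failing_run[key]
}
print(differences)
```

Here the diff narrows the suspects from "anything" to the locale and the data volume; the next repro attempts then vary only those two factors.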
So what follows is a context-specific, subjective (based on my personal experience) list of the types of value we testers provide to a project while testing code (I choose to limit my thinking to so-called dynamic testing and not include tasks such as review):
Providing feedback vs. finding mistakes
I was pleasantly surprised when the development lead replied to my test report with the words "thanks for your feedback". Test report = feedback?!
So I think to myself: I'm happy that my services are perceived as "providing feedback" rather than "finding errors in software", a phrasing I can still find in some definitions of testing.