Best Practices and System Testing

August’s Software Test & Performance magazine caused me to reflect on best practices. Matthew Heusser’s article (p.42) refers to the context-driven school of testing, saying “There are good practices in context, but there are no best practices”. While I completely agree that it is senseless to speak about a practice without its context, I believe that most people use the phrase “best practices” simply as a name for “good practices in a certain context”. It is no use looking up “best” and “practice” in a dictionary; the phrase has become a fixed expression: looking up “hot” and “dog” in a dictionary doesn’t help much with the meaning of “hot dog”. If somebody takes it literally, as the “best” practice to apply blindly, that is definitely a problem.

It is true that the context usually isn’t defined well for practices. Quite often it is impossible to define the context for a practice completely; a single practice could have millions of exceptions. So its application to a context is usually left to the reader.

Still, another article in the issue, “Raising the Curtain on System Testing” by Ambighananthan Ragavan (p.21-26), looks like exactly what Matthew and the context-driven school warn against. While there are some good points in the article, some statements startled me. Without even clear definitions (I haven’t quite grasped how Ambighananthan sees the difference between performance and concurrency testing), the author makes statements like: “comparatively speaking, you should choose the number of test cases in ratios similar to these below:
Performance 10
Reliability 7
Concurrency 4
Stress 3
Scalability 8”

I wouldn’t say something like this even for a very well-defined type of system: depending on context, the numbers can differ drastically.

Then Ambighananthan wrote: “Here’s a heretical thought: If you really want to make a product totally free from defects, you can… If we truly make an effort and maintain a never-say-die attitude, I believe that a product can be made 100 percent defect-free”. I still don’t quite understand how a person doing system (load, performance, etc.) testing can say that. Perhaps it is limited experience; perhaps he has just never worked with distributed business systems involving multiple third-party tools? Ambighananthan’s bio states that he “has been involved in NMS product testing for the past three years.” NMS probably stands for “network management system”. I have never been involved in testing such products; perhaps they are simple enough that you can define exactly how much of each type of testing is necessary… although I never thought about them this way.

Comments

It seems you have essentially responded to your own initial point. I assume that this was your intention. It is exactly people like that who have created the need to be explicit about the appropriateness of a given practice.

Despite any good intentions behind a given interpretation of 'Best Practices', there are those who will misunderstand, misuse or abuse the concept and talk of them as absolute-best-practices rather than as good-practices-in-context.

Unfortunately, in my experience, there are still too many who latch onto a 'Best Practice' like a security blanket and don't seem to know how to function or have inner peace when they perceive that others are preventing them from employing it. This is captured beautifully by "Alan Richardson's article (PDF)":http://www.compendiumdev.co.uk/context/itdepends.pdf in which he describes a 'Methodology Monster'.

I think a lot of hassle (dealing with methodology monsters) and misunderstandings can be avoided if we promote the idea of referring to a way of doing things as a good-practice-in-context.

By the way, I loved your 'hot dog' analogy... But let's say that, as a child, you had never seen or heard of a hot-dog before, though you did comprehend 'hot' and 'dog'. If someone offered you one, you would be forgiven for thinking that the frankfurter was dog-meat. As an adult, with more experience, you would discount this if you lived in a culture that doesn't eat dog-meat (thus applying context) and instead conclude that it is obviously just a name and couldn't possibly be a dead dog. This, I think, is the problem caused by the phrase 'Best Practice'.

Those who are inexperienced and aren't told that it is a good-practice-in-context may take it literally and consider it an absolute-best-practice. As their careers mature and they gain more diverse experience, one hopes that they realise it is unlikely that there is such a thing as an absolute-best-practice.

Due to the number of inexperienced people out there (and I don't just mean in terms of years - I also mean in terms of the diversity of their experience), the 'absolute' misconception permeates throughout the industry, and before you know it, we have a need for a distinction between 'Best Practice' and 'Good Practice in Context'...

They then have to unlearn what they have learned. The more years that go by believing in absolute best practices, the harder it is for people to see it any other way... As they say...

An old (hot) dog can't learn new tricks :-)

Antony Marcano

In part, the issue is with absolutists and the lack of quantification of both the practice and the context.

Yes, each context is different to the nth degree; however, part of this is addressed by the "patterns" view, where the common elements are used as a way of describing the pattern and its effectiveness within certain given conditions (the context). These patterns can then be applied to the appropriate context more easily, and with the appropriate metrics the effect of applying the stated pattern, or a delta to the stated pattern, can be measured.

There is additional danger in unthinking, stated absolutes that are not quantified in any meaningful way or demonstrated in a context. It is even worse when something is stated as "industry" best practice without the sector being stated: this is an open invitation to abuse by unthinking managers and organisations. It is this methodology-monster abuse that Alan Richardson addressed in the version of the talk I saw, more so than in the slides and supporting paper. I agree with Antony that it is an excellent talk; at this point I should declare an interest: I work with Alan and he is a friend, which I hope has not removed the objectivity of my view of the talk and its qualities.

In part the "context" approaches assist with this, however many of the voices at conference and in print are of the "absolute" view and the floor in my experience has mainly been held by the absolutists. I have not got any research into the relative numbers in Bret Pettichord's "Four (maybe five) Schools" view but from my experience the context driven school make up a minority still.

This is why I liked Brian Marick's work on test patterns, www.testing.com (now seemingly on the back burner), and some of the work being done by Vipal Kocher on Q-Patterns over at www.whatistesting.com, as approaches that assist with this change of thinking.

Can we change the absolutists? Maybe...

Neill McCarthy
"Agile Testers of the World UNIT !"

The apparent popularity of the 'absolute' view may be due to a lack of knowledge on the part of the organizers, or perhaps to a fear that anything more complex than an absolute view would scare off most of the conference-attendee market.

What does this say about our industry? (I ask that rhetorically, but feel free to elaborate)

I feel a paper coming on... "Absolutism or Context-Driven : What do you want to be?" :-)

Antony Marcano

I just saw your comments/feedback about my article in STMag.

(I haven’t quite realized how Ambighananthan sees the difference between performance and concurrency testing)
....
Performance and concurrency testing are carried out for two different purposes. In concurrency testing, what we do is test whether "N similar processes" are able to work together at the same time, or whether "N different processes" are able to work together at the same time. Here we don't measure performance, because the bottom line is to make sure N processes can be executed at the same time. In performance testing, we just run one process and try to see how it executes with respect to performance and functionality. They are definitely two different worlds, with some overlaps.
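The distinction can be sketched in code. Below is a minimal, illustrative Python example (the `operation` function, the thread count, and the time budget are hypothetical stand-ins, not from the article): the performance check times a single run against a budget, while the concurrency check only asks whether N simultaneous runs all complete.

```python
import threading
import time

def operation():
    """Hypothetical stand-in for the system action under test."""
    time.sleep(0.01)  # simulate a small amount of work
    return True

def performance_check(max_seconds=0.5):
    """Performance: run ONE process and measure how fast it executes."""
    start = time.perf_counter()
    ok = operation()
    elapsed = time.perf_counter() - start
    return ok and elapsed <= max_seconds

def concurrency_check(n=8):
    """Concurrency: run N similar processes at the same time.
    The question is only whether they all complete, not how fast."""
    results = []
    lock = threading.Lock()

    def worker():
        ok = operation()
        with lock:  # protect the shared list
            results.append(ok)

    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(results) == n and all(results)

print(performance_check())  # one run, timed against a budget
print(concurrency_check())  # N runs together, pass/fail only
```

Note how the two checks could both pass or both fail independently: a single run may be fast while shared state breaks under N simultaneous runs, which is the overlap-with-a-difference the comment describes.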

...Another of your comments concerns the ratios of test cases to be chosen. Though these are general statistics, I agree with you: they might vary according to the product.

About 100% defect-free software: I think that should be the attitude of all test engineers with respect to software and quality. Unless we believe in that, the software industry is going to be in a big mess after a period of time. I'm writing a book called "Genetically Modified Software" where I talk about how to achieve high-quality software with very few defects and how to reduce the software development cycle.

... perhaps they are simple enough and...
Btw, the product I worked on is Resource Manager Essentials 4.0; to get a sense of the magnitude of the product, just search on Google. It is one of the leading NMS products from Cisco on the market.