Defect severity and priority

Questions frequently arise about how to evaluate defect priority and severity, and how to differentiate between the two values. I would like to explain my view, based on best practices in our company and supported by the IEEE standard and some recent publications by recognized experts in the testing field.
Issue
I chose the IEEE standard because it addresses the issue of evaluating different properties of a defect. CMM, for example, talks only about severity and type, whereas IEEE Std 1044-1993 considers many different values for an anomaly. It requires that “The impact of an anomaly shall be considered at each step of the anomaly process”. However, I will only talk about the initial evaluation by the tester. Further, IEEE states that “Identifying the Severity of an anomaly is a mandatory category as is identifying the Project schedule and Project cost impacts of any possible solutions for the anomaly.” The standard only suggests that priority be evaluated: “Additional categories that may be useful are Customer value and Priority.”
IEEE sees Priority and Customer Value as almost the same, even suggesting in the Annex that “Generally, this will be the same as the Customer Value”. So priority is clear: it is an optional property describing the customer value or business value to the company. Now for severity. The Annex suggests that severity stands for the impact “on the program operation.” The Project schedule and Project cost properties evaluate the time and money “to address a defect or enhancement”. I would like to stress the use of the term address, which implies that development cost should be extended with any other cost, including testing and the risk of regression.
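To make the split between the mandatory and optional categories concrete, here is a minimal sketch of an anomaly record along these lines. The class names, field names, and value scales are my own illustration, not taken from the standard:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Severity(Enum):
    """Impact on program operation (illustrative scale)."""
    CRITICAL = 4
    HIGH = 3
    MEDIUM = 2
    LOW = 1

class Priority(Enum):
    """Optional: business/customer value of fixing the anomaly."""
    URGENT = 3
    NORMAL = 2
    DEFERRED = 1

@dataclass
class Anomaly:
    summary: str
    severity: Severity                            # mandatory category
    priority: Optional[Priority] = None           # optional; often equals customer value
    schedule_impact_days: Optional[float] = None  # better evaluated by developers
    cost_impact: Optional[float] = None           # better evaluated by developers

# The tester fills in only what they can naturally evaluate:
bug = Anomaly(summary="Report totals wrong after filter change",
              severity=Severity.HIGH)
```

Note that the tester leaves priority, schedule, and cost unset; those fields model the properties the post argues belong to others.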
Solution path
A test engineer can naturally evaluate severity: the impact on system operation. It is not so simple with schedule and cost. A test engineer can't (and shouldn't) evaluate development cost and time. What a test engineer can evaluate is the risk of regression, and of other types of defects found upon addressing this one. Moreover, if a tester were able to evaluate such a risk perfectly, wouldn't that be the most natural way to evaluate the priority for a developer to fix the problems? However, most testers will say that they are unable to forecast the number and size of defects that developers will introduce while addressing the defect.
If you look at the severity values/explanations suggested by IEEE (in the samples), you will find that severity maps exactly to hidden defects to be found later (if we skip the sentence about unrecoverable data loss). Isn't this great? I suggest caring only about other types of defects to be found later when evaluating project cost and schedule.
Everyone knows there is a risk of regression defects. The regression risk for a single defect fix depends on the development activities required to address the issue (do we change core or standalone functionality, how many modules are involved, etc.). So this, again, is what developers can recognize best.
I believe we have only one more type of defect to be detected when a defect is fixed. I call them “blinking defects”. The idea is not so new. Let's see what the authors of "Lessons Learned in Software Testing" say about defects to be found later. Lesson 186: “Never budget for just two testing cycles.” Besides describing regression and hidden bugs, as well as bugs fixed incorrectly, the authors point out as the first item: “As you learn more about the product you will think of new and better tests”. If it were that simple, we could simply repeat the first cycle before all defects are fixed and avoid finding those types of defects later. I would like to re-state this idea as “As the product becomes functionally stable (quality), you will think of new and better tests”.
The issue is that you can't evaluate the product's stability by evaluating a single defect/anomaly. I will address this final issue in my future post, “run-log evaluation”.
Conclusions
A tester should naturally evaluate only the severity of a defect, in terms of its impact on system operation. A tester may suggest other values, such as priority in terms of business value (or even evaluate them, when the tester is wearing other hats in the project, such as business analyst, system architect, etc.).
Optionally, a tester could set a property describing the impact on test data, the test environment, or even the company environment (e.g. impact on the local network), along with the probability of a tester accidentally repeating the problem (assuming they have been informed how to avoid it). However, such issues seem exceptional, and such defects could instead be processed/escalated without a supporting property that evaluates this option.
Additional considerations (next post ideas)
Additionally, a tester should evaluate the risk of what I call “blinking defects” and suggest a strategy for further prioritization of defect fixes based on that. However, this is not a value of a single defect/anomaly. “Blinking defects” are a property of the set of anomalies detected in a single test case/test procedure or functional unit.
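Since this is a set-level property, one way to surface candidates would be to group anomalies by functional unit and flag the units whose defects keep coming back. A minimal sketch, where the field names and the reopen-count heuristic are my own illustration rather than a defined metric:

```python
from collections import defaultdict

def blinking_candidates(anomalies, min_reopens=2):
    """Group anomaly records by functional unit and flag units where
    a defect was reopened repeatedly -- a possible 'blinking' signal.

    Each record is a dict with 'unit' and 'reopen_count' keys
    (illustrative field names).
    """
    by_unit = defaultdict(list)
    for a in anomalies:
        by_unit[a["unit"]].append(a)
    # A unit is a candidate if any of its anomalies kept coming back.
    return {unit: records
            for unit, records in by_unit.items()
            if any(r["reopen_count"] >= min_reopens for r in records)}

# Example: the 'reporting' unit has a defect that was reopened twice.
log = [
    {"unit": "reporting", "id": 1, "reopen_count": 2},
    {"unit": "reporting", "id": 2, "reopen_count": 0},
    {"unit": "login", "id": 3, "reopen_count": 0},
]
flagged = blinking_candidates(log)
```

The key point the sketch makes is that the signal belongs to the unit, not to any single anomaly record within it.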

Comments

Ainars - welcome, it is good to see my not so subtle plug over on QAForums worked :-) . Three people converted to the world of blogging on Antony Marcano's, our ever generous host, site...alright!
Good to see you have defined the context, and I look forward to reading more of your thoughts; the conversations we had at ICS Test in Germany earlier in the year were enlightening.

Neill McCarthy
"Agile Testers of the World, UNITE!"

Neill,
Thank you. It is in large part thanks to your support that I post my ideas here and on QAForums. I have some ideas ready for posting here, so I will post several of them this week. And I want to share my latest project experience, which astonished even me.