Thursday, May 5, 2011

A Strategy for Risk-Based Testing

In the ten years that I have worked in the software industry, I have come to realize that there is a widespread, but essentially wrong, belief amongst most software companies that software quality is a function of testing. The fact that you test an application extensively does not render it more stable, nor does it provide added value to the customer. How many times have you heard, "This software is not stable enough; we should have had more QA"? Unfortunately, testing has nothing to do with stability. You test to ensure that the software functions as initially requested by those who dictate the requirements. Therefore, not testing software increases the risk that the software does not comply with the requirements and will not provide the expected business value.

This leads us to risk-based testing, which I would describe using Steve Wakeland's definition of IT risk: "the likelihood that a program fault will result in an impact on the business". Further, I would specify that the impact is on the requirements. Let me explain.

Like most people, I classify risks with a severity ranking, and I define severity by measuring the impact a requirement has on the business. The severity ranking is usually high, medium, or low. High means the customer definitely requires the feature and cannot work without it being delivered; medium means the user can fall back on a workaround and still reach the intended goal; low means the customer can still work with the software without using the feature at all. (Feel free to specify additional levels for a more granular ranking.)
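The severity ranking above can be expressed directly in code. The sketch below is a minimal illustration; the requirement names and the numeric weights attached to each level are hypothetical, not part of any standard.

```python
from enum import Enum

class Severity(Enum):
    """Business impact if the requirement is not met."""
    HIGH = 3    # customer cannot work without it
    MEDIUM = 2  # a workaround exists
    LOW = 1     # customer can still work without it

# Hypothetical requirements and their severity rankings
requirements = {
    "user login": Severity.HIGH,
    "export to PDF": Severity.MEDIUM,
    "custom themes": Severity.LOW,
}

# Sort so the highest-impact requirements come first
ranked = sorted(requirements, key=lambda r: requirements[r].value, reverse=True)
print(ranked)  # ['user login', 'export to PDF', 'custom themes']
```

Adding a fourth or fifth enum member is all it takes to make the ranking more granular, as suggested above.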

Thus, testing and risk (as defined by Steve Wakeland) are related. How? Is it a straight line, as shown in Figure 1? In that case, the only strategy is: test anything; it is sure to lower the risk.

Step 1: Identify the 'vital' requirements, those that matter most to the customer. A good approach is to break the requirements down into a set of scenarios and assess how feasible each one is to execute; whatever that assessment, the entire set should still be considered 'vital'. Since risk increases with frequency of use, look at the features a user exercises most often to identify the riskiest ones.
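One way to apply Step 1 is to weight each scenario's severity by its frequency of use, so that a high-impact, heavily used feature rises to the top. This is a sketch under my own assumptions: the scenarios, severity values (1-3), and daily-usage counts are invented for illustration.

```python
# Hypothetical scenarios: (name, severity 1-3, uses per day)
scenarios = [
    ("log in", 3, 500),
    ("generate monthly report", 3, 10),
    ("change avatar", 1, 20),
]

# A simple risk score: business impact weighted by frequency of use.
by_risk = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)
for name, severity, freq in by_risk:
    print(f"{name}: risk score {severity * freq}")
```

Here "log in" scores far above the monthly report despite equal severity, because it is used fifty times as often, which matches the intuition that frequency drives risk.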

Step 2: Design test cases and assign them to each functionality listed in Step 1.

Step 3: Size (in hours or minutes) the QA effort required to run the test cases identified in Step 2.

Step 4: Sort test cases in ascending order of effort so you have the test case with the minimum effort first. 

Step 5: Start running test cases in the order established in Step 4 until you run out of time.  
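Steps 3 through 5 amount to a simple greedy plan: sort the test cases by estimated effort and execute them until the time budget is spent. The sketch below assumes effort estimates in minutes; the test-case names and the 70-minute budget are hypothetical.

```python
def plan_test_run(test_cases, budget_minutes):
    """Steps 4-5: sort test cases by effort (ascending) and keep
    selecting them until the QA time budget runs out."""
    planned, used = [], 0
    for name, effort in sorted(test_cases, key=lambda tc: tc[1]):
        if used + effort > budget_minutes:
            break  # everything after this point is even costlier
        planned.append(name)
        used += effort
    return planned

# Hypothetical test cases: (name, effort in minutes)
cases = [("report totals", 60), ("smoke: login", 15), ("PDF layout", 45)]
print(plan_test_run(cases, budget_minutes=70))  # ['smoke: login', 'PDF layout']
```

Because the list is sorted ascending, stopping at the first case that does not fit is safe: every remaining case costs at least as much. This ordering maximizes the number of test cases executed within the available time.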

Further Reading
Steve Wakeland: Testing as a component of an organizational IT risk management strategy. Cutter IT Journal, August 2002

About the Author
Stephane Besson has explored many aspects of the IT world. As a consultant, he developed AutoCAD-based applications, as well as managed software implementation projects for companies throughout Quebec. As Research and Development director for Karat Software (now Freeborders), he applied XP and Scrum practices to manage J2EE/Oracle projects, before implementing the Rational Unified Process. For the past two years, Stephane has been leading software development at Sigma-RH Solutions using Scrum methodologies for human resources solutions. Stephane Besson can be reached at
