In the conversation about quality, one aspect that is too often overlooked is the quality of the quality effort itself. How does QA know that it is doing a good job of managing its own efforts?
Assessment of software quality commonly relies on quality assurance metrics that quantify specific attributes of the code and its structure. The quantified presence or absence of specific measurable aspects of a body of code is a good indicator of how well it will perform and how likely it is to manifest defects.
The same approach can be applied to the QA function itself, to assess whether its processes are likely to contribute to identifying both code and usage defects. Three areas of particular interest are test case design, test plan management, and defect tracking. The following metrics have proven useful to a number of QA organizations.
Test Case Design
Test case precision is one of the most important aspects of a software quality effort. Of the overall test case pool, what percentage of tests are documented/designed with:
- Exact description of the feature/bug that is the object of the test
- Granularity constraining focus to a maximum of three measurable parameters
- Sufficient test process description to ensure accurate, repeatable use
- Clear go/no-go test success conditions
Score each of the above as yes = 1, no = 0, and you can rate the quality of your test cases fairly quickly. A more differentiated scale (1 to 10, for instance) provides a more nuanced assessment.
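The binary scoring described above can be sketched in a few lines. This is an illustrative example, not a prescribed tool; the criterion names and the dictionary representation of a test case are assumptions made for the sketch.

```python
# Hypothetical sketch: score a test case against the four criteria above.
# Criterion names and the dict representation are illustrative only.
CRITERIA = (
    "exact_description",    # exact description of the feature/bug under test
    "bounded_granularity",  # at most three measurable parameters
    "repeatable_steps",     # process described well enough to repeat exactly
    "clear_pass_fail",      # unambiguous go/no-go success condition
)

def score_test_case(case: dict) -> int:
    """Return 0-4: one point per criterion the test case satisfies."""
    return sum(1 for c in CRITERIA if case.get(c, False))

example = {
    "exact_description": True,
    "bounded_granularity": True,
    "repeatable_steps": False,   # steps too vague to repeat reliably
    "clear_pass_fail": True,
}
print(score_test_case(example))  # 3 of 4 criteria met
```

Swapping the booleans for 1-to-10 ratings and averaging instead of summing gives the more nuanced variant with no change in structure.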
Test Plan Management
Test plans are the road maps by which the test process is conducted. Elemental aspects of the test plan are:
- Feature/specification coverage of each functional aspect of the system
- Organization and grouping of test cases to reflect identifiable segments of the system code
- Sequencing tests by functional interdependence so that prior tests create known test conditions for later tests
- Tracking version management of the plan to the version management of the system under test
These can be rated and assessed in the same way as test cases.
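The first attribute, feature coverage, lends itself to direct measurement as well as rating. The sketch below computes what fraction of a feature list is touched by a plan's test cases; the feature names and plan structure are invented for illustration.

```python
# Hypothetical sketch: measure feature coverage of a test plan.
# Feature names and the plan's shape are illustrative assumptions.
features = {"login", "search", "checkout", "reporting"}

plan = [
    {"id": "TC-001", "feature": "login"},
    {"id": "TC-002", "feature": "search"},
    {"id": "TC-003", "feature": "search"},  # two cases may cover one feature
]

covered = {tc["feature"] for tc in plan}
coverage = len(covered & features) / len(features)
uncovered = sorted(features - covered)

print(f"Feature coverage: {coverage:.0%}")  # 2 of 4 features -> 50%
print("Uncovered features:", uncovered)
```

A gap list like `uncovered` is often more actionable than the percentage itself, since it tells the team exactly which test cases to write next.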
Defect Tracking
Defects are managed through defect tracking, and managing them is the mirror image of managing the test plan. Some useful attributes for measuring such processes are:
- Clear version/release identification of the system under test and the test performed
- A list of steps to reproduce the defect including any setup conditions
- A detailed description of the defect’s symptoms and how they affect system operation
- A clearly defined and understood rating system for defect impact and fix priority
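One way to make these attributes measurable is to model the defect record itself and check it for completeness before it enters the tracker. The record shape, field names, and 1-to-5 scales below are assumptions for the sketch, not a standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a defect record carrying the four attributes above.
# Field names and the 1-5 severity/priority scales are illustrative.
@dataclass
class DefectReport:
    system_version: str                          # version/release under test
    test_id: str                                 # test that exposed the defect
    repro_steps: list = field(default_factory=list)  # steps incl. setup
    symptoms: str = ""                           # observed effect on operation
    severity: int = 3                            # impact: 1 cosmetic .. 5 blocking
    priority: int = 3                            # fix priority: 1 low .. 5 urgent

    def is_complete(self) -> bool:
        """A report is actionable only if every tracked attribute is filled in."""
        return bool(self.system_version and self.test_id
                    and self.repro_steps and self.symptoms)

report = DefectReport(
    "2.4.1", "TC-017",
    repro_steps=["open settings", "toggle dark mode"],
    symptoms="UI freezes for several seconds",
)
print(report.is_complete())  # True
```

The fraction of incoming reports for which `is_complete()` holds is itself a useful metric for the quality of the defect-tracking process.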
The most effective use of these quality assurance metrics is to make their collection and improvement a major aspect of someone’s job description.