Businesses, especially software-centric ones, have dramatically increased the velocity of their operations. Software development itself has gone from biannual version updates to weekly releases under the Agile methodology. Managing this eye-watering rate of change has made monitoring Key Performance Indicators (KPIs) a necessity for every business function that wants to keep its processes under control.
The principal concept here is the indicator: a value that may mean little in and of itself but correlates directly with the operation of an essential business function. As the indicator's value changes, it signals the stability and direction of the process. In software development, these indicators are called metrics because they ascribe a measurable value to an aspect of the process that is subjective enough to resist simple, direct numeric evaluation.
Software Quality Metric Controversy
The controversial aspect of metrics is their empirical nature. The metric itself may measure something as apparently irrelevant as the number of lines of code in a module. As long as the module works, who cares how many lines of code it took to create it?
Yet, by tracking ease of maintenance (hours to fix a bug), the incidence of code defects, and their effects on the overall system, it is possible to see a direct correlation between how large a module is and how defect-prone it will be. That correlation makes lines of code per module a useful metric.
Software Quality Metrics You Should Be Tracking
Their sheer usefulness has made metrics indispensable as both quality and business management tools. In software development, metrics tend to fall into one of two groups. Detailed code development metrics are direct measurements of the system code itself and provide valuable insight into the design practices of the development group. System metrics are more likely to interest management, as they indicate how well the overall process converts work hours into revenue-generating products.
The following will address some important system metrics and leave detailed development metrics for another post.
Agile sprints operate on the basis of 'stories': short descriptions of code modifications, written from the standpoint of the effect the described change should have on system operation. Each story is parsed into a code development effort and a QA effort to verify that the story's intention was accomplished without disrupting other system facilities. It is also given an estimate of the resources it will consume, which feeds into the team's 'velocity'.
A vital measure of an Agile SCRUM’s effectiveness is its ability to estimate these required efforts and complete its self-assigned tasks. Stories completed as a percentage of stories attempted in each sprint is a primary metric for this measurement. Better than a simple velocity number, this will tell how well the sprint was organized in terms of taking on those stories that could be accomplished and were actually completed in the assigned time frame.
Management can use the completion percentage to detect bottlenecks in the overall mixture of the development and quality efforts. It points directly to the granularity of stories and the effectiveness of the group working on them.
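The completion-percentage metric described above is simple arithmetic over sprint records. The following is a minimal sketch of how it might be computed; the `Sprint` record and its field names are illustrative assumptions, not a reference to any particular tracking tool:

```python
from dataclasses import dataclass

@dataclass
class Sprint:
    """Hypothetical sprint record; field names are illustrative."""
    stories_attempted: int
    stories_completed: int

def completion_rate(sprint: Sprint) -> float:
    """Stories completed as a percentage of stories attempted in the sprint."""
    if sprint.stories_attempted == 0:
        return 0.0
    return 100.0 * sprint.stories_completed / sprint.stories_attempted
```

Tracking this percentage sprint over sprint, rather than in isolation, is what exposes whether story granularity and team organization are improving or degrading.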
Test completion steps back to a broader perspective than story completion in that it looks at the process of verifying the system’s changes and overall operation. Successful test completion means that the code changes are being verified and that the verification process itself is working to validate those changes.
Here management is looking to determine how well the system is faring over time. Are the feature additions and defect corrections proceeding quickly, and are they working as desired? A warning sign of excessive sprint velocity is a falling test completion rate and/or a rise in the number of test failures. Of particular concern is test case abandonment, which indicates that the quality process isn't keeping up with development.
Microsoft coined the term 'escape rate'. It refers to the number of bugs found in each release after it goes to production. This is one of the most critical measurements of a software organization because it speaks directly to the efficiency of both the development and quality efforts.
Development must be both rapid and methodical to keep up with Agile release rates. Quality has to stay on top of verifying every code addition and change to keep product and company market reputations intact. A rising escape rate is a blaring warning that one or both of these operations needs immediate management attention.
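One common way to normalize the escape rate is as a percentage of all defects found for a release, split into those caught before release and those reported from production. This is a sketch under that assumption; the function name, signature, and sample release data are all illustrative:

```python
def escape_rate(escaped: int, caught_in_qa: int) -> float:
    """Escaped defects as a percentage of all defects found for a release.

    escaped      - bugs reported after the release went to production
    caught_in_qa - bugs found by testing before release
    """
    total = escaped + caught_in_qa
    if total == 0:
        return 0.0
    return 100.0 * escaped / total

# The trend across releases is the warning sign, not any single value.
sample_releases = {"1.0": (3, 27), "1.1": (5, 25), "1.2": (9, 21)}
for version, (escaped, caught) in sample_releases.items():
    print(version, escape_rate(escaped, caught))
```

A rising trend in this figure is the "blaring warning" described above: defects are being created faster than QA can intercept them.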
User satisfaction ratings are right next to the escape rate in their importance to managing software development. Escaped defects are sometimes simply ignored by users because the value provided by the product outshines the detriment of having to work around a system flaw. By contrast, some of the most trivial defects will annoy the user base to the point of mass abandonment of the product in which they occur.
Management needs to monitor user reviews carefully, both for overall satisfaction levels and for specific complaints. User reviews are the most effective way of spotting a management-beloved 'feature' that users consider an annoyance or even a defect.
Align Quality Assurance Metrics with Business Goals
Measuring everything can become an obsession, with a detrimental effect on both development and QA. Choose metrics by directly observing their relevance to your business process, and choose at least four or five to guard against variations in how well any one of them fits your operations. A metric that works only some of the time is worse than none at all. Above all, be transparent with the working staff about what is being measured and why. When they see the positive results gained from these insights, they will begin working toward goals that improve what these metrics measure.