What happens to old tests? In short, they don’t just fade away. A major challenge of test coverage management is making sure that system verification test suites don’t get clogged with redundant or, worse, obsolete tests.
Manual test operations tend to rapidly cull test cases due to the continuous review they receive from testers. But automated tests tend to raft up like old clothes stuffed into the back of a large closet.
The relatively recent advent of test automation tools that generate test scripts by recording processes and GUI activity has aggravated the problem of test script management. Though advertised as engines for quick, accurate test script generation, these tools usually produce code that must be altered before useful testing can occur.
This tends to consume exactly the maintenance resources they are intended to free up. Used extensively, test recorders can amass a great many marginally useful, inflexible test scripts in a very short period.
Take the output of an auto-generating test tool, combine it with the test scripts created specifically by code developers, then add in tests written by QA engineers to plug the gaps, and the body of test scripts can quickly get out of hand. Now salt heavily with functional changes brought about by defect fixes with undesired feature interactions, and you have a test array that rapidly goes from unmanageable to unusable.
Automated Test Script Maintenance Is Crucial
A great deal of consideration and concern is applied to the questions of which tests should be automated and which shouldn’t. Test automation works well with those functions and features that are least subject to change, but this has its downside as well. Test automation scripts written for stable code are rarely reviewed, if ever.
This makes them subject to unexpected problems when an underlying code function is changed, modifying the operation of a tested feature. The code changed, the test script didn’t, and now it complains with false positives. These erroneous flags clog the results review process and require continuous examination to confirm that the flagged errors come from stale scripts rather than real defects.
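This false-positive failure mode can be sketched in a few lines. Everything here is hypothetical: a toy "application" function whose button label was changed in a later release, a stale recorded check pinned to the old label text, and a maintained check that asserts a stable identifier instead.

```python
# Minimal sketch of a stale automated check producing a false positive.
# The "application", element ids, and labels are all hypothetical.

def render_save_button():
    # Developers renamed the button text in a later release.
    return {"id": "btn-save", "label": "Save changes"}

def stale_test():
    """Recorded script still asserts the old label text."""
    button = render_save_button()
    # The feature still works, but the assertion is pinned to obsolete
    # UI text, so the check fails: a false positive for reviewers to chase.
    return button["label"] == "Save"

def maintained_test():
    """Rewritten script asserts the stable element id instead."""
    button = render_save_button()
    return button["id"] == "btn-save"

print(stale_test())       # False: flags an "error" that isn't one
print(maintained_test())  # True: survives the cosmetic change
```

The stale script is not wrong about the old release; it is wrong about the current one, which is exactly why unreviewed scripts for "stable" features erode trust in the results.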
If this goes on long enough, the entire automated test process will be called into question. Management will begin to see test automation as an expensive waste of time and resources instead of the useful product verification tool that it is.
This is why test script maintenance must be included in the initial plans for any test automation project. The cost in resources and schedule time should be part of the plan and be wrapped into the velocity expectations for each Agile sprint as well.
Test script maintenance must be conducted from a clearly explained plan that has the buy-in of management, development and quality, a plan that keeps the script array informative and vital.
Keeping the Test Array Fresh
An effective automated test script maintenance strategy is key to preserving the validity of the test script array. Such a plan grows out of carefully documenting the inception and retirement criteria for automated as well as manual tests. This will encourage a disciplined approach to maintaining the entire system validation process by tying a thorough review of scripts for specific features to a parallel review of their manual tests.
Test Creation Criteria
- Test creation should be predicated on clear criteria for why a test is necessary and when it is to be implemented. At the other end of the process, formal retirement criteria should govern a regular audit of the existing tests. Given the constant pressure to create new tests, retirement becomes critical. A firmly grounded retirement plan keeps test suites from growing out of control.
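One way to make inception and retirement criteria auditable is to record them alongside each test and scan the inventory on a schedule. The inventory fields, test names, dates, and audit window below are illustrative assumptions, not a prescribed schema:

```python
from datetime import date

# Hypothetical test inventory with creation and retirement criteria
# recorded at inception, as the plan recommends.
TEST_INVENTORY = [
    {"name": "test_login", "created": date(2020, 3, 1),
     "retire_when": "SSO replaces local login",
     "last_review": date(2020, 3, 1)},
    {"name": "test_checkout", "created": date(2023, 6, 15),
     "retire_when": "checkout v2 ships",
     "last_review": date(2024, 1, 10)},
]

def audit_overdue(inventory, today, max_age_days=365):
    """Return tests whose last review is older than the audit window."""
    return [t["name"] for t in inventory
            if (today - t["last_review"]).days > max_age_days]

print(audit_overdue(TEST_INVENTORY, date(2024, 6, 1)))  # ['test_login']
```

The point is not the particular fields but that retirement criteria exist as data, so the audit is a query rather than an archaeology project.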
Regression Test Control
- Regression tests are another source of rapid test suite growth. Retirement of obsolete scripts will help keep regression suites under control. Understand that full regression testing and the data sets it entails may simply take longer than the development schedules permit. “Regression test everything” sounds impressive but it doesn’t scale and can undermine your automated testing effort.
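Tag-based selection is one common way to keep regression runs inside the schedule instead of running everything every time. The suite, tags, and selection rule below are a minimal illustrative sketch, not any specific tool's API:

```python
# Hypothetical regression suite where each test carries descriptive tags.
SUITE = [
    ("test_invoice_totals", {"smoke", "billing"}),
    ("test_invoice_pdf",    {"billing"}),
    ("test_login",          {"smoke", "auth"}),
    ("test_password_reset", {"auth"}),
]

def select(suite, required_tags):
    """Pick only the tests carrying at least one of the required tags."""
    return [name for name, tags in suite if tags & required_tags]

# A short-schedule run might take just the smoke subset:
print(select(SUITE, {"smoke"}))  # ['test_invoice_totals', 'test_login']
# A billing release might add that area's tests:
print(select(SUITE, {"smoke", "billing"}))
```

Most test runners support this natively (for example, marker expressions in pytest or categories in JUnit), so the selection logic rarely needs to be hand-rolled; the sketch just shows the principle.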
Test Script Maintenance Saves Time
New functionality and the business case that it supports are the basis for test inception criteria. When new code creates test coverage gaps, an immediate decision is required to expand the test array or log the gap for future action.
When functionality and/or the code that supports it changes, the tests that verify that functionality need to be rewritten or retired. A test script’s fragility tends to vary inversely with the manual effort put into writing it. That said, any change to a UI screen or to the workflow that implements a business process can break every script associated with it.
Broken tests cost the one resource that is always in shortest supply: time. Every release verification should weed out the broken and obsolete test scripts it reveals. The code release process itself should incorporate and depend on this activity.
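A release-verification step that flags retirement candidates can be sketched as a simple pass over recent failure history. The run data, test names, and threshold below are hypothetical; in practice the failure sets would come from the test runner's result reports:

```python
from collections import Counter

# Hypothetical failure sets from the last few release verifications.
RUN_HISTORY = [
    {"test_old_workflow", "test_flaky_upload"},  # release 1.4 failures
    {"test_old_workflow"},                       # release 1.5 failures
    {"test_old_workflow", "test_new_report"},    # release 1.6 failures
]

def retirement_candidates(history, threshold):
    """Flag tests that failed in at least `threshold` recent runs
    for rewrite-or-retire review."""
    counts = Counter(name for failures in history for name in failures)
    return sorted(name for name, n in counts.items() if n >= threshold)

print(retirement_candidates(RUN_HISTORY, 3))  # ['test_old_workflow']
```

Wiring a report like this into the release pipeline makes the weeding-out step routine rather than an occasional cleanup drive.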