Ideally, functional testing follows a carefully designed test plan. Those who spend their days in the Agile/Continuous Integration development trenches will doubtless view the phrase ‘carefully designed test plan’ with understandable skepticism. Modern software development is schedule-driven: new versions ship with only small, incremental changes, and do so on a weekly or bi-weekly basis. This leaves little time and few personnel resources to maintain detailed test cases, much less thorough test plans.
A workaround is to employ use cases to determine and manage the test case array. The idea behind use cases is that any software system is designed around features a user wants to exercise to achieve specific results. A use case takes one such feature and posits how it should work and what results it should deliver. Often, use cases reflect business requirements that cause commands and data to move through the system to complete a transaction. The basic idea is always the same: a set of user actions causes responses that result in system data and/or state changes.
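To make this concrete, here is a minimal sketch of a use case captured as data, with test cases derived from its main flow and variant flows. All names here (the `UseCase` fields, the checkout scenario) are hypothetical illustrations, not part of any particular framework.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    feature: str                    # the feature the user exercises
    actor: str                      # who performs the actions
    steps: list[str]                # the sequence of user actions
    expected_result: str            # the data/state change that should result
    variants: list[str] = field(default_factory=list)  # alternate flows

def derive_test_cases(uc: UseCase) -> list[str]:
    """One test case for the main flow, plus one per variant flow."""
    cases = [f"{uc.feature}: main flow -> {uc.expected_result}"]
    cases += [f"{uc.feature}: variant '{v}'" for v in uc.variants]
    return cases

checkout = UseCase(
    feature="Checkout",
    actor="Registered customer",
    steps=["add item to cart", "enter shipping address", "pay by card"],
    expected_result="order confirmed and inventory decremented",
    variants=["expired card is rejected", "empty cart blocks checkout"],
)

for tc in derive_test_cases(checkout):
    print(tc)
```

The point of the sketch is the one-to-many mapping: a single use case drawn from the design specification fans out into a main-flow test plus one test per variant, which is exactly the relationship the text describes.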
Drawing test cases from use cases allows QA to plan testing directly from the system design specification. Each use case yields one or more test cases that verify its correct operation. This helps deal with the crush of Agile development schedules, but it does not cover everything that needs to be tested.
Some Pointers on Use Case Testing
A use case often presupposes a sequence of prior actions that prepare the system for the action described in the case. When the required tests can be performed in this sequence, the earlier tests leave the system prepared to support the case in question. That said, troubleshooting an issue often requires repetitive execution of a test case, which means resetting the system to the supporting state before each run.
Simple state setups can be done manually. However, complex setups requiring database record manipulation, state indicator settings, and third-party service stubs can consume more time than multiple runs of the test warrant. Complex setup tasks are best handled by a test automation framework: a script is executed that puts all the pieces in place for a test. The test itself can then be performed manually if there are variations to try out, or automated outright if a single operational activity can be defined.
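A setup script of this kind might look like the following sketch, which seeds an in-memory database and stubs a third-party service so the system can be reset to the same state before every run. The `orders` table and `PaymentGateway` stub are purely illustrative assumptions.

```python
import sqlite3
from unittest.mock import Mock

def prepare_test_state() -> tuple[sqlite3.Connection, Mock]:
    """Reset the system to a known state: fresh DB records plus a service stub."""
    db = sqlite3.connect(":memory:")          # brand-new database every call
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    db.execute("INSERT INTO orders (status) VALUES ('pending')")
    db.commit()

    gateway = Mock(name="PaymentGateway")     # third-party service stub
    gateway.charge.return_value = {"approved": True}
    return db, gateway

# Each troubleshooting run starts from the identical state:
db, gateway = prepare_test_state()
status = db.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0]
print(status)  # the seeded record is always 'pending'
```

Because the whole setup is one callable, it costs the same whether the test is run once or twenty times in a row, which is what makes repetitive troubleshooting executions practical.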
Functional testing typically follows a progression from the specific to the general. The earlier unit and integration tests do not lend themselves to use cases as they tend to be more granular than feature level tests. Feature verification, user acceptance, regression testing, and sanity testing will all be much more amenable to organization by use cases.
A dearly won maxim of software QA is that test documentation is golden. Use every opportunity to document test case processes and expected outcomes. Cross-reference use case-based tests with sections of the design specification where possible. Use the defect reporting system to document test steps and results for each bug report, and make copious use of wiki articles if your company supports that resource.
Test Automation Opportunities
Keep a watchful eye out for test automation opportunities. While rapidly changing code makes a poor target for automation, complex setups (as noted above), mature features, and regression tests are good candidates. A use case-based test that is run as-is multiple times per release is one that should be automated. Automation also advances documentation, since test scripts are themselves documents that guide test case execution.
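A use-case test that runs unchanged every release reduces to a table of inputs and expected outcomes plus a loop, as in this hedged sketch. The `apply_discount` feature is a stand-in for any mature feature under regression; the case table doubles as documentation of the expected behavior.

```python
def apply_discount(total: float, code: str) -> float:
    """Stand-in for a mature, stable feature under regression test."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# Each tuple documents one use-case scenario: inputs and expected outcome.
REGRESSION_CASES = [
    (100.0, "SAVE10", 90.0),
    (80.0, "SAVE25", 60.0),
    (50.0, "BOGUS", 50.0),   # unknown codes leave the total unchanged
]

def run_regression() -> int:
    """Run every case; return the number of failures."""
    failures = 0
    for total, code, expected in REGRESSION_CASES:
        actual = apply_discount(total, code)
        if actual != expected:
            failures += 1
            print(f"FAIL {code} on {total}: got {actual}, wanted {expected}")
    return failures

passed = len(REGRESSION_CASES) - run_regression()
print(f"{passed} of {len(REGRESSION_CASES)} cases passed")
```

Note how the script itself documents the test: anyone reading `REGRESSION_CASES` can see exactly which scenarios are exercised and what results they should produce.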
Two areas that tend to run wide of use cases are sanity checks and exploratory testing. Sanity checks are intended to simply touch each system feature to verify its active presence rather than its detailed functionality. Exploratory testing is, by nature, an ad hoc process where the test tech is creatively exploring the ramifications and implications of a line of investigation that will usually go well beyond a use case.
Use Case Testing Coverage
The ongoing pursuit of software QA organizations has been 100% test coverage: making sure that all aspects and capabilities of the system have been tested in all their permutations. Numerous tools are available to measure it, a wide array of QA methodologies pursue it, and, though dearly held, it remains an unreachable goal. Use cases are no cure-all for this problem, but they do help make coverage as extensive as possible.
Targeted testing verifies specific scenarios that must be carefully examined due to their impact on system functionality, but it always lacks a big-picture overview. Following each feature through its usage options and its combinations with other features pushes the testing process to cover as much of the code as possible.
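One simple way to get that big-picture overview is to track which specification features are touched by at least one use-case test and report the gaps. The feature and test names below are hypothetical placeholders for whatever the design specification lists.

```python
# Features enumerated in the design specification (illustrative names).
SPEC_FEATURES = {"login", "checkout", "search", "order_history"}

# Use case-based tests recorded against the feature they exercise.
TESTS_BY_FEATURE = {
    "login": ["test_valid_login", "test_locked_account"],
    "checkout": ["test_card_payment"],
    "search": ["test_keyword_search"],
}

covered = set(TESTS_BY_FEATURE) & SPEC_FEATURES
untested = sorted(SPEC_FEATURES - covered)
coverage_pct = 100 * len(covered) / len(SPEC_FEATURES)

print(f"feature coverage: {coverage_pct:.0f}%")
print(f"untested features: {untested}")
```

This measures feature-level coverage rather than line coverage, which is exactly the grain at which use cases operate: the report tells you which specification features still have no use-case test at all.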