Functionality testing is the examination of a coded system’s response to expected usage. Note that this definition says nothing about cosmetic appearance, performance, security, or compliance of any sort.
What is Functionality Testing?
Functional testing is the process of verifying that a system performs as expected when its features are exercised by another system or directly by a user. This means it lends itself nicely to test case and use case definitions that provide a stable, repeatable basis for evaluating the progress of system development.
Functional verification covers the entire range of the development process.
- Unit tests should start at the very beginning to ensure that each block of code transforms its inputs into the outputs expected by the next module.
- Integration tests ensure that the unit modules connect to each other as expected and convey data and commands throughout the system per the specifications to which it was built.
- Sanity checks verify that modifications and fixes applied to the code base don’t have unexpected side effects in apparently unrelated parts of the system.
- Regression tests verify that later feature additions and bug fixes don’t undo previous efforts or interact with them to cause wholly new problems.
- Usability acceptance is the actual operation of the system in the context it was designed to be used in, and it is the gateway to deployment.
All of the above are varieties of functional testing and they all contribute to the creation of a software system that is ready to deploy for its intended use.
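As a concrete illustration of the first link in that chain, the sketch below shows a minimal unit test. The `slugify()` helper and the test names are hypothetical; a real project would let a runner such as pytest or unittest discover and execute tests like these.

```python
# A minimal unit-test sketch. slugify() is a hypothetical function
# under test, invented for illustration.
def slugify(title: str) -> str:
    """Code under test: convert a title to a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_basic_title():
    # A known input must map to the specified output.
    assert slugify("Hello World") == "hello-world"

def test_extra_whitespace():
    # Edge case: leading, trailing, and repeated spaces collapse.
    assert slugify("  Trim   Me  ") == "trim-me"

# Invoked directly here; a test runner would normally discover these.
test_basic_title()
test_extra_whitespace()
```

Each test checks one behavior of one block of code, which is what makes failures easy to localize.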
Functional Testing Approaches
Three approaches are commonly used to implement functional testing.
1. Black Box Testing
Black box testing feeds an array of inputs to the system and checks for the generation of specified outputs. The idea behind the name is that the contents of the code under test are unknown to the test case and, by definition, to the tester, who is concerned only with verifying function.
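A black box test can be sketched as a table of specified inputs and required outputs, exercised purely through the public interface. The `parse_price()` function and its contract below are invented for illustration; a stub body is included only so the sketch runs, and the test itself never looks inside it.

```python
# Black box sketch: only the input/output contract of a hypothetical
# parse_price() is known to the test, not its implementation.
def parse_price(text: str) -> float:
    # Implementation is opaque to the tester; stubbed so this runs.
    cleaned = text.replace("$", "").replace(",", "").strip()
    return round(float(cleaned), 2)

# Specified input -> expected output pairs, taken from requirements.
cases = [
    ("$1,234.50", 1234.50),
    ("99", 99.0),
    ("  $0.99 ", 0.99),
]

for given, expected in cases:
    assert parse_price(given) == expected
```

Because the cases come from the specification rather than the code, the same table remains valid even if the implementation is completely rewritten.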
2. White Box Testing
White box tests are at the other end of the spectrum. They are predicated on knowing exactly what is going on inside the code under test, and they are executed primarily to verify the robustness of the code rather than its absolute functionality.
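A white box sketch, by contrast, is written from a reading of the code itself: the tester sees the internal branches and aims a test at each one. The `discount()` function and its branch structure below are hypothetical.

```python
# White box sketch: the tester can see that a hypothetical discount()
# has three branches, so one test is aimed at each path.
def discount(total: float, is_member: bool) -> float:
    if total <= 0:
        raise ValueError("total must be positive")  # error path
    if is_member:
        return total * 0.5   # member branch: half price
    return total             # default branch: no discount

# One assertion per internal branch, chosen by reading the code.
assert discount(100.0, is_member=True) == 50.0
assert discount(100.0, is_member=False) == 100.0
try:
    discount(0, is_member=True)
    raise AssertionError("error branch was not taken")
except ValueError:
    pass  # error branch covered
```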
White box testing is performed at the beginning of the development process with unit tests and into the early parts of the integration phase. Black box testing is typical of the later phases, where response to specific operating scenarios is important.
3. Gray Box Testing
The third type of testing is a mixture of black and white box testing. It comes into play as development moves through the crossover zone between the end of integration and the beginning of usability testing.
Obviously, black box testing lends itself to closely defined test cases and rigorously defined test results. These are tests that can be performed by test techs who are capable of carefully following test plan instructions and meticulously documenting results.
White box testing is best performed by software engineers who understand how the code is written and the permutations of how it is expected to perform.
Functional Testing in an Agile Environment
The move from the waterfall SDLC to Agile development, and from there to Continuous Integration and on to DevOps, has put hard pressure on the software quality paradigm. With the move to Agile in particular, test time came to be viewed as a luxury and, therefore, as somewhat dispensable. The pushback on this has been brutal.
Apps released without sufficient testing were savaged by customer reviews on an Internet that makes bad news travel at supersonic speed. A logical outgrowth of this situation has been the drive toward test automation, and it makes a lot of sense in today’s high-velocity development environments. Unfortunately, it has also fostered the unfounded expectation that all testing should be automated and that automation will cure all development ills.
Automation works best where a test is well defined and must be performed many times, or requires a very complex advance system setup. Tests that vary from release to release, require human cognition (think intuitive user interface validation), or need ad hoc variation, as in exploratory testing, are poor candidates for automation. Scripts written for unsuitable tests incur maintenance costs and fail because of test design errors, leading to their ultimate abandonment.
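The kind of test that does suit automation can be sketched as a table-driven suite: fully specified, deterministic, and cheap to rerun on every build, with no human judgement in the loop. The `checksum()` function and its cases below are invented for illustration.

```python
# Sketch of an automation-friendly test: stable inputs and outputs,
# deterministic results, trivially rerunnable on every commit.
# checksum() is a hypothetical function under test.
def checksum(data: bytes) -> int:
    return sum(data) % 256

# Table-driven cases, stable from release to release.
CASES = [
    (b"", 0),
    (b"\x01\x02\x03", 6),
    (b"\xff\xff", 254),
]

def run_suite():
    """Return a list of (input, expected, actual) for any failures."""
    failures = []
    for data, expected in CASES:
        actual = checksum(data)
        if actual != expected:
            failures.append((data, expected, actual))
    return failures

assert run_suite() == []
```

A CI system can run a suite like this on every commit and report the failure table, which is exactly the repetitive, well-defined work automation is good at.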
Test documentation is golden. As much as the press for quick development and short release intervals pushes quality teams to take shortcuts, documentation is too valuable to disregard. Test automation, where appropriate, helps document test processes, as does careful completion of defect reports that document regression tests.