
Integration Testing – The Software Meets the Road


Where unit tests are deeply involved in the functionality of individual code modules, integration testing is focused on the system as a whole, the complete combination of those modules, and how they communicate with each other. As its name implies, integration is also the first place where the system is tested as a functional entity. It is where the code has to live up to global operational requirements that go beyond the simple functionality described in the module specs.

What Do Integration Tests Test?

Integration testing is the polar opposite of unit testing. Having said that, the two are still complementary, and the sins of one can sometimes be redeemed in the other. In addition to verifying module interactions, integration testing examines the system from the user interface down to the back end’s data management functions. It is the first place where the tests begin to use the code as if it were actually the intended product.

An obvious question suggests itself: if unit testing is performed properly, why is integration testing necessary at all?

For starters, the system’s essential modules are typically written by a staff of programmers. Programmers are highly skilled and creative people, and they will not all read the design specs or approach coding a feature the same way. This can and will lead to disconnects and glitches when their modules must talk to each other. Add to this that the specs themselves are likely to change while the modules are being written, making some aspects of what the programmers originally wrote unnecessary.

This is reason enough for integration testing but, as the television commercial exhorts, ‘Wait, there’s more!’ Unit tests exercise inter-module communications using software stubs that simulate the rest of the system. These stubs are make-do workarounds, and when the system’s modules actually have to communicate, data formatting, error trapping, hardware interfaces, and third-party service interfaces tend to surface issues that were not observable at the module level.
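To make that concrete, here is a minimal sketch (all class and key names are hypothetical) of how a unit-test stub can hide an interface mismatch that only shows up when the real modules are wired together:

```python
class StubInventoryService:
    """Unit-test stub: always answers in the format the caller expects."""
    def get_stock(self, sku):
        return {"sku": sku, "count": 5}

class RealInventoryService:
    """Real module written by another team: same data, different key name."""
    def get_stock(self, sku):
        return {"sku": sku, "quantity": 5}  # 'quantity', not 'count'

def can_ship(service, sku):
    """Module under test: assumes the stub's data format."""
    return service.get_stock(sku)["count"] > 0

# The unit test passes happily against the stub...
assert can_ship(StubInventoryService(), "A-100") is True

# ...but the same call fails at integration time, when the real module is used.
try:
    can_ship(RealInventoryService(), "A-100")
except KeyError as missing:
    print(f"integration defect: missing key {missing}")
```

Both modules pass their own unit tests; the defect lives in the seam between them, which is exactly where integration testing looks.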

Integration testing isn’t the end of the verification process, but it is a necessary step in preparing the system code for acceptance testing, where it must actually perform its functions for real users and produce the desired and expected results.

Integration Test Insights

Integration test cases should focus on the data interfaces and flows between the code modules of the system. A typical case would exercise a data entry function in one module and check that the data is correctly reflected in the database record written by another module. When completed successfully, the test has verified the process of accepting data, transmitting it, and storing it across a string of code modules.
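A typical case of that kind can be sketched as follows. This is a hypothetical example (the module functions and schema are invented for illustration) that follows one piece of data from an entry function, through a hand-off, into a database record written by a second module:

```python
import sqlite3

def accept_order(customer, amount):
    """Module A: data entry — normalizes the raw input."""
    return {"customer": customer.strip(), "amount": round(amount, 2)}

def store_order(conn, order):
    """Module B: persistence — writes the record another module produced."""
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                 (order["customer"], order["amount"]))
    conn.commit()

def test_order_flows_to_database():
    # Integration test: exercise the whole chain, not each module in isolation.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    store_order(conn, accept_order("  Acme Corp ", 19.999))
    row = conn.execute("SELECT customer, amount FROM orders").fetchone()
    assert row == ("Acme Corp", 20.0)  # data survived the whole chain intact

test_order_flows_to_database()
```

The assertion checks the end of the chain, so a failure anywhere along it (entry, transmission, or storage) will surface in this one test.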

While completion of the module unit tests should be the gating condition for testing the integration of those modules, unit tests will not catch all of a module’s functional issues. Some of these only become apparent when the module is exercised in the context of the system itself, which means the integration tester has to stay on the lookout for functional issues as well. As stated above, integration and unit testing are complementary but not completely independent of each other.

Integration Testing for the Agile Era

Integration testing was once a relatively isolated province of the waterfall development model. The code for a release was assembled into a system, and integration tests were performed to see that it worked well enough for verification tests against the marketing product specifications. That was then; this is now.

Agile development in support of continuous release means that modules are plugged into the system and changed on an as-needed basis. This requires that integration testing be done incrementally and that it strike a balance among the changes to related modules. Often, drivers that call specific functionality and service stubs that simulate other parts of the system are necessary components of integration testing as well as module testing. Once again, these two parts of the software quality process wind up more intertwined than their definitions suggest.

Integration Testing Planning and Approach

The planning and approach for integration testing are both crucial. Test plans must not only specify what is being tested; they must pointedly call out what is not tested. Listing test exclusions is vital to pointing the next phase, user acceptance testing, at where it needs to look most closely. The test environment must also be carefully constructed. It should be grown from the development system the code was written on, and the stubs and drivers used during development should be re-evaluated to verify that they stimulate the modules correctly.

Last but far from least, defect documentation is extremely critical for integration. Defect reports must fully explain the context in which a defect was seen, the steps to reproduce it, and its impact on system operation. These reports go back to the programmers for bug-fix work and forward to user acceptance testing to support the regression tests that make sure the fixes stay in place and work as expected.

Where the Software Meets the Road

Integration tests are designed to catch the defects that only appear when the code interacts with real-world operating scenarios. They are critical to the success of your software and the success of your users!


Author:

Gary James, President/CEO

Over the last 35+ years, Gary James, the co-founder and president/CEO of QualityLogic, has built QualityLogic into one of the leading providers of software testing, digital accessibility solutions, QA consulting and training, and smart energy testing.

Gary began his QA journey as an engineer in the 1970s, eventually becoming the director of QA in charge of systems and strategies across the international organization. That trajectory provided the deep knowledge of quality and process development that has served as a critical foundation for building QualityLogic into one of the best software testing companies in the world.

In addition to leading the company’s growth, Gary shares his extensive expertise in Quality Assurance, Software Testing, Digital Accessibility, and Leadership through various content channels. He regularly contributes to blogs, hosts webinars, writes articles, and more on the QualityLogic platforms.