Software systems have had at least some need to share data and status (interoperate) with each other for over half a century.
For humans, interoperability has typically been a matter of clarity and completeness of content in written and spoken communication. As situational stress mounts, managing both of these aspects becomes more and more challenging, until the interactions the communications were meant to support begin to disintegrate. A simple example is giving orders to a military unit marching on a parade ground versus in the middle of a battle.
Systems interoperability used to be confined to mechanical and electrical qualities that could be easily and directly measured. Was the width of the strut the correct size to fit in the hole provided for it? Was the electrical power the right voltage and frequency for the motor that was connected to it?
Extending this concept to software is vastly hindered by the fact that software interactions are not visible to the naked eye. There is no simple way to directly observe whether the interacting processes are proceeding correctly until a visible/measurable output is emitted by the system. Unfortunately, this typically happens after numerous software modules have linked and exchanged control and data messages with each other. All this makes tracking down the causes of interoperability problems a substantial effort.
Interoperability Testing: What and Why?
The interoperability issues within a specific software application are resolved during the integration of its component modules during the development process. A much more common issue for software quality is testing the interoperation of software applications that are expected to use a common communications protocol.
Software that is expected to communicate over an Ethernet network will commonly use the Transmission Control Protocol combined with the Internet Protocol, more commonly known as TCP/IP. An email client program might need to connect with an email server using TCP/IP; it will rely on the driver software of the client's host computer to provide this interface, as will the email server software on its host.
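As an illustration (not part of any real email implementation), the sketch below uses a Python socket pair to stand in for the TCP/IP link: one end plays the server sending an SMTP-style greeting, the other plays the client reading it, with the OS socket layer taking the role of the driver software. The hostname and greeting text are invented.

```python
import socket

# Illustrative sketch only: a socketpair stands in for a real TCP/IP
# network link between an email client and an email server. The
# greeting mimics the style of an SMTP exchange but is not a full
# implementation of that protocol.
server_sock, client_sock = socket.socketpair()

# "Server" side sends a greeting, as an SMTP server would on connect.
server_sock.sendall(b"220 mail.example.com ready\r\n")

# "Client" side reads the greeting; the OS socket layer plays the
# role of the network interface driver in the chain described above.
greeting = client_sock.recv(1024)
print(greeting.decode().strip())  # 220 mail.example.com ready

client_sock.close()
server_sock.close()
```

Each hop in the chain adds a place where the exchange can silently go wrong, which is why the links are tested one at a time.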
Note that we have the following software connection chain: email client to network interface driver to network hardware to another network interface driver to email message server software. An issue with any one of these linkages can degrade or stop the operation of the entire chain.
Interoperability testing is performed at each of these junctures to verify that the software on either side of each link correctly employs the protocols that govern that link.
It is important to verify each link independently of the others, to keep the causes of functionality issues from disappearing into a complex web of near-simultaneous connections. For example, verifying the connection between the email client and the network interface driver makes sure the client can correctly handle all the standard communications this link requires and deal with any expected error recovery actions. The same holds true for each of the other links.
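The idea of exercising one link in isolation can be sketched as follows. Both `EmailClient` and `StubDriver` are hypothetical classes invented for illustration; the point is that the client is tested against a stub standing in for the driver, so any failure can only come from the client's side of that one link.

```python
# Minimal sketch, assuming an invented client/driver interface.

class StubDriver:
    """Stands in for the network interface driver: records what the
    client hands to it and returns canned responses."""
    def __init__(self, responses):
        self.sent = []
        self.responses = list(responses)

    def send(self, data):
        self.sent.append(data)
        return self.responses.pop(0)

class EmailClient:
    """Hypothetical client: sends a hello, checks the driver's reply."""
    def __init__(self, driver):
        self.driver = driver

    def connect(self):
        reply = self.driver.send(b"EHLO client.example.com\r\n")
        return reply.startswith(b"250")

# Verify the client's side of this one link, with everything beyond
# the driver boundary replaced by the stub.
driver = StubDriver([b"250 mail.example.com greets you\r\n"])
client = EmailClient(driver)
ok = client.connect()
print(ok)           # True
print(driver.sent)  # [b'EHLO client.example.com\r\n']
```

The same stub technique works at the other junctures in the chain: swap in a stub client to test the driver, and so on.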
It is important to note that most communication and control protocols have built-in provisions for error handling. Expecting everything to go smoothly without problems is not realistic, and modern protocols are prepared to recover from several common communication missteps.
Both the functionality of the expected span of communications and the complete range of error correction capabilities must be verified in each software application that uses the protocol.
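A minimal sketch of verifying an error-recovery path as well as the happy path. The retry rule here is invented for illustration (a frame is retried up to three times on a simulated timeout) and does not reproduce any real protocol's recovery procedure:

```python
# Sketch of testing error recovery, under an assumed retry-on-timeout
# rule: the sender retries a lost frame up to max_tries times.

class FlakyLink:
    """Simulated link that fails a fixed number of times, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.attempts = 0

    def transmit(self, frame):
        self.attempts += 1
        if self.failures > 0:
            self.failures -= 1
            return None          # simulate a timeout / lost frame
        return b"ACK"

def send_with_retry(link, frame, max_tries=3):
    for _ in range(max_tries):
        if link.transmit(frame) == b"ACK":
            return True
    return False

# Recovery case: two injected losses, success on the third attempt.
good_link = FlakyLink(failures=2)
assert send_with_retry(good_link, b"FRAME") is True
assert good_link.attempts == 3

# Beyond the tolerance the sender must give up cleanly, not hang.
bad_link = FlakyLink(failures=5)
assert send_with_retry(bad_link, b"FRAME") is False
```

Injecting faults like this is the only way to confirm that the recovery provisions written into a protocol are actually exercised by an implementation.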
One of the touchstones of this definition of communication by protocol is that any application that complies with the rules of the protocol is, by definition, interoperable with any other application that complies with it. The problem with this is that the protocols themselves are written with timing and operational tolerances to facilitate combining dissimilar software and hardware systems. This can lead to implementations of a protocol that have issues with other implementations even though they all comply with the letter of the rules laid out in it.
The upshot of all this is that interoperability testing must verify that the tested interactions both comply with the protocol they are expected to use and do not bend those rules defined in the protocol beyond the point at which functionality is degraded.
An example is in order. Facsimile, better known as fax, has been in widespread use for nearly a century. The companies that created individual proprietary fax communication protocols agreed upon the ITU-T T.30 protocol to govern the interoperability of fax terminals over 40 years ago. There are now in excess of 100 million fax machines in the worldwide installed base of telephony equipment.
From the above, a reasonable assumption would be that the T.30 fax protocol is so well defined, in both documentation and common usage, that interoperability issues no longer arise around its implementation. And yet, interoperability between devices utilizing the T.30 protocol is still an ongoing issue. A given of implementing a T.30 interface is the necessity of extensive interoperability testing to verify that all the other devices (or at least a majority of them) will work with yours.
The process of performing interoperability tests:
- Research the protocol to document use and error recovery cases. For fax, this means creating test cases for all the possible paths through the T.30 call negotiation flowchart.
- Create an ability to exercise the Device Under Test (DUT) through all these test conditions. To test fax, this involves creating or purchasing a system that can control all the T.30 call parameters.
- Execute the necessary test suite observing and recording the exercise of all test parameters and, for fax, call success/failure symptoms.
- Analyze test results to isolate both common and unique causes of test failures and operational degradation symptoms. Did the fax calls disconnect in the middle of fax negotiation or page transfer, and if so, why?
- Assess the impact of the discovered operational issues and whether or not they are worth the engineering and development costs of correction. Is the system's ability to make a fax call and transfer the pages sufficiently injured that the problem has to be fixed?
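The steps above can be sketched as a skeletal test harness. The DUT interface, the fake fax device, and the test cases below are all hypothetical stand-ins for a real T.30 test rig; only the execute/record/analyze structure is the point.

```python
# Skeletal harness, assuming an invented DUT call interface.

def run_suite(dut_call, test_cases):
    """Execute each case against the device under test, recording
    success/failure symptoms for later analysis."""
    results = []
    for case in test_cases:
        try:
            outcome = dut_call(**case["params"])
            results.append({"name": case["name"],
                            "passed": outcome == case["expected"],
                            "symptom": None})
        except Exception as exc:  # e.g. a disconnect mid-negotiation
            results.append({"name": case["name"],
                            "passed": False,
                            "symptom": str(exc)})
    return results

# Hypothetical DUT stub: rejects a modem speed it does not support.
def fake_fax_dut(speed):
    if speed > 14400:
        raise RuntimeError("disconnect during negotiation")
    return "page transferred"

cases = [
    {"name": "v17_14400", "params": {"speed": 14400},
     "expected": "page transferred"},
    {"name": "v34_33600", "params": {"speed": 33600},
     "expected": "page transferred"},
]

results = run_suite(fake_fax_dut, cases)
failures = [r for r in results if not r["passed"]]
print([r["name"] for r in failures])  # ['v34_33600']
```

The recorded symptoms are what feed the analysis and impact-assessment steps: failures that share a symptom usually share a cause.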
This process applies to all interoperability tests. They are all designed to make sure that the systems under test will work with each other in support of, or in some cases in spite of, the protocols intended to govern their communications.