For years, fax device manufacturers have pondered how much accommodation they should make for non-T.30-compliant devices. Too little, and customers complain that they can't send their pages; too much, and development costs soar.
The roots of facsimile becoming the commodity service it is go back to the late ’60s and early ’70s, when representatives of the then completely proprietary fax industry met under the auspices of the CCITT to discuss the novel idea of interoperability between their products. Their companies were savvy enough to realize that the arrival of the integrated circuit meant a whole realm of consumer electronics was about to open up. If their machines could send pages to each other, these companies stood to sell a great many more fax terminals than they could if each could only communicate with its own equipment. Out of this realization, they set about hammering out an ‘official’ facsimile protocol. From this beginning T.30 was born and, from that beginning, a controversy was ignited.
The Birth of the T.30 Fax Protocol
As in all basically political exercises, making T.30 palatable to its creators was a matter of compromise. Signal durations, content, and intervals were specified with large tolerances, and provision was even made to exit the T.30 protocol once the call had started and make the actual page exchange completely proprietary again through the Non-Standard Facilities (NSF) feature. Once the manufacturers began making fax terminals that were ‘compliant’ with this new protocol, proprietary design traditions began to re-assert themselves. Over the intervening thirty years, T.30 has become more of an approximation of how fax should work than a specification. The CCITT, since reorganized as the ITU-T, correctly identifies T.30 as a ‘recommendation.’
A discussion of the vast array of protocol divergences common in modern fax terminals is considerably beyond the scope of this article, so I’ll focus on one signal as a talking example: TCF. The Training Check Field (TCF) was intended to be a connection quality test. Once the PSTN phone call had been placed and answered and the participating fax terminals had exchanged capabilities, the originator of the call sends the TCF as a transmission of 1.5 seconds of ’0′ bits using the agreed-upon modulation and data rate. The point of this exercise is for the answering terminal to examine the received TCF and confirm that, indeed, there are 1.5 seconds of ’0′ bits there without any ’1′ bit interruptions of that span. This was considered adequate proof of the connection’s ability to carry an encoded page image at that modulation and data rate.
Now before we go there, I am aware that TCF is not used by fax transmissions employing the V.34 modulation. The introduction of V.34 to fax has produced its own interesting set of interoperability challenges, and I will address some of those in future articles. For the purposes of this one, I am treating calls using V.17 and earlier modulations as still the most common mode of page transmission in the fax industry, given the slow turnover in the installed base of terminals. This is likely to remain so until the majority of Voice over IP (VoIP) gateway manufacturers begin to support version 3 of T.38 and V.34 really takes to the packet-switched networks.
Training Check Field or TCF
Going back to TCF, T.30 specifies it as 1.5 seconds of modulated ’0′ data +/- 10%, or 150 ms. Taken at face value, a ‘good’ TCF should be demodulated in the answering terminal as between 1350 ms and 1650 ms of uninterrupted ’0′s. The first common departure from this is that many fax terminals (some sold in very large numbers) will accept as little as 500 ms and as much as 2–3 seconds of ’0′s as a valid TCF. Though it raises the question of whether a too-short TCF shouldn’t require a drop back in data rate, this isn’t too big a problem, since the lenient range fully covers the specification’s tolerance span. Of greater concern is the matter of how the TCF data is parsed in the first place. T.30 doesn’t even address this issue.
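The two acceptance windows above can be sketched in a few lines. This is an illustrative sketch only, not code from any real fax stack; the function names and the exact lenient bounds (500 ms to 3 s) are assumptions drawn from the figures quoted in the text.

```python
# Hypothetical TCF duration checks, assuming we already have the count of
# demodulated '0' bits and the negotiated data rate in bits per second.

def tcf_duration_ms(bit_count: int, bps: int) -> float:
    """Duration of the received zero-bit run, in milliseconds."""
    return bit_count * 1000.0 / bps

def tcf_ok_strict(bit_count: int, bps: int) -> bool:
    """Literal T.30 reading: 1.5 s +/- 10%, i.e. 1350-1650 ms of '0's."""
    return 1350.0 <= tcf_duration_ms(bit_count, bps) <= 1650.0

def tcf_ok_lenient(bit_count: int, bps: int) -> bool:
    """What many deployed terminals reportedly accept: ~500 ms to ~3 s."""
    return 500.0 <= tcf_duration_ms(bit_count, bps) <= 3000.0

# At V.17's 14,400 bps, a nominal TCF is 14400 * 1.5 = 21,600 zero bits.
print(tcf_ok_strict(21600, 14400))   # nominal 1500 ms -> True
print(tcf_ok_strict(7200, 14400))    # 500 ms -> False under a strict reading
print(tcf_ok_lenient(7200, 14400))   # ...but accepted by lenient terminals
```

Note that the lenient window entirely contains the strict one, which is why this particular divergence causes so little trouble on its own.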
A TCF transmitted by the average fax machine starts with a short string of scrambled data, possibly because the modulating device (modem chip or DSP) does not have a stable ’0′ bit input before the modulating process begins. No matter; the output quickly settles down into the requisite ’0′ bit signal. However, this TCF has to go through a VoIP gateway, be turned into T.38 packets, and then be transmuted back into a modulated analog TCF. Since the IP connection is linking two roughly simultaneous analog fax calls, the receiving gateway will typically start sending ’0′ bits to the answering terminal before it has received the entire TCF from the emitting gateway, which may itself still be receiving it from the originating analog terminal. Use the diagram below to follow all this.
Now we have the gateways getting into the act of demodulating and then re-modulating the TCF. The result is that our 1.5 seconds of ’0′ bits is now preceded by the scrambled data from the start of the originator’s signal, which is in turn preceded by the ’0′ bits the receiving gateway stuffed in as it tried to avoid tripping the answering terminal’s timeout. The answering terminal looks at this ’0′s / ’1′s / ’0′s data and has to choose which group of ’0′s it is going to judge against its requirements for a ‘good’ TCF. If it looks at the run added by the receiving gateway, it probably won’t see enough to avoid issuing a Failure To Train (FTT), causing a return to the data rate negotiations. Sending the next TCF over the same circuit will likely draw the same FTT response, ultimately disconnecting the call before a single page can be transmitted.
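To make the ’0′s / ’1′s / ’0′s shape concrete, here is a sketch of what a naive parser sees. The run lengths are invented for the example (the gateway-stuffed prefix and the scramble pattern are assumptions); only the zeros/ones/zeros structure matters.

```python
from itertools import groupby

# Illustrative reconstruction of the bit stream the answering terminal's
# demodulator might see after the gateways re-modulate the signal:
# gateway-stuffed zeros, the originator's scrambled start-up data,
# then the real 1.5 s TCF (21,600 zero bits at V.17's 14,400 bps).
received = "0" * 900 + "110101" + "0" * 21600

# Collapse the stream into (bit, run_length) pairs.
runs = [(bit, len(list(g))) for bit, g in groupby(received)]

first_zero_run = runs[0][1]                # the gateway-stuffed zeros
print(first_zero_run * 1000.0 / 14400)     # prints 62.5 (milliseconds)
# A parser that judges only this first run sees ~62 ms of '0's -- far
# too short -- and answers FTT, though a perfectly good TCF follows.
```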
Fax Conformance vs. Interoperability from the T.30 Standpoint
Now we come to the crux of the question: how do the manufacturers of terminals that must handle transmissions through Fax over IP (FoIP) gateways cope with this problem? If they control development costs by making the least complex interpretation of T.30 and looking at only the first run of ’0′s, their terminal will not work with these gateways… ever. If they want to be truly interoperable and can stretch their development budget to accommodate that wish, their T.30 implementation can parse through the first run of ’0′ bits and the scrambled data bits until it finds the real TCF run of ’0′s. When simple compliance isn’t enough, interoperability requires reaching beyond the specification and creating a truly robust signal parsing and analysis capability… or does it?
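One way to picture the robust approach is a parser that scans the whole demodulated buffer and judges the longest uninterrupted run of ’0′s rather than the first one. This is a minimal sketch under that assumption; real implementations would work on live demodulator output and tolerate sparse bit errors rather than demanding a perfectly unbroken run.

```python
from itertools import groupby

def best_zero_run_ms(bits: str, bps: int) -> float:
    """Return the duration, in milliseconds, of the longest uninterrupted
    run of '0' bits anywhere in the buffer -- the 'real' TCF candidate."""
    longest = max((len(list(g)) for bit, g in groupby(bits) if bit == "0"),
                  default=0)
    return longest * 1000.0 / bps

# Same hypothetical post-gateway stream as before: stuffed zeros,
# scrambled start-up data, then the originator's real TCF.
received = "0" * 900 + "110101" + "0" * 21600

print(best_zero_run_ms(received, 14400))   # prints 1500.0 -- a 'good' TCF
```

The extra cost over the naive first-run check is one pass over the buffer; the payoff is that gateway-stuffed prefixes and start-up scramble no longer trigger spurious FTTs.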
An increasing number of manufacturers are looking at this question of fax conformance vs. interoperability from the standpoint that T.30 was written for a phone system that no longer exists. They reason that, in the 21st century, phone calls flow over digital trunks connected by modern communication systems, and the noisy, distorted connections of the past are just that: history. In this brave new telecom world, why bother with TCF at all? Why not simply accept whatever shows up at the answering terminal as ‘good’ and continue with page transmission? Virtually all fax calls use Error Correction Mode (ECM) now and, if ECM can’t retrieve the page data, the call should simply be dropped and redialed.
So what do you think? Should fax terminal and VoIP gateway manufacturers be strict in their T.30 interpretations? Should they stretch their systems to ferret out ‘real’ T.30 signals from the worst of the protocol violators for maximum interoperability? Or should they simply cut the Gordian interoperability knot and dispense with as much of T.30’s variability as they think they can get away with?
If you’re having a fax interoperability problem check out our fax test tools and services.