Today’s website development, and ecommerce in particular, is driven by marketing schedules. The point of the exercise is to get marketing campaigns, new site features, and new products and services in front of the customer base as soon as possible. In the service of this push to shorten release cycles from the months of waterfall methodology to the days of Agile velocities, test automation has been put forward as the solution to everyone’s problems.
What could make more sense than having a machine test a machine? If the customers were machines, automating website testing would likely live up to those expectations. However, they aren’t, and it doesn’t. Automation does well in many test situations, but it is not the cure for everything that ails software QA, and it has its own challenges.
1. Maintenance vs. Rapid Code Changes
Automation works best where it has to deal with minimal code base changes. Writing test automation scripts requires the same skill set as developing functional product code. It also requires a different mindset: one oriented toward probing code structures rather than creating them. This means that engineering management will always be tempted to pull resources away from automated test script development, and especially maintenance, to meet demanding development schedules.
And maintenance is a primary driver of test automation work. Websites are composed of software that can be changed as fast as a developer can code. This supports the rapid movement of marketing and product management’s ideas to the production site. It also means that those automated test scripts have to change with the code, or they will begin producing false failures that ultimately cause them to be abandoned.
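One common way to contain this maintenance cost is the Page Object pattern, which confines page locators to a single class so a markup change costs one edit rather than dozens scattered across scripts. The sketch below is illustrative only: `LoginPage`, its selectors, and the `FakeDriver` stand-in for a real browser driver are all hypothetical names, not from any particular framework.

```python
# Minimal Page Object sketch. Locators live in one class, so when the
# site's markup changes, only that class is edited -- the test scripts
# that call login() are untouched.

class FakeDriver:
    """Stand-in for a real browser driver; maps CSS selectors to elements."""
    def __init__(self, elements):
        self.elements = elements
        self.submitted = []

    def find(self, selector):
        return self.elements[selector]


class LoginPage:
    # A site redesign that renames these IDs means editing only these lines.
    USER_FIELD = "#email"
    PASS_FIELD = "#password"
    SUBMIT_BTN = "#login-submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # Fill both fields, then record the submit click on the driver.
        self.driver.find(self.USER_FIELD)["value"] = user
        self.driver.find(self.PASS_FIELD)["value"] = password
        self.driver.submitted.append(self.SUBMIT_BTN)


driver = FakeDriver({"#email": {}, "#password": {}, "#login-submit": {}})
LoginPage(driver).login("user@example.com", "secret")
```

With a real WebDriver in place of `FakeDriver`, the same principle holds: scripts express intent ("log in"), the page object owns the selectors, and maintenance concentrates in one file.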
> The touchstone of a successful test automation program is its staunch devotion of personnel resources to test script maintenance.
2. Sanity Checks and Regression Tests
If the maintenance challenge begins to sound insurmountable, wait, there’s more. Regression testing is the verification that a particular feature addition/bug fix remains in place in the system code and is still effective. It is also used to make sure that new changes haven’t combined with the old ones to cause defects that aren’t the fault of either code change but result from an unfortunate interaction between them.
Regression testing is as necessary as system functional verification, and it is actually a good candidate for automation because its goals are very predictable. Assessing regression test results, however, requires human judgment to determine whether an apparent defect is real, and how severe it is, since it may be an intentional change in behavior rather than a problem.
One way to reduce the work time invested in repetitive regression testing is to implement one or more sanity checks. Sanity checks are intended to touch every part of the system to verify that each is still present and at least minimally functional. Since a common symptom of a code conflict is that some feature stops working, this is a good way to confirm that the latest features and fixes haven’t broken anything they shouldn’t have touched at all.
A sanity check should take no more than a few hours to run, yielding quick go/no-go results.
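A sanity suite of this shape can be as simple as a registry of fast checks, one per subsystem, where any single failure flags the build as no-go. The check names and always-pass bodies below are placeholders; in practice each would hit a real endpoint or page.

```python
# Hedged sketch of a go/no-go sanity harness: each registered check
# touches one subsystem and returns True (healthy) or False (broken).
# A single failure turns the overall verdict to NO-GO.

SANITY_CHECKS = []

def sanity_check(fn):
    """Decorator that registers a function as part of the sanity suite."""
    SANITY_CHECKS.append(fn)
    return fn

@sanity_check
def homepage_loads():
    return True  # placeholder: e.g. landing page returns HTTP 200

@sanity_check
def search_returns_results():
    return True  # placeholder: e.g. a known query yields at least one product

@sanity_check
def cart_accepts_item():
    return True  # placeholder: e.g. add-to-cart endpoint responds OK

def run_sanity():
    """Run every check; report GO if all pass, else NO-GO with the failures."""
    failures = [fn.__name__ for fn in SANITY_CHECKS if not fn()]
    return ("GO", []) if not failures else ("NO-GO", failures)

verdict, failed = run_sanity()
```

Because the verdict is a single GO/NO-GO value, this kind of suite slots naturally into a build pipeline gate, which is exactly the quick-indication role described above.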
3. Framework Building/Selection
Just as code developers use development systems and frameworks to provide programming tool sets and shortcuts, test automation has access to an array of test frameworks. Like their development counterparts, automation frameworks are well suited to some development and test environments and poorly suited to others. This element of test automation has become so contentious that many companies have chosen to create in-house frameworks directly suited to their own uses.
Two other aspects bear on selection or development of a test automation framework. One of these is mobile support. A significant portion of the customer base can be expected to access the site from mobile devices. This means that the user experience has to be verified by a test framework that supports the plethora of device resolutions and OS variations. This also bears heavily on the maintenance issue described above.
The other aspect is support for complex test setups. A website is typically a user-facing front end to a complex system of load-managed web servers, business processes implemented in middleware, and back-end functions such as databases, content management, and product/service delivery systems. While these should be tested independently, ideally all of them will also be tested from the user controls on the site. This means setting up database record contents, provisioning dummy credit card handling, and writing code stubs for services that can’t easily be stood up for tests. The selected test automation framework needs to support these complex setups and make them easy to reset and replicate.
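As a concrete illustration of such a setup, the sketch below seeds an in-memory database and swaps the live payment processor for an always-approve stub, so a checkout flow can be exercised end to end and then thrown away and rebuilt identically on the next run. The schema, the `StubPaymentGateway`, and the `checkout` helper are all hypothetical; SQLite stands in for whatever database the real system uses.

```python
# Sketch of a reproducible complex test setup: seeded database records
# plus a stubbed payment service, rebuilt from scratch for every run.
import sqlite3

class StubPaymentGateway:
    """Always-approve stand-in for the real card processor."""
    def charge(self, card_number, amount):
        return {"approved": True, "amount": amount}

def build_test_environment():
    # An in-memory database makes reset trivial: just rebuild it.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE products (sku TEXT, price REAL)")
    db.executemany("INSERT INTO products VALUES (?, ?)",
                   [("SKU-1", 19.99), ("SKU-2", 5.00)])
    return db, StubPaymentGateway()

def checkout(db, gateway, sku, card_number="4111111111111111"):
    """Hypothetical flow under test: price lookup followed by a charge."""
    (price,) = db.execute(
        "SELECT price FROM products WHERE sku = ?", (sku,)).fetchone()
    return gateway.charge(card_number, price)

db, gateway = build_test_environment()
result = checkout(db, gateway, "SKU-1")
```

The design point is that `build_test_environment()` is the single entry to the whole fixture: any framework that lets a test call one such function before each run satisfies the "easy to reset and replicate" requirement.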
What Shouldn’t Be Automated?
Test automation offers the seductive lure of writing a test script once and then running it again and again at the push of a button or, better yet, tying its execution to the output of a build management system. Out of this, a conventional wisdom has grown up around website test automation: more automation is always better. As attractive as this idea is, it is not necessarily true. Implementing test automation on sites with frequent and significant content changes can quickly reach a point of diminishing returns.
The cost of maintaining test scripts, and of dealing with the chaos created by false positives, quickly exceeds the cost of simply testing frequently changing code manually. The best ROI lies in finding the right balance between automated and manual testing, with both tailored to fit the development cycle needs of the organization.