
Why Test Automation Initiatives Fail: Advice from a 30-year-old Software Testing Company


Anyone who has been in the software testing field for more than a few years has seen a test automation effort fail. The automation effort starts out with great optimism that the selected automation tool will work wonders and in short order all the tests will run with the push of a button. However, the tools don’t quite work as advertised, the automation is a bit trickier than expected, and everything about automation takes longer than estimated.

A (Not So) Hypothetical Test Automation Failure

Automated tests are rolled into the release process once complete. These automated tests start generating false failures due to changes in the GUI. This causes some test automation team members to get diverted to help fight these false failure issues, slowing progress down further. Next, a few test automation team members quit and maintaining their code becomes quite difficult. Management starts to get disillusioned at the slow progress. Developers get tired of the false failures and want additional proof before they will fix bugs caught with automation. The whole thing spirals downward from here. At some point, perhaps because of a budget crunch or the loss of an automation champion in management, the plug is pulled on the automation effort.

Although this sad scenario is not uncommon, it doesn’t have to happen to you. QualityLogic has 30 years of experience helping customers successfully test their products, and we would like to share some of our test automation wisdom with you.

Why Companies Pursue Test Automation

There are many good reasons why companies want to start automated testing. There are potentially large cost savings over manual testing. Release cycles are getting shorter, so testing must be accelerated. Perhaps the most compelling reason is that test automation improves product quality by catching bugs earlier in the process: a set of manual tests may take a week to run, while the same tests, once automated, could run once or twice a day.

Types of Tests to Automate

As we’ve shared, automation testing is a critical component of a successful QA program. When we evaluate testing programs for automation, we typically consider the following types of tests for automation:

  • End-to-end testing: Where unit tests check individual pieces of code in isolation, end-to-end tests assemble the broader components and check that they work together across a complete workflow.
  • Acceptance testing: Automated acceptance tests focus on the software’s behavior, verifying that it meets the business requirements and behaves as the development teams expect.
  • Performance testing: Automated performance tests check to see if the software meets non-functional requirements, such as running continuously without crashing or handling multiple users simultaneously.
  • Stress/load testing: This type of testing involves automating the process of putting stress on the software to identify its breaking points. No one wants to successfully launch a product only to have it crash because everyone is using it.
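
As a minimal sketch of the stress/load case, the example below uses the open source Locust tool to simulate users hitting a web application; the paths, traffic mix, and host are hypothetical placeholders, not a recommendation for any particular product.

```python
# Minimal Locust sketch: simulated users repeatedly hit two pages of a
# hypothetical web application to find its breaking point under load.
from locust import HttpUser, task, between

class TypicalVisitor(HttpUser):
    # Each simulated user pauses 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def view_home(self):
        self.client.get("/")       # weighted 3x: most traffic hits the home page

    @task(1)
    def view_login(self):
        self.client.get("/login")  # hypothetical path, adjust for your application
```

Run it with `locust -f locustfile.py --host https://your-app.example.com` and ramp up the simulated user count until response times or error rates start to degrade.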

Types of Test Automation Tools and Their Effectiveness

There is a wide variety of test automation tools, with varying degrees of effectiveness and requiring different skills. Automation tools can be roughly categorized into the following general areas:

  • Dumb Record & Playback Test Tools – A brute force recording of user interactions with the application. Test scripts are very fragile and break with the slightest change in the application.
  • Smart Record & Playback Test Tools – More adaptive recording of user interactions with the application, storing multiple object identifiers and leveraging machine learning. Able to adapt in a limited fashion to application changes without breaking the test script.
  • AI Assisted Auto Discovery & Playback – Self-discovery of paths through the application using reinforcement learning, with the ability to playback any of the discovered paths. Able to adapt in a limited fashion to application changes without breaking the test script.
  • Abstract Syntax Test Tools – Use of natural language, keywords, or procedural text (think Cucumber/Gherkin) to define test cases, with the underlying automation code driven by the abstract test definitions. In some tools the automation code triggered by those definitions must be hand-coded; in others, helper routines handle the more common scenarios that can be inferred from the application objects.
  • Hand-Coded Test Development – Use of common programming languages to define automated test cases using an underlying automation API such as those supported by Selenium, Appium, or mobile device operating systems.
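
To make the hand-coded category concrete, here is a minimal sketch using Selenium’s Python bindings; the URL, element IDs, and expected greeting text are hypothetical placeholders.

```python
# Hand-coded Selenium sketch (Python): drive a hypothetical login page
# and verify the post-login greeting. URL and element IDs are illustrative only.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")          # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "submit").click()

    # Explicit wait: don't assert until the greeting element is actually visible.
    greeting = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "greeting"))
    )
    assert "Welcome" in greeting.text
finally:
    driver.quit()
```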

While there is a lot of excitement around AI-enabled testing tools, particularly those that can auto-generate test scripts, these tools work best with applications whose logic is relatively simple. In our experience, applications with more complex logic require hand-coded automated test cases to test that logic fully.

14 Test Automation Best Practice Guidelines from the Pros

The best practice recommendations below focus on the development of hand-coded automated functional tests used for new feature validation and regression testing. Functional test automation of applications with complex application logic is typical of the automation projects QualityLogic has done over the years.

Ensure Management Commitment

Test automation can generate a huge ROI, but it takes time. Management must be committed to the effort, and their expectations must be carefully calibrated. Automation is not a closed-ended project, but rather a fundamental change in how the testing aspect of software development is done.

Staff Skills Needed for Test Automation

Programming skills are needed for most test automation efforts. Senior test developers can code reference tests for various test classes, and more junior staff can then use those reference tests as a guide for derivative tests. Most organizations select a specific language for test case development, and it is prudent to have potential test developers demonstrate their skills in the selected language before being added to the team.

Industry-wide, Java and JavaScript are the most popular languages for test automation projects; however, QualityLogic’s customers have more frequently been using C# or Python. In theory, Python is a friendlier language for more junior developers, which may be a consideration when selecting a language.

Staff Experience   

Having staff on the test automation team who have “been there, done that” is huge. Many companies leverage outside software testing companies, like QualityLogic, to provide their internal teams with the necessary test automation expertise and experience.

Domain Knowledge for Test Automation Efforts

Developers can be more effective if they have hands-on knowledge of the product they are automating and a sound understanding of the domain in which the application is used. This knowledge is typically picked up on the fly during test automation development. Without this familiarity, test developers may do what you ask but not what you want, as they won’t “see” problems that are obvious to anyone familiar with the application or domain.

Staffing Your Test Automation Effort

The automation effort must be adequately staffed both for the active test automation development phase and for ongoing test maintenance. There should be enough resource redundancy such that the program doesn’t fail if a test developer leaves the team.

Deciding What Test Cases to Automate

Priority for test automation should go to tests you want to run every build, tests that need to run across multiple platforms, and tests that are time-consuming to perform manually. A good place to start for many organizations is automating the build release smoke test. Other criteria include how easy a test is to automate (a quick success builds momentum), whether its results are predictable (avoid automating test cases with unpredictable results), and how frequently the functionality it covers is used (automate the most-used functionality first).

Test Automation Tool Selection

The first-order selection decision is whether to go with a commercial tool offering or open source tools. Commercial test automation tools from big players like Tricentis are robust, but very expensive. Commercial tools tend to be relatively easy to use, simplify the test creation process, come with training and support, and tend to be less buggy than their open source counterparts.

Open source automation tools are free, have supportive user communities, and some, like Selenium, have become de facto standard test tools. Multiple open source tools may need to be used in concert for a given automation solution, and integration can be challenging. Most of QualityLogic’s test automation engagements use open source tools. Customer motivations vary, but in general they do not like the idea of being locked into a single vendor for a critical part of their development infrastructure.

Test Design

It is important to start with a good manual test case, then automate it where possible. Clear guidance on test intent, preconditions, user actions, and expected results is critical. Using an abstract syntax like Gherkin can help clarify test intent, but it does add another layer of abstraction to the test execution process.
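
As a sketch of how a manual test case’s intent, preconditions, actions, and expected results can carry over directly into automated code, the pytest example below tests a small stand-in class; the DiscountCart class and its 10% discount rule are invented purely for illustration.

```python
# Sketch: mapping a manual test case onto an automated one. DiscountCart is
# a hypothetical stand-in for the real application under test.
import pytest

class DiscountCart:
    """Stand-in for the application under test (illustration only)."""
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        subtotal = sum(self.items)
        # Assumed business rule for illustration: 10% off orders of $100 or more.
        return round(subtotal * 0.9, 2) if subtotal >= 100 else subtotal

def test_discount_applied_at_threshold():
    """Intent: verify the volume discount is applied exactly at the $100 threshold."""
    # Preconditions: an empty cart.
    cart = DiscountCart()
    # User actions: add items totaling exactly $100.
    cart.add(60)
    cart.add(40)
    # Expected result: the 10% discount brings the total to $90.
    assert cart.total() == pytest.approx(90.00)
```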

Development Priorities

Showing automation results quickly is a huge confidence builder for both the automation team and management. At QualityLogic, we use AI-based similarity analysis of manual test cases to identify opportunities to build libraries for common user interactions and to determine the order in which to code test cases so as to maximize development progress. The techniques can be grouped into the following approaches:

  • Isolation of globally common code test sequences whose functionality can be automated as part of a common code library.
  • Clustering similar test cases for assignment to the same programmer.
  • Predictive ordering of test cases for development to maximize code sharing between similar test cases.
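
Our similarity analysis itself is proprietary, but the general idea behind the approaches above can be sketched with off-the-shelf tools. The example below assumes manual test cases are available as plain text descriptions and uses TF-IDF vectors and k-means clustering from scikit-learn to group textually similar cases; it is an illustration, not our actual tooling.

```python
# Rough sketch (not QualityLogic's actual tooling): cluster manual test case
# descriptions by textual similarity so similar cases can share code and an owner.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

test_cases = [
    "Log in with valid credentials and verify the dashboard loads",
    "Log in with an expired password and verify the reset prompt",
    "Add an item to the cart and verify the cart total updates",
    "Remove an item from the cart and verify the cart total updates",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Print each cluster; cases that land together are good candidates for a
# shared helper library and assignment to the same test developer.
for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for case, label in zip(test_cases, labels):
        if label == cluster:
            print(f"  - {case}")
```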

Coding Best Practices for Test Automation

Organizations should develop coding best practices for test automation including templates that guide the test developers’ efforts. Key elements of an effective set of best practices include:

  • Test case naming
  • Object location strategies (use more than one)
  • Hard or soft asserts
  • Wait handling
  • Page object pattern usage
  • Data driven test inputs
  • Minimal dependencies with other test cases
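
A brief sketch combining several of these practices (page object pattern, explicit waits, and data-driven inputs) follows; the URL and locators are hypothetical placeholders.

```python
# Sketch combining page objects, explicit waits, and data-driven inputs.
# The URL and locators are hypothetical; adapt them to your application.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class SearchPage:
    """Page object: one class owns the locators and actions for one page."""
    URL = "https://app.example.com/search"            # hypothetical
    BOX = (By.NAME, "q")
    FIRST_RESULT = (By.CSS_SELECTOR, ".result:first-child")

    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(driver, 10)

    def open(self):
        self.driver.get(self.URL)
        return self

    def search(self, term):
        # Explicit wait instead of sleeps: proceed as soon as the box is visible.
        box = self.wait.until(EC.visibility_of_element_located(self.BOX))
        box.clear()
        box.send_keys(term)
        box.submit()
        return self.wait.until(
            EC.visibility_of_element_located(self.FIRST_RESULT)
        ).text

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

# Data-driven inputs: one test body, several input values.
@pytest.mark.parametrize("term", ["invoices", "user report"])
def test_search_returns_a_result(driver, term):
    result_text = SearchPage(driver).open().search(term)
    assert result_text  # expectation: some result text is present
```

Keeping locators and waits inside the page object means a GUI change is fixed in one place rather than in every test that touches that page.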

Automated Test Scope

Automated tests should focus on a clear and narrow objective, typically replicating a specific user action. Tests with a narrower scope are easier to code, easier to debug, and easier to maintain.

Source Code Control

Test automation is software development. Robust source code management, using a platform like GitHub, is a must.

Dealing with False Failures

Living with false failures is a fatal mistake for automation efforts. If a test is flaky, pull it out of the daily automation runs until it is fixed. If development is not going to fix certain bugs, pull the test cases that trigger those failures from the daily runs. You must preserve an environment in which an automation run failure is a red flag to everyone on the team; false failures, or failures that are ignored, poison that environment.
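
If your suite runs under pytest, one way to pull a flaky test out of the daily runs without losing it is a custom quarantine marker; the marker name below is our own illustration, not a pytest built-in.

```python
# Sketch: quarantine a flaky test so daily runs stay trustworthy.
# "quarantine" is a custom marker name; register it in pytest.ini, e.g.
#   markers = quarantine: excluded from daily runs pending a fix
import pytest

@pytest.mark.quarantine
def test_report_export_flaky_on_slow_ci():
    ...  # known-flaky test, kept out of the daily gate until stabilized

def test_login_stable():
    assert True  # stable tests continue to run every day
```

The daily job then runs `pytest -m "not quarantine"` so a red run always means something real, while a separate job can keep exercising the quarantined tests until they are fixed.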

Test Automation Framework

A test automation framework should be put in place. This is more than just the selected automation tool. It includes common resources and libraries that test cases leverage, integration with build processes, issue trackers, and other parts of the software development infrastructure. This allows the test developer to focus on coding test cases knowing the framework will take care of the execution and reporting details.

(By the way, if you’ve used Protractor, Google’s automation framework, check out our Protractor migration services. With the deprecation of the framework, using an alternative like Playwright or Cypress is critical.)

Partnering with QualityLogic for Your Test Automation Effort

Partnering with a software testing company like QualityLogic, with decades of experience automating software testing, can help ensure a successful outcome for your automation efforts. Whether we take on the whole automation effort or integrate into your existing test development teams, we are confident that our skilled and experienced test automation engineers can have a dramatic impact on your automation efforts.

Contact us today!

Author:

Gary James, President/CEO

Over the last 35+ years, Gary James, the co-founder and president/CEO of QualityLogic, has built QualityLogic into one of the leading providers of software testing, digital accessibility solutions, QA consulting and training, and smart energy testing.

Gary began his QA journey as an engineer in the 1970s, eventually becoming the director of QA in charge of systems and strategies across the international organization. That trajectory provided the deep knowledge of quality and process development that has served as a critical foundation for building QualityLogic into one of the best software testing companies in the world.

In addition to leading the company’s growth, Gary shares his extensive expertise in Quality Assurance, Software Testing, Digital Accessibility, and Leadership through various content channels. He regularly contributes to blogs, hosts webinars, writes articles, and more on the QualityLogic platforms.