
Maximizing Load and Performance Testing Efficiency in an Agile Environment


Under the Waterfall development methodology, it was a given that load and performance testing happened at the end of the development cycle. Both required the entire system to be integrated and its functionality verified in order to produce meaningful results.

When a feature responded too slowly, or the system degraded at well below its designed maximum user count, a test/fix cycle would begin and continue until operational performance met requirements. These activities were considered part and parcel of User Acceptance Testing (UAT). But that was then, and this is now: Agile has changed the game substantially.

Why Load and Performance Testing is Different in Agile


The reduction of the release cycle from months to a week or two has put intense pressure on teams to verify performance quickly. If performance testing is to happen at all, it needs a highly automated, fast-turnaround process that adds minimal drag to the release schedule. And note the qualifier in that sentence: "if it is to happen at all."

With the relentless delivery cadence of Agile, performance testing, and especially load testing, are often pushed right off the end of the schedule and don't happen at all. That is a prescription for disaster once real users arrive.

Just as automated and manual functional tests are created while those functions are being committed to code, load and performance tests must be produced in tandem with development.

Load and Performance Testing Problems in the Agile Environment

The load test suite is typically created early in the product's life cycle, as soon as its user interface is first operational. It then needs ongoing maintenance during Agile sprints, and special cases where the user interface cannot be used must also be supported. That applies in particular to exercising third-party APIs at the levels of expected use.
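Driving an API directly at its expected concurrency, with no UI in the loop, can be done with a small harness. The sketch below is illustrative: the `run_load_test` helper and the stand-in API call are assumptions for this example, not a specific tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def run_load_test(call_api, concurrent_users, calls_per_user):
    """Drive call_api at the expected concurrency and collect latencies."""
    def one_user(_):
        latencies = []
        for _ in range(calls_per_user):
            start = time.perf_counter()
            call_api()  # exercise the API directly; the UI is bypassed
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        per_user = pool.map(one_user, range(concurrent_users))
    all_latencies = [t for user in per_user for t in user]
    return {
        "calls": len(all_latencies),
        "median_s": median(all_latencies),
        "max_s": max(all_latencies),
    }

# Hypothetical stand-in for a real third-party API call.
report = run_load_test(lambda: time.sleep(0.001),
                       concurrent_users=10, calls_per_user=5)
print(report["calls"])  # 50
```

Because the harness takes any callable, the same sketch can point at a real endpoint or at a stubbed one during sprints when the real service is unavailable.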

Performance typically depends on the exchange of data between code modules of various descriptions. But to test it, all those interfaces have to be up and running, and a body of use-relevant test data has to be ready to exercise them. Once again, performance testing gets pushed back because the elements necessary to support it will not be available until the end of the development effort. This is especially true in the era of Software as a Service (SaaS).

The rapid spread of the SaaS concept has pushed the issue of unavailable system functions back into development as well.

One organization QualityLogic worked with estimated that some 40 percent of the development group's time was spent finding work-arounds for missing software services: building the code for one function depended on access to code still being built for several other functions.

Developers began writing what were called ‘stubs’ to provide fixed responses that somewhat simulated the performance of the missing service code. This worked well enough that it grew into the concept and reality of Service Virtualization (SV).
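A stub of this kind can be only a few dozen lines. The sketch below is a hypothetical example using only the Python standard library; it serves fixed JSON responses for an endpoint whose real implementation does not exist yet.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for a service still being built.
# The endpoint path and payload are hypothetical illustrations.
CANNED = {"/users/42": {"id": 42, "name": "Test User", "tier": "gold"}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not stubbed"}).encode())

    def log_message(self, *args):  # keep request logging quiet
        pass

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/users/42"
with urllib.request.urlopen(url) as resp:
    user = json.loads(resp.read())
print(user["name"])  # Test User
server.shutdown()
```

The calling code cannot tell the difference between this stub and the eventual real service, which is exactly what lets development proceed in parallel.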

Service Virtualization, Help or Diversion?

Service Virtualization is used to help enterprises reduce their code development inter-dependencies. Neither the testers nor the developers have time to create an endless array of micro-environment circumstances. To quote Steve Anderson at Clutch,

“You can instead create the service virtualization environments that are going to mimic the calls going back and forth. And you just treat them as another variation of systematic elements and micro-services.”

Agile teams can use Service Virtualization to leverage virtual services instead of pulling in resources from production that the test process itself could disrupt. This gives a boost to testing (especially load and performance testing) and to development when key components of the new system architecture are not yet available. SV emulates the behavior of code-based services that will eventually be present in the final production system, and as Agile requirements evolve, these virtual services can evolve with them.
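One way to picture what separates a virtual service from a plain stub: it pairs canned responses with configurable latency and failure rates, so load and performance tests see production-like behavior. The class and endpoint names below are hypothetical illustrations, not any specific SV product's API.

```python
import random
import time

class VirtualService:
    """In-process emulation of a dependency: canned data plus simulated
    latency and intermittent failures, so tests see production-like
    behavior without touching the real service."""

    def __init__(self, responses, latency_s=0.0, error_rate=0.0, seed=None):
        self.responses = responses
        self.latency_s = latency_s
        self.error_rate = error_rate
        self.rng = random.Random(seed)

    def call(self, endpoint):
        time.sleep(self.latency_s)               # mimic network/service delay
        if self.rng.random() < self.error_rate:  # mimic intermittent outages
            raise TimeoutError(f"simulated outage for {endpoint}")
        return self.responses[endpoint]

# As requirements evolve, only the canned data and the knobs change;
# the test code that calls the service stays the same.
inventory = VirtualService({"/stock/sku-1": {"sku": "sku-1", "qty": 7}},
                           latency_s=0.002, error_rate=0.0)
print(inventory.call("/stock/sku-1")["qty"])  # 7
```

Dialing `latency_s` and `error_rate` up lets a load test probe how the system under test copes with a slow or flaky dependency, something a fixed-response stub cannot express.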

A schedule-driven development manager might balk at diverting precious resources to creating SV assets, but that would be short-sighted.

SV service emulations can become an essential part of a reusable test and development infrastructure that greatly reduces time lost waiting for a service to be delivered by another team. Service Virtualization can make software assets available to everyone on-demand.

When It All Works

Service Virtualization can support robust application development by creating virtual SaaS assets that can be used to test system performance, load management, and operational functionality in a near-real-time scenario. Third-party APIs are a high-risk area for performance defects and load issues, which makes SV especially useful for identifying and characterizing defects at the API layer, where access to the actual service is spotty or non-existent. Leverage Service Virtualization to save time and cost and to reduce escaped issues in the released system.

Enterprises are using Service Oriented Architecture (SOA) and Software as a Service (SaaS) to create robust applications and shorter times-to-market. Service Virtualization can make these assets available anytime, anywhere.

Author:

Gary James, President/CEO

Over the last 35+ years, Gary James, the co-founder and president/CEO of QualityLogic, has built QualityLogic into one of the leading providers of software testing, digital accessibility solutions, QA consulting and training, and smart energy testing.

Gary began his QA journey as an engineer in the 1970s, eventually becoming the director of QA in charge of systems and strategies across the international organization. That trajectory provided the deep knowledge of quality and process development that has served as a critical foundation for building QualityLogic into one of the best software testing companies in the world.

In addition to leading the company’s growth, Gary shares his extensive expertise in Quality Assurance, Software Testing, Digital Accessibility, and Leadership through various content channels. He regularly contributes to blogs, hosts webinars, writes articles, and more on the QualityLogic platforms.