Performance Testing: 4 Powerful Insider Tips

The terms ‘Performance Testing’ and ‘Load Testing’ are commonly confused. A load test examines the overall system’s ability to service activity from large numbers of users, up to and exceeding usage expectations. Its purpose is to verify that, when the expected customer base begins logging onto the system en masse, the system will service their requests accurately and in a timely manner.

While this encompasses some aspects of performance verification, it is much more of an overview of how the system segments operate together.

What is Performance Testing?

Performance testing is aimed at optimizing the operation of the different aspects of the system so that, when they are all linked together, they offer the best user experience possible.

It would be extremely convenient if every aspect of computer software and hardware had performance standards for the various applications where they are used. This is seldom the case. More often the process of defining performance test cases requires a knowledgeable quality engineer who understands use cases and how to verify performance against them. It is critical that this engineer understand the different aspects of the application’s operations to set practical performance test parameters.

Performance Test Targets

Algorithmic code is composed of the calculations, business rules, and data-handling instructions defined in and used during the system’s operation. In addition to proper functionality and I/O operation, its execution speed affects overall system responsiveness. While adjacent modules will influence the performance of code by throttling its data flow, the code itself should be tested to verify that it has not become a bottleneck. The best place to do this is during module development, where functionality must be checked anyway and the code’s external dependencies can be stubbed out or ignored.
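As a minimal sketch of this idea, the fragment below times a purely algorithmic path with its external dependency stubbed out, so only the module’s own calculation is measured. The function names, the stubbed rate service, and the time budget are all illustrative assumptions, not part of any real API.

```python
import time
from unittest.mock import patch

# Hypothetical module under test: a price conversion that normally
# calls a slow external rate service. Names here are illustrative.
def fetch_rate(currency):
    time.sleep(0.5)          # stand-in for a slow external call
    return 1.1

def convert_prices(prices, currency):
    rate = fetch_rate(currency)
    return [round(p * rate, 2) for p in prices]

# Stub the external call so only the algorithmic code is timed.
with patch(f"{__name__}.fetch_rate", return_value=1.1):
    start = time.perf_counter()
    result = convert_prices(list(range(100_000)), "EUR")
    elapsed = time.perf_counter() - start

assert len(result) == 100_000
# Budget assertion: the pure calculation should finish well under the
# half second the real service call would add.
assert elapsed < 0.5, f"algorithmic path too slow: {elapsed:.3f}s"
```

With the stub in place, a failed time budget can only implicate the calculation itself, which is the point of isolating algorithmic code early.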

Hardware functioning has to support the code, and much of this depends on how well the code manages its use of the hardware. Any part of the system code that governs a hardware resource must be examined for performance issues. These can show up not only as problems within a single module, which are easy to detect, but also as general slowdowns in that hardware subsystem as the defective module delays other modules queued to use the same hardware element.
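The queuing effect described above can be sketched with a shared lock standing in for a hardware resource. This is a toy model under assumed timings, not a real driver: one module that holds the resource too long inflates the elapsed time of a perfectly healthy module waiting behind it.

```python
import threading
import time

# A shared "hardware" resource guarded by a lock. Hold times are
# illustrative; in a real system they would be I/O or device latency.
disk_lock = threading.Lock()
waits = []

def module(hold_time):
    start = time.perf_counter()
    with disk_lock:                 # queue for the hardware element
        time.sleep(hold_time)       # simulated use of the device
    waits.append(time.perf_counter() - start)

slow = threading.Thread(target=module, args=(0.2,))   # defective module
fast = threading.Thread(target=module, args=(0.01,))  # healthy module

slow.start()
time.sleep(0.05)        # the fast module arrives while slow holds the lock
fast.start()
slow.join()
fast.join()

assert len(waits) == 2
# The healthy module's elapsed time includes most of the defective
# module's hold time: a module problem surfacing as a subsystem problem.
assert waits[-1] > 0.1
```

This is why a hardware subsystem can look slow overall even when only one module is misbehaving: the queue spreads the delay to every waiter.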

Input/output to and from any part of the system, and especially between business functions, should be tested for performance slowdowns. Any observable point where data is transferred should be instrumented to verify information flow. If flow through the system varies widely depending on the functionality currently being performed, test it across as wide a range as possible, with an eye to subdividing the system for better efficiency.
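One lightweight way to instrument such a transfer point is a decorator that records throughput each time data crosses the boundary. The transfer function and its payload below are assumed for illustration; the technique is the general one of wrapping an observable point with a measurement.

```python
import functools
import time

def instrument_transfer(fn):
    """Wrap a data-transfer function to record throughput (bytes/sec)
    at this observation point. Assumes the function returns bytes."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        payload = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        wrapper.last_throughput = len(payload) / max(elapsed, 1e-9)
        return payload
    wrapper.last_throughput = None
    return wrapper

# Illustrative transfer point between two business functions.
@instrument_transfer
def export_batch(record_count):
    return b"x" * record_count      # stand-in for serialized records

data = export_batch(1_000_000)
assert len(data) == 1_000_000
assert export_batch.last_throughput > 0
```

Recording throughput at every such point makes it possible to compare flow rates across functional modes and spot which segment of the pipeline throttles the rest.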

What to Look for During a Performance Test

As with any technology, performance testing has a few non-obvious implications. Primary among these is the fact that module interconnection makes performance result troubleshooting a challenge.

1. Start Performance Testing Early

For this reason, performance tests should start during initial system creation, while the individual modules are being functionally verified. The modules themselves are set up in test jigs that substitute all relevant inputs with known, controllable values, so that the location of a problem that affects performance is easier to detect. While load tests are run on the completed, fully assembled system, performance tests should start well before release.
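A test jig in this sense can be as simple as replacing a module’s live inputs with fixed, scripted values. The sketch below is illustrative: a clock the module normally reads from its environment is swapped for one that replays known timestamps, so both the output and the elapsed time depend only on the module itself.

```python
import time

# Minimal "test jig": the module's external input (a clock) is replaced
# with known, controllable values. All names here are illustrative.
class FixedClock:
    def __init__(self, values):
        self._values = iter(values)
    def now(self):
        return next(self._values)

def sessionize(events, clock, gap=30):
    """Module under test: group events whose timestamps fall within
    `gap` seconds of the previous event."""
    sessions, current, last = [], [], None
    for e in events:
        t = clock.now()
        if last is not None and t - last > gap:
            sessions.append(current)
            current = []
        current.append(e)
        last = t
    if current:
        sessions.append(current)
    return sessions

clock = FixedClock([0, 10, 20, 100, 110])
start = time.perf_counter()
sessions = sessionize(["a", "b", "c", "d", "e"], clock)
elapsed = time.perf_counter() - start

assert sessions == [["a", "b", "c"], ["d", "e"]]
assert elapsed < 0.1    # any slowdown can only come from this module
```

Because every input is controlled, a functional failure or a timing regression here points directly at the module rather than at one of its neighbors.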

2. Monitor Tests Carefully

Monitor system CPU loads, memory access rates, storage access times and network latency during all performance tests. If the test requires parts of the system other than the one under test to be operational, it is vital that the hardware functions supporting code operation be watched carefully. An issue in the code module’s performance may lie in an apparently innocuous operation that it performs with a piece of hardware that, for some reason, isn’t ready for it.
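As a small illustration of watching more than one axis at once, the sketch below records both elapsed time and peak memory for an operation under test, using the standard library’s `tracemalloc`. CPU load, storage access times, and network latency would come from platform-level tools and are omitted here; the operation itself is an assumed example.

```python
import time
import tracemalloc

def build_index(n):
    """Illustrative operation under test."""
    return {i: str(i) for i in range(n)}

tracemalloc.start()
start = time.perf_counter()
index = build_index(200_000)
elapsed = time.perf_counter() - start
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

assert len(index) == 200_000
# Record both axes: a pass on elapsed time alone can hide a memory
# problem, and vice versa.
assert peak >= current >= 0
assert elapsed >= 0
```

Capturing these readings on every run builds the baseline against which an “apparently innocuous” hardware interaction can later be recognized as the real culprit.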

3. Leave Monitoring Software In Place

A great deal of performance testing is done via the use of purpose-written monitoring code. This software is created solely to report performance parameters from the module it is installed in and it is typically removed before system release. It is a good idea to leave monitoring software in place with soft switches that route around it. This way, changing the coded value of a logic flow parameter can enable the monitoring function for future use.
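The soft-switch idea can be sketched as a single flag that routes execution around the monitoring shim instead of removing it before release. The flag name, registry, and wrapped function below are illustrative assumptions; the point is that flipping one coded value re-enables the monitoring path in the field.

```python
import functools
import time

# Soft switch: one flag routes around the monitoring code rather than
# deleting it. Names here are illustrative.
MONITORING_ENABLED = False
TIMINGS = {}

def monitored(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if not MONITORING_ENABLED:      # routed around in production
            return fn(*args, **kwargs)
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TIMINGS.setdefault(fn.__name__, []).append(
                time.perf_counter() - start)
    return wrapper

@monitored
def handle_request(x):
    return x * 2

handle_request(1)               # switch off: nothing is recorded
assert TIMINGS == {}

MONITORING_ENABLED = True       # the soft switch is flipped
handle_request(2)
assert len(TIMINGS["handle_request"]) == 1
```

The cost of the disabled path is a single flag check per call, which is usually a fair price for being able to turn diagnostics back on after release.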

4. Graceful Recovery

Test performance against edge conditions as well as normal operation. Where functional parameters govern the operation of a module, test across their normal span and slightly outside it as well. There should be some out of bounds tolerance in the module and a graceful, recoverable failure when it is exceeded.
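A compact way to express that policy in a test is shown below. The module, its normal span of 0–100, and its tolerance of 5 are all assumed for illustration: values slightly outside the span are clamped, and values beyond the tolerance raise a clean, recoverable error rather than corrupting state.

```python
# Edge-condition sketch: a functional parameter with a normal span,
# a small out-of-bounds tolerance, and a graceful failure beyond it.
# Names and limits are illustrative.
class OutOfRangeError(ValueError):
    pass

def set_throttle(percent, tolerance=5):
    """Clamp slightly out-of-range values into the normal 0-100 span;
    fail gracefully past the tolerance."""
    if -tolerance <= percent <= 100 + tolerance:
        return min(max(percent, 0), 100)
    raise OutOfRangeError(f"throttle {percent} outside tolerated range")

# Normal span, slightly outside it, and well outside it:
assert set_throttle(50) == 50
assert set_throttle(103) == 100          # tolerated and clamped
try:
    set_throttle(150)
    recovered = False
except OutOfRangeError:
    recovered = True                     # graceful, recoverable failure
assert recovered
```

The last case is the one worth testing hardest: a module that fails loudly and recoverably when its tolerance is exceeded is far easier to diagnose than one that silently degrades.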

Performance Testing Can Be Tricky

Performance testing encompasses an array of subtleties that, if not carefully verified, can let small delays accumulate in one direction until they become a frustrated user and poor product or service reviews.

Need performance testing services?

Contact Us


Gary James, President/CEO

Over the last 35+ years, Gary James, the co-founder and president/CEO of QualityLogic, has built QualityLogic into one of the leading providers of software testing, digital accessibility solutions, QA consulting and training, and smart energy testing.

Gary began his QA journey as an engineer in the 1970s, eventually becoming the director of QA in charge of systems and strategies across the international organization. That trajectory provided the deep knowledge of quality and process development that has served as a critical foundation for building QualityLogic into one of the best software testing companies in the world.

In addition to leading the company’s growth, Gary shares his extensive expertise in Quality Assurance, Software Testing, Digital Accessibility, and Leadership through various content channels. He regularly contributes to blogs, hosts webinars, writes articles, and more on the QualityLogic platforms.