
Yes, Virginia, Software Fails – a journey of software failures and testing solutions

The power and fatal weakness of software is its ultimate malleability. It can be quickly and easily changed by drastic degrees. This one inescapable aspect would be good cause to abandon software entirely if it weren’t for its extreme power to make our machines do our bidding. In this day and age, software runs nearly everything of any importance in our world.

The elemental issue with software is its ephemeral quality. Physical products have a substance which can be examined by all our five senses to see that they meet the measure of their requirements and that the requirements themselves make sense. Software has no such constraints.

Software is composed of millions of lines of text that must be written with ultimate precision. The slightest coding mistake can have massive repercussions when the system is used. Back in the 1960s, a satellite launch was turned into a fiery explosion by the absence of a comma in a line of code.


Something so impactful must be carefully designed, but design work consumes time without yielding the appearance of project progress. Numerous ‘software failures’ can be laid at the doorstep of poor or insufficient design. So, given all this, software development should be a rigorous, careful, deliberate, tightly controlled process… right?

Producing Quality Software in an Agile World

The preceding sentence loosely describes what was called the waterfall software development method. Features were described in detail by marketing, turned into designs and code structures by engineering, verified by quality, fixed by engineering, and then released every couple of years. Literally, code releases came at intervals of 12 to 24 months and, even at this pace, defect-free code was a fond desire rather than a reality.

Today, the Agile methodology has won the hearts and minds of modern software development organizations. As commerce moved onto the Internet, web software became the gateway to sales of just about everything. At the very least, a current, eye-catching web site is required to announce the existence of the product or service to its prospective customer base.

This puts marketing on a bubble of making continuous changes to site code to show off new products and sales campaigns. This push to beat the competition to the web is what has propelled Agile into becoming the premier development method. Agile is predicated on releasing operational code every one to two weeks. Reducing the scope and quantity of these changes down to no more than four or five per release is supposed to allow for a series of small feature additions/bug fixes to be propelled into the functioning system code very quickly.

Sounds good, doesn’t it?

The reality is that the constant push toward what is called Continuous Release — releasing each change as it is made — has resulted in the abrogation of most of the controls on software development. The intrinsic re-usability of code has led to the creation of what are called frameworks, which are simply toolboxes of software modules that can be tied together to perform various common software processes. These frameworks are so useful that development organizations have standardized on them, and they have become commercially available development tools.

So, we have a process that focuses tightly on a few system aspects to release incremental changes rapidly using pre-fabricated code building blocks as much as possible. What could go wrong?

Sanity Checks for Quality Software

Design

It is not enough for the code to follow the design; the design itself must be carefully considered and explained to the programmers. It is vital for the product managers to attend the Agile planning meetings to make sure that both development and quality understand what the intention of the new feature is and, especially, how it should work for the user.

Schedule

Market windows may wait for no one, but there still has to be time allocated for the entire development process. Non-negotiable deadlines are the arch enemies of good design, thorough documentation, and careful verification. Being first to market with a product or service that doesn’t work is not a win.

Budget

Software development takes longer and costs more than anyone thinks it should. This should be a given in the planning of any software development project. Low cost estimates or quotes for any project activity or tool set should be instantly suspect. If the project budget is so tight that it has no flexibility, perhaps the project itself needs to be reconsidered.

Documentation

Unfortunately, documentation has become the bane of the Agile methodology. It is typically viewed as useless history, the creation of which consumes time and resources better used for development work. It is still a necessity to leave a trail of what was done and why in order to facilitate the next time that code has to be modified. In particular, defect reports have to be detailed and contain exact descriptions of how to replicate symptoms.

Management

Quality used to be the conduit of information flow between product management and code development. Agile has shifted this responsibility to product management. Those responsible for the final product have to stay on top of getting the design right, communicated to engineering, and properly understood by those testing the results. With Agile, the opportunity for corrective guidance is a very narrow window that management must seize and use to fullest effect.

Verification

A perception exists that QA is an expense rather than a revenue-generating investment. That may be the accounting take, but sacrificing testing to meet a market window is a prescription for disaster. With Agile, quality has truly become everyone’s responsibility. As such, it must be planned for in each sprint, and any test plan, automated test script, or sanity check must be updated in order to maintain its relevance. Development and management must become the direct allies of quality, working together to see that the release meets the design, performs the function fully, doesn’t damage other parts of the system, and passes user acceptance as a working addition to the system.

Yes, Virginia, Software Fails But All is Not Lost

Software is a mutable solution to a vast array of challenges. It is easy to create and easy to change. But because of its power, it requires attention to detail like nothing else in our world. Development organizations need to pay close attention to process as they plunge into the break-neck pace of Agile and Continuous Release.


Integration Testing – The Software Meets the Road

Where unit tests are deeply involved in the functionality of individual code modules, the integration test process is focused on the system as a whole and the complete combination of those modules. It is deeply concerned with how they communicate with each other. As its name implies, integration is also the first place where the system is tested as a functional entity. It is where the code has to live up to global operational requirements that are beyond the simple functionality contained in the module specs.

What Do Integration Tests Test?

Integration testing is the polar opposite of unit testing. Having said that, the two are still complementary, and the sins of one can sometimes be redeemed in the other. In addition to verifying module interactivity, integration testing looks at the system from the user interface to the back end’s data management functions. It is the first place where the tests begin to use the code as if it were actually the intended product.

An obvious question suggests itself: if unit testing is properly performed, why is integration testing necessary at all?


For starters, the system’s essential modules are typically written by a staff of programmers. Programmers are highly skilled and very creative and they won’t all read the design specs or approach coding features the same way. This can and will lead to disconnects and glitches when their modules must talk to each other. This combines with the fact that the specs themselves are likely to change while the modules are being written, making some aspects of what the programmers wrote in the first place unnecessary.

This is reason enough for integration testing but, as the television commercial exhorts, ‘Wait, there’s more!’ Unit tests handle inter-module communications using software stubs that simulate the rest of the system. These are make-do workarounds, and when the system’s modules actually have to communicate, data formatting, error trapping, hardware interfaces, and third-party service interfaces tend to have unforeseen issues that were not observable at the module level.
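To make the stub idea concrete, here is a minimal sketch in Python using the standard unittest.mock library; the order function and payment gateway are hypothetical stand-ins, not modules from any particular system.

```python
from unittest.mock import Mock

def place_order(cart_total, gateway):
    """Charge the cart total and report whether it was approved."""
    response = gateway.charge(amount=cart_total)
    return response["status"] == "approved"

def test_place_order_with_stubbed_gateway():
    # The stub simulates the rest of the system: no real payment
    # service is contacted, so the unit is tested in isolation.
    gateway = Mock()
    gateway.charge.return_value = {"status": "approved"}

    assert place_order(49.99, gateway) is True
    gateway.charge.assert_called_once_with(amount=49.99)
```

The stub works fine at the unit level, but it is exactly the kind of make-do workaround whose assumptions must later be checked against the real modules during integration.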

Integration testing isn’t the end of the verification process but it is a necessary step in preparing the system code for acceptance testing where it must actually perform its functions for real users and produce the desired and expected results.

Integration Test Insights

Integration test cases should focus on the data interfaces and flows between the code modules of the system. A typical case would test a data entry function in one module to see that it was correctly reflected in the data entered into a database record by another module. When successfully completed, the test has verified the process of accepting data, transmitting it and storing it across a string of code modules.
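As an illustration of that kind of test, here is a hedged Python sketch; the registration function, field names, and SQLite-backed store are invented for the example.

```python
# Illustrative integration test: data accepted by one module should
# arrive intact in the record stored by another.
import sqlite3

def save_user(conn, name, email):            # "back end" storage module
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
                 (name, email))

def register(conn, form):                    # "front end" entry module
    save_user(conn, form["name"].strip(), form["email"].lower())

def test_registration_reaches_database():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

    register(conn, {"name": " Ada Lovelace ", "email": "Ada@Example.com"})

    row = conn.execute("SELECT name, email FROM users").fetchone()
    # Verifies the whole chain: accepting, transmitting, storing.
    assert row == ("Ada Lovelace", "ada@example.com")
```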

While completion of the module unit tests should be the gating condition for testing the integration of those modules, unit tests will not catch all the functional issues of their modules. Some of these will only be apparent when the module is exercised in the context of the system itself and that means that the integration tester has to be on the lookout for functional issues. As stated above, integration and unit testing are complementary but not completely independent of each other.

Integration Testing for The Agile Era

Integration testing was once a relatively isolated province of the waterfall development model. The code for a release was assembled into a system and integration tests were performed to see that it worked well enough for verification tests against the marketing product specifications. That was then, this is now.

Agile development to support Continuous Release means that modules are plugged into the system and changed on an as-needed basis. This requires that integration testing be done incrementally and that it strike a balance between the changes to related modules. Often, drivers to call specific functionality and service stubs to simulate other parts of the system are necessary components of integration as well as module testing. Once again, these two parts of the software quality process wind up being more intertwined than their definitions suggest.

Integration Testing Planning and Approach

The planning and approach to integration testing are both crucial. Test plans must not only specify what is being tested, they must pointedly call out what is not tested. Listing test exclusions is vital to preparing the next phase, user acceptance testing, for where it needs to look most closely. The test environment has to be carefully constructed, with its facilities grown from the development system on which the code was written, and the stubs and drivers used earlier must be re-evaluated to verify that they stimulate the modules correctly.

Last but far from least, defect documentation is critical for integration. Defect reports have to fully explain the context in which an issue was seen, the steps to reproduce it, and the impact it has on system operation. These reports must go back to the programmers for bug fix work and forward to user acceptance to facilitate regression tests that make sure the fixes stay in place and work as expected.

Where the Software Meets the Road

Integration tests are designed to catch the defects that would only appear when the code is interacting with real world operating scenarios. They are critical to the success of your software and the success of your users!


7 Awesome Exploratory Testing Tools

Exploratory testing is defined as simultaneous learning, test design, and test execution; it is the heart of manual testing. Exploratory testing values the tester as an integral part of the test process, much as the Agile Manifesto states: “Individuals and interactions over processes and tools.”

In a previous post, we discussed how a tester can express their inner creative while ‘bug-hunting’, but we didn’t cover the many handy (read: awesome) tools that facilitate organizing, recording, and documenting during the hunt. Let’s do that now.

Exploratory Testing – a Recap

First, a recap of exploratory testing. Exploratory tests are a manual method of exploring an app without test scripts or predetermined input definitions. Instead of following the user ‘happy path’ typically covered in scripted testing, the tester imagines alternate paths where users do unexpected things. This could be repeated button-clicking, following a non-standard flow, or inputting non-spec data. The unexpected could also be less obvious error-inducing concerns, such as configuration requirements or structural module deficiencies.

During an exploratory test a tester needs to remember what they did to cause an error, and what the results of the error are. This remembering happens through detailed note taking. With proper documentation, exploratory test notes can become future test cases — a double win for both the tester’s personal experience and future iterations of the software. Also, testers need to think of ways to design tests to return actionable data, and to explore issues deeper. These two areas, documenting actions and devising tests, are where the awesome tools come in!

Awesome Tools for Exploratory Testing

Unfortunately, no one tool can be everything for the exploratory tester. But, given each tester’s individual workflow preferences, there are tools in several categories to consider. First, let’s look at three tools for devising tests. They are heuristics, personas, and mind maps:

1. Heuristics:

During exploratory testing, the tester designs and executes tests in real time. To do so, a tester must organize their inquiry to devise worthwhile tests. One way to devise tests is with heuristics and mnemonics.

A heuristic is a mental shortcut — a cognitive-load reducer — based on past experience; the term derives from a Greek word meaning ‘to find’. Heuristics are unconscious ways to quickly process information, also known as ‘rules of thumb’. For example, the idea of looking under a welcome mat to find a key is a heuristic. This is an example of a ‘representative heuristic’.

In contrast, a mnemonic is a word, rhyme, or other memory aid used to associate complex information with something that is easy to remember. For example, I might remember Rob by linking his name to Rob’s red hair. Then, I remember ‘Red Rob’ without a struggle, and can greet him by his name when I see his hair. Heuristics and mnemonics help us solve problems and find answers efficiently under cognitive load.

SFDPO, designed by James Bach of www.satisfice.com, is both a mnemonic and a heuristic for exploratory tests. It translates verbally to ‘San Francisco Depot’ and stands for Structure, Function, Data, Platform, and Operations. Each word represents a different aspect of the software to explore during testing. By thinking of software from these points of view, we can devise interesting and profitable tests.

Here’s a breakdown of SFDPO:

  • Structure (what the software is): What are its components?
  • Function (what the software does): What are its functions from both the business and user perspective?
  • Data (what the software processes): What input does it accept and what is the expected output? Is input sequence sensitive? Are there multiple modes-of-being?
  • Platform (what the software depends on): What operating systems does it run on? Does the environment specify a configuration to work? Does it depend on third-party components?
  • Operations (how the software will be used): Who will use it? Where and how will they use it? What will they use it for?

2. Personas:

Personas are a tool that helps the tester adopt the habits (and feelings) of different types of users. During exploratory testing, personas help us to discover different types of problems resulting from different types of user behavior. For a great overview of personas, and how they inform exploratory testing, check out this blog post by Katrina Clokie.

3. Mind Mapping:

Each tester will have their personal process for setting up an exploratory testing session. It can be helpful to start with a plan of what they’d like to achieve in the session or the most important areas or requisites to review and explore. A mind map can help visualize the potential starting points, especially when combined with the SFDPO heuristic. Mind maps also help a tester circle around after following a deviation. Two favorite mind mapping tools are Xmind and MindMup.

Tools for Documenting the Exploratory Processes

Now let’s look at ways to document exploratory processes.

4. Screen Recorders:

An exploratory tester tends to be passionate about testing. As such, it’s easy to become wrapped up in the excitement of creating an error. Remembering the steps taken to produce an error, while overwhelmed with the excitement of finding a bug, is vital to comprehensive test documentation. One of the best, and simplest, tools to facilitate documentation after the test is a screen recorder. By simply recording the test session, a tester has a complete history of every action taken. There are many screen recorders available, but if you’re a Chrome devotee like me, Screencastify is a free browser extension that offers an elegant UI and auto-saves to Drive. Awesome!

5. Sticky Notes:

Whether you prefer the stickies on your PC, or old-fashioned Post-Its, sticky notes are a great way to jot down ideas while testing. During exploratory testing it’s common for other ideas to pop up, like other test variations or different software aspects that need to be checked for similar issues. With an awesome note-taking setup, a tester can jot down these ideas to use later without interrupting their process.

6. Browser Dev Tools:

For web applications, the built-in browser tools are helpful. The moment something looks a bit odd, pop open the dev console to inspect an element, investigate a script flow, and so on. There are other browser tools, like Bug Magnet and Fiddler. Bug Magnet is an exploratory testing assistant for Chrome and Firefox that adds problematic values and edge cases to the context menu for editable elements, allowing easy access during exploratory testing sessions. Fiddler is another web session manipulator. Both are highly rated, but it’s up to the tester to decide which of the many browser tools available is the most useful.

7. Service Virtualization Tools:

Finally, the need to test early in the development cycle is a reality of the Agile process. So, how can a tester mock a service if the API isn’t complete yet? Service virtualization tools simulate backend and third-party systems, a practice common in software testing. Here’s a list of 10 recommended service virtualization tools from Guru99.
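For a sense of what service virtualization looks like in miniature, here is a sketch using only Python’s standard library; the ‘inventory’ endpoint and its payload are assumptions for illustration, not any real API.

```python
# Bare-bones service virtualization: a local HTTP server that returns
# a canned response for an API that isn't finished yet.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeInventoryAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always report the item as out of stock, so the tester can
        # explore how the app handles that condition.
        body = json.dumps({"sku": "ABC-123", "in_stock": 0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("localhost", 8025), FakeInventoryAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
# Point the app under test at http://localhost:8025 and explore its
# behavior when the "inventory service" says an item is unavailable.
```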

Tools Help Optimize Exploratory Testing Time

We can see there are as many awesome tools to use during exploratory testing as there are ways for a tester to develop their personal exploratory test process. The takeaway here is that creating profitable exploratory tests requires more than random button-clicking (although that happens sometimes). Exploratory testing is a creative process, yes, but a directed and informed one.

Exploratory testing asks the tester to use an underlying framework of organized inquiry to explore the software. To make the tests profitable they must be documented. These two concepts, mapping and recording, are the backbone of exploratory testing. Combine the backbone with the creative heart, and you’ll find exploratory testing a task to look forward to.


3 Tips to Ensure Unit Testing Success

The name, Unit Testing, accurately conveys the idea of testing something smaller than a working system. It refers to testing code modules that perform isolatable unique functions. In today’s programming parlance, these are typically called objects. They represent functionality that is specific enough to be named in a flow chart of what the system does and simple enough that all the inputs and expected outputs can be readily defined.

This ability to exactly define a code unit’s functionality is key. It enables what is known as ‘white box’ testing which presupposes that the tester has an intimate knowledge of what the code actually does. This knowledge supports the process of writing code to provide its expected inputs under each defined operational situation and verify that the expected output is generated.

Unit tests gate the process of integrating the code module into the overall system. A code block must pass its unit tests completely, without failures, to assure that it will contribute its functionality to the system without adding defects as well. Testing the integrated system should be focused on the interactions between modules, not the operation of the modules themselves.

Unit Testing: Developers Testing Their Own Code

A touchstone of software quality has long been the concept that a programmer should never test his or her own code. The intent behind this is simple. The person who wrote the code is the one most likely to have perceptual blind spots regarding its faults. While this idea is arguably correct, the need for an implicit understanding of the code module’s operation for unit testing overrides it.

As described above, unit testing is white box testing and is predicated on the tester knowing exactly what the code is supposed to do under all operating circumstances. The programmer who wrote it is the obvious choice for the task, blind spots or no. Unit testing requires this deep knowledge of the code and it must be performed at the stage of the production process where the code module is generated. This necessarily makes the code developer our unit tester.

Tips for Successful Unit Testing

Unit testing is where robust system design starts. Having the development engineer writing tests right alongside the code is a prompt for continuous consideration of how the code will be used and what it must do. Developers typically want to create system code rather than tests because that is what they got into the business of programming to do. An astute development manager will actively promote the idea that writing comprehensive unit tests is a basic part of writing good code. A few instances of what to look for in unit tests beyond direct functional verification are in order.

1. Error Trapping

Error trapping is the process of examining the input to a module to detect data that is erroneously formatted or is just simply wrong. It guards against errors generated in other modules and against active challenges to the system’s security. Unit tests should pit the module’s error trapping code against every permutation of bad input the programmer can devise given the inevitable development time constraints. It specifically must verify effective resistance against hacker exploits. Really good unit tests also verify that the module checks its output as well.
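Here is one way such tests might look, sketched in Python with pytest; parse_quantity and its rules are hypothetical examples, not code from any particular system.

```python
# Unit tests pitting a validation routine against the bad inputs a
# programmer can devise, including a hacker-style injection string.
import pytest

def parse_quantity(raw):
    """Accept a positive integer quantity string; reject everything else."""
    if not isinstance(raw, str) or not raw.strip().isdigit():
        raise ValueError(f"invalid quantity: {raw!r}")
    value = int(raw)
    if value == 0:
        raise ValueError("quantity must be positive")
    return value

@pytest.mark.parametrize("bad", ["", "  ", "-3", "2.5",
                                 "10; DROP TABLE", None])
def test_error_trapping_rejects_bad_input(bad):
    with pytest.raises(ValueError):
        parse_quantity(bad)
```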

2. Graceful Failure

Graceful failure means that, when the module is confronted by input that it can’t either correctly process or disregard, its operation fails with a clear notification to the system of what happened, and the module recovers to keep operating on the useful inputs. Testing failure modes goes hand in hand with the error trapping described above. It should be done in coordination with the design of the overall system so that the test verifies that the error notifications are what the system needs to initiate corrective action.
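A hedged sketch of what graceful failure can look like in practice follows; the batch processor and its error format are illustrative assumptions.

```python
# When one record in a batch is bad, the module reports it clearly
# and keeps processing the useful inputs.
def process_batch(records):
    results, errors = [], []
    for i, record in enumerate(records):
        try:
            results.append(record["price"] * record["qty"])
        except (KeyError, TypeError) as exc:
            # Clear notification of what failed and why, then recover.
            errors.append({"index": i, "error": repr(exc)})
    return results, errors

def test_bad_record_does_not_stop_the_batch():
    good = {"price": 2.0, "qty": 3}
    results, errors = process_batch([good, {"price": 2.0}, good])
    assert results == [6.0, 6.0]       # useful inputs still processed
    assert errors[0]["index"] == 1     # the failure is reported, not hidden
```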

3. Communication Interface Verification

Communication interfaces should be verified during unit testing as well as during integration testing. One of the most overlooked aspects of code modules is how they talk to the rest of the system. Typically, there are system specifications covering data and command communications between code modules that result in boiler-plate code snippets that are inserted into each module for that purpose. The assumption here is that this is sufficient to guarantee proper transfers. In reality, the mating of these routines to the module’s functional code must always be thoroughly verified to see that it performs as expected under all the conditions of the module’s usage.

Just Do It

Above all, unit testing has to be planned into the development process the same way writing code is planned. It needs to be managed, and its results continuously reviewed, to assure that the system design is a coherent representation of the specifications it was built from. Effective unit testing is one of the best ways to find defects as early and as inexpensively as possible.


What Can Risk-based Software Testing Do For Your Business?

Risk-based testing (RBT) is exactly what it sounds like, testing your software based on risk. We know from last week’s blog that circular definitions like this annoy me, so let’s break RBT down to some measurable components.

The first and most obvious question is: what is a ‘risk’? Once we clarify what a risk is in the software environment, we can discuss how to measure it and what value risk-based software testing offers your company’s QA efforts.

Jenny Bramble, risk-based testing guru and accomplished conference speaker, defines software risk as a combination of the impact of something happening (a feature failure) and the probability of that something happening. Here’s an example: you’ve developed an app that requires a login to access. If the login flow has the hiccups, the risk is that your user won’t log in, impacting user experience and business needs (like revenue generation). Yet the other half of the RBT equation asks: what’s the likelihood of this risk, the login issue, happening? Well, both impact and probability can feel subjective, so let’s back up a step…

From my brief example you can see that defining risk based on impact and probability requires a combination of intuition, experience, and data. Defining risk, and risk-based testing in software quality assurance, may be easier with some background on the concept. So, where did RBT originate?

The Evolution of Risk-Based Testing

RBT has its origins in Risk Management, one of 10 areas in which project managers should hold competency according to the 2018 International Organization for Standardization (ISO) guidelines. Risk management is useful across a broad range of industries, although methods, definitions and goals vary widely according to whether the risk management method is in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety. Despite industry differences, all risk management strives to identify, evaluate and prioritize risk to allow a coordinated and cost-effective use of resources in controlling or eliminating the risk.

Risk-based testing is then a project management mindset rooted in the drive to understand what risks are going to have the greatest impact on a project and what risks are most likely to happen, then creating a test plan around both. RBT is used in many industries, but it’s especially useful in software because, let’s face it, there’s simply no way to test every function or code aspect for every eventuality. Any experienced PM worth their salt knows a project will have limited resources with a finite amount of time, bodies, attention and expertise. In software QA, directing those resources where they will optimize ROI is the point of risk-based testing. As Bramble says: “Even if we had unlimited resources, testing everything is ridiculous, impossible, and a terrible use of resources.”

Benefits of Risk-Based Software Testing

In the software quality assurance realm, risk-based testing presents multiple benefits. RBT first asks a team to define terms. This creates the framework to talk about risk and communicate cross-functionally. When all stakeholders understand terms the same way, dialogue surrounding decisions and needed resources becomes much clearer and more concise (miscommunication is also a ‘risk’, right?).

Once everyone’s on the same page with terms, be they management, developers, or external users, it becomes easier to identify the risks each group perceives. This also creates engagement from all the stakeholders in a project. (and we know not having ‘buy-in’ at all levels is risky, too…)

Aligning stakeholders and creating shared terminology is just the first step. Remember that combination of intuition, experience, and data I touched briefly on? These aspects are at play in what I’ll call ‘RBT Phase 2’. In phase 2, defining and prioritizing risks is the goal. Here, the work of making a — very — educated guess about the probability of failure begins. A good place to start the process of identifying risk is by answering the following questions:

  • What features do users interact with the most?
  • How do the users interact with those features?
  • How likely is it the features will be used the ‘wrong’ way?
  • What part of the code is the most fragile?
  • Where have you seen failures in the past?
  • Are there outside influences that impact risk?
  • Overall, what is most likely to break?

You can see that answering these questions is one-part intuition, one-part experience, and one-part data. That’s a lot of parts for one person to offer input on! This is where phase 1 of RBT, defining terms and engaging stakeholders, pays off. It’s likely different stakeholder groups will have different answers to the above questions, generated through different experiences with either the process or product. Using these different perceptions during risk analysis will ensure that the dreaded ‘intangible risk’ — the risk never identified — doesn’t bomb your project. And, you’ll have project-peace-of-mind that those risks with the highest probability and the greatest impact have been eliminated.

Risk Analysis Tools

There are as many methodologies for defining risk, and creating an RBT strategy, as there are industries practicing risk management. A favorite in the SQA industry is the risk matrix. Creating a risk matrix is straightforward. First, determine a rating system. You could use colors or numbers, happy and sad faces, rainbows and lightning bolts. It doesn’t matter as long as you are consistent. Then, using that combination of gut-feeling, experience, and data, start at the feature level and assign each perceived case a rating.

From there you can complexify the process as much as needed to refine each risk to individual user stories, then develop test cases addressing each risk. The risk matrix can also be used to identify other risks like those arising from code fragility, integrations, and the external environment. If the matrix conveys the right information, it can be as complex or simple as you like.
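As a toy illustration of the idea, here is a Python sketch that rates hypothetical features on 1-5 impact and probability scales and ranks them by the product of the two; the features and numbers are invented.

```python
# Toy risk matrix: rate each feature's impact and probability, then
# rank by the product so testing effort goes to the top of the list.
features = {
    "login":          {"impact": 5, "probability": 4},
    "checkout":       {"impact": 5, "probability": 3},
    "profile avatar": {"impact": 1, "probability": 2},
    "search filters": {"impact": 3, "probability": 4},
}

def risk_score(ratings):
    return ratings["impact"] * ratings["probability"]

for name, ratings in sorted(features.items(),
                            key=lambda kv: -risk_score(kv[1])):
    print(f"{name:15} risk={risk_score(ratings):2}")
# Test the highest-scoring features first and deepest.
```

The rating scale itself doesn’t matter (colors, numbers, rainbows and lightning bolts); what matters is applying it consistently, which is all the ranking logic above relies on.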

Risk-based Software Testing is All About Prioritizing

Software, apps, and websites all have one thing in common: they’re notorious for breaking when that one user does that one thing. Code is a complex (and often elegant) environmental interplay of man and machine, developer and user, hardware and software, wherein testing for every eventuality is impossible. But you can focus your team’s efforts on high-risk, high-impact areas through risk-based testing.

RBT offers a framework to talk about what needs to be tested, and what needs to be prioritized. Then, your team can answer other important questions like: Where do we test to get the most bang for our buck? What bits of code need the most attention and what need the least? What tests are good candidates for automation? And what should we test during each stage of the SDLC?

In short, risk-based software testing provides your team with clarity and focus to derive a thoughtful testing program, leading to a better software product.

For more information on risk-based testing, follow this link.


Software Quality Metrics You Should Be Tracking

Businesses, especially software-centric businesses, have dramatically increased the velocity of their operations. Software development itself has gone from bi-annual version updates to weekly releases via the Agile methodology. Managing this eye-watering rate of change has brought every business function to the necessity of monitoring Key Performance Indicators (KPIs) to keep these processes under control.

The principal concept here is the idea of an indicator: a value that, in and of itself, may mean little but correlates directly with the operation of an essential business function. As the indicator value changes, it signals the stability and direction of movement of the process. In software development, these indicators are called metrics because they are used to ascribe a measurable value to a process aspect that is subjective enough to make a simple, direct numeric evaluation difficult.

Software Quality Metric Controversy

The controversial aspect of metrics is their empirical nature. The metric itself may measure something as apparently irrelevant as the number of lines of code in a module. As long as the module works, who cares how many lines of code it took to create it?

Yet, by tracking the ease of maintenance (hours to fix a bug), the incidence of code defects and their effects on the overall system, it is possible to directly see a correlation between how large a module is and how prone to defect issues it will be. This makes the number of lines of code per module a useful metric.

Software Quality Metrics You Should be Tracking

Their sheer usefulness has made metrics indispensable as both quality and business management tools. In the realm of software development, metrics tend to fall into one of two groups. Detailed code development metrics are direct measurements of the system code itself and provide valuable insights into the design practices of the development group. System metrics are more likely to be of interest to management as they are overall process indicators of how well the business is converting work hours into revenue generating products.

The following will address some important system metrics and leave detailed development metrics for another post.

1. Completed Stories

Agile sprints operate on the basis of ‘stories’ that are actually code modification descriptions. They are written up from the standpoint of the desired effect that the described change will have on system operation. This change is parsed into a code development effort and a QA effort to verify that the story’s intention was accomplished without any disruption to other system facilities. It is also given a ‘velocity’ rating of the necessary resources it will consume.

A vital measure of an Agile SCRUM’s effectiveness is its ability to estimate these required efforts and complete its self-assigned tasks. Stories completed as a percentage of stories attempted in each sprint is a primary metric for this measurement. Better than a simple velocity number, this will tell how well the sprint was organized in terms of taking on those stories that could be accomplished and were actually completed in the assigned time frame.

Management can use the completion percentage to detect bottlenecks in the overall mixture of the development and quality efforts. It points directly to the granularity of stories and the effectiveness of the group working on them.

2. Test Case Successful Completion Rate

Test completion steps back to a broader perspective than story completion in that it looks at the process of verifying the system’s changes and overall operation. Successful test completion means that the code changes are being verified and that the verification process itself is working to validate those changes.

Here management is looking to determine how well the system is faring over time. Are the feature additions/defect corrections proceeding quickly and are they working as desired? A warning sign of excessive sprint velocity is the reduction of the test completion rate and/or a rise in the number of test failures. Of particular concern is test case abandonment indicating that the quality process isn’t keeping up with development.

3. Escape Rate

Microsoft coined the term ‘escape rate’. It refers to the number of bugs that are found in each release after it goes to production. This is one of the most critical measurements of a software organization, as it speaks directly to the efficiency of both the development and quality efforts.

Development must be both rapid and methodical to keep up with Agile release rates. Quality has to stay on top of verifying every code addition and change to keep product and company market reputations intact. A rising escape rate is a blaring warning that one or both of these operations needs immediate management attention.
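Before moving on, here is a minimal sketch of how the three metrics above might be computed; the field names, sample numbers, and the expression of escape rate as a share of total defects are assumptions, since tracking systems report these differently.

```python
# Minimal sketch of per-release metrics from tracker-style counts.
def release_metrics(stories_attempted, stories_completed,
                    tests_run, tests_passed,
                    defects_found_in_qa, defects_found_in_production):
    total_defects = defects_found_in_qa + defects_found_in_production
    return {
        "story_completion_pct": 100 * stories_completed / stories_attempted,
        "test_pass_rate_pct":   100 * tests_passed / tests_run,
        # Escape rate: share of all known defects that slipped past QA.
        "escape_rate_pct":      100 * defects_found_in_production / total_defects,
    }

print(release_metrics(20, 17, 340, 331, 45, 5))
# {'story_completion_pct': 85.0, 'test_pass_rate_pct': 97.35...,
#  'escape_rate_pct': 10.0}
```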

4. User Reviews

User satisfaction ratings are right next to the escape rate in their importance to managing software development. Escaped defects are sometimes simply ignored by users because the value provided by the product outshines the detriment of having to work around a system flaw. By contrast, some of the most trivial defects will annoy the user base to the point of mass abandonment of the product in which they occur.

Management needs to carefully monitor user reviews for both their overall satisfaction levels and their specific complaints. User reviews are the most effective way of spotting a management-beloved ‘feature’ that users consider an annoyance or even a defect.

Align Quality Assurance Metrics with Business Goals

Measuring everything can become an obsession with a detrimental effect on both development and QA. Choose metrics by direct observation of their relevance to your business process. Choose at least four or five to guard against variations in their relevance to your operations. A metric that works only some of the time is worse than none at all. Above all, be transparent with the working staff about what is being measured and why. When they see the positive results gained from these insights they will begin working toward goals that increase what these metrics measure.


Failing at Requirements-Based Software Testing?

Often, requirements-based software testing is defined something like this: “a testing approach in which test cases, conditions and data are derived from requirements.” This sort of circular explanation is annoying — and unhelpful.

What are Software Requirements?

Google defines a requirement as “a thing that is needed or wanted”. With this definition in mind, we can apply the concept of ‘requirements testing’ to everyday life. Say you’re holding a dinner party and deciding what to serve. It’s likely you considered your guest list armed with the knowledge that Ann is gluten-free and Dan is vegan, but you’re not sure about Dan’s partner… you think he may keep kosher? You made menu decisions based on your guests’ requirements, and executed your cooking tasks accordingly.

We make decisions every day based on requirements, and software development is no different. Here, requirements are business deliverables: the ‘what’ that provides value when delivered by a product’s ‘how’. The software’s value, then, becomes the ‘requirements’, and we define tests designed to meet business and user needs by exercising specific software functions or probing nonfunctional attributes, such as reliability or usability. Properly articulated requirements are the starting point for everything: project scoping, cost estimation, scheduling, coding, testing, and release management. Starting a project without a proper analysis of requirements is a recipe for disaster — it’s like building a house without a blueprint.

Requirements-Based Testing

Poorly defined requirements can lead to developing the wrong features, or the right features the wrong way. There’s no denying it feels great to get that code flowing but taking the time to develop clearly articulated and specifically testable requirements prior to development will pay off.

Why? Studies have shown two things. First, most defects have their root cause in poorly defined requirements. Second, the cost of fixing an error is cheaper the earlier it is found. Defects found in the requirements phase via ambiguity reviews cost ~$25-45 to fix. However, if a defect discovered in integration or systems testing has its roots in poor requirements definition, the cost to fix jumps to ~$750-3,000 per defect.

A study by HP and IBM estimated the cost of fixing defects discovered in production to be an astounding $10,000+! Poorly defined requirements lead to a nightmare of re-do: of the requirements, the design, the code, the tests, the user documentation, and the training materials. All this re-do work can send a project over budget and over schedule. Taking the time to understand, define, refine, and document requirements throughout the software development life cycle pays off, in a big way.

Defining Requirements for Requirements-Based Software Testing

The distinction between ‘what’ and ‘how’ is often a source of push-back against requirements-based testing. Software design is ‘how’, but the requirements are the ‘what’. The software design must meet stakeholder requirements.

Here’s an example scenario:

Your team is tasked with building an eCommerce site. To be successful, the site has many jobs to do. From the business perspective, it needs to generate revenue through sales, secure user data, and allow monetary transactions while also updating inventory and coordinating shipping. From the user perspective, the platform should allow a user to create an account; to search for, find, and compare items; offer a shopping ‘cart’ in which to place their items; and end with a ‘checkout’ stage.

All these tasks should flow seamlessly and be intuitive. Sure, this is a simplified scenario, but you can see that by defining the ‘what’ (what the site should DO) you’ll get to the ‘how’ (how it should do it). This is exactly why time spent documenting and testing requirements isn’t paralysis by analysis, but a large measure of forward-thinking in the development process…

Testing Requirements: Best-Practice Tips

It might feel like paralysis by analysis to do the hard work of defining requirements, so let’s look at three easy maxims to remember when developing requirements.

1. Requirements Definition Begins with Clear Communication:

Poorly defined initial requirements, and lack of clarity and depth to the original specification document, are frequently the primary cause of requirements creep. Project Managers must be able to clearly communicate to developers the requirements of the software product being developed. To do this, clearly defined business objectives for the project need to be in place — from the conceptual phase through delivery.

2. Eliminate Ambiguity in Requirements Documentation:

Write requirements in a consistent style that allows all users to achieve the same understanding of the requirements. Documentation should be explicit and unambiguous. Each requirement should have a logical cause and effect detailing the expected outcome of a specific test action. Bonus tip: well written requirements can be reused in future projects!
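As an illustration, here is a hypothetical requirement written with a clear cause and effect, paired with a Python test derived directly from it; the requirement ID, login function, and messages are all invented.

```python
# REQ-017 (hypothetical): When a registered user submits a correct
# email/password pair, the system shall return the account dashboard;
# for any other pair it shall return the message "Invalid credentials".

def login(users, email, password):
    if users.get(email) == password:
        return {"page": "dashboard"}
    return {"error": "Invalid credentials"}

def test_req_017_cause_and_effect():
    users = {"ada@example.com": "s3cret"}
    # Cause: correct pair. Effect: dashboard.
    assert login(users, "ada@example.com", "s3cret") == {"page": "dashboard"}
    # Cause: wrong pair. Effect: the specified message.
    assert login(users, "ada@example.com", "wrong") == {"error": "Invalid credentials"}
```

Because the requirement states both cause and effect unambiguously, the test practically writes itself, and every reader arrives at the same understanding of what ‘pass’ means.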

3. Best Practice Requirements Design:

Each requirement should measure a specific ‘what’, should play well with other requirements, and (importantly) not contradict other requirements. Validate requirements against objectives. Without the guidance provided by tracing requirements back to business and project objectives, project teams often rely on their own judgment about whether to include individual requirements. These ‘out-of-scope’ requirements lead to time and budget overrun.

Requirements Based Software Testing – the Takeaways

Defining requirements begins with the first phase of software development where the largest portion of defects have their root cause and the correction of errors is the least costly.

Well defined and documented requirements can then be communicated clearly, concisely and consistently across the project team to develop requirements-based tests focused on meeting project objectives (and staying in scope), while crafting a product that delights your users.

Finally, if you find your project constantly struggling with creep or direction shift, it’s a sign that requirements weren’t developed fully or communicated clearly. Step back, define and refine your requirements using the guidelines above, and take that knowledge to your next project.

We can help with requirements based testing. Schedule a consultation here.


3 Ways to Mitigate the Risks of Automated Testing

Test automation holds the promise of enormous benefits for improving product and service quality. It applies computational power to what it does best, the repetitive exercise of software systems. And, like any other powerful tool, misusing it is fraught with risk.

Software development has progressed from the Waterfall method that allowed months and even years for creating, assembling and testing a software release to Agile and Continuous Integration that crank out releases in a week or two. Along the way, software quality assurance had to keep up or get run over. This has created a glut of software products that have been lightly touched by testing only to be slammed by customer reviews.

The rapid release of features and defect fixes in software products and services means that test verification has had to change dramatically to keep up.

Software Testing Processes of the Past

In the past, software testing was the province of a department staffed by manual testers whose job it was to take marketing’s feature lists and verify that they were faithfully replicated in the products. Some of these people were code engineers who understood the system well enough to perform white box testing on it and ferret out deeply buried bugs. Most were techs who could follow test plans and use the system user interface to exercise all that it was supposed to do.

Any test response that looked inappropriate was logged in a defect management system and broadcast to the rest of the organization in a periodic bug report. These reports would be discussed and turned into prioritized action items at meetings of engineering, marketing and quality management. Testing and verification moved in lock-step with a glacial development methodology.

Automated Testing Necessary in an Agile World

Now, quality has been moved onto the front lines. Testers and code developers are expected to work in tandem fixing bugs as quickly as they are found. The Agile SCRUM has stand-up meetings daily in which the ‘stories’ selected for the current two-week sprint are discussed in terms of the day’s progress in both code development and operational verification. Once a fix or feature is integrated into the system, it is expected to be verified and released in a day or two at the most.

This has made automated testing a necessity. It has also made it appear to be the cure-all for getting verification done at or beyond the speed of development. While it offers vast benefits for testing system functionality quickly and repetitively, it is not the cure-all for test development. Test automation needs to be a tool in quality’s kit rather than the default test mode.

Test Automation Gone Awry

As with product code development, the test process to be automated must first be fully understood. Attempting to automate an ill-defined array of tests results in chaotic test results that rapidly become meaningless. This commonly happens when an attempt is made to automate manual exploratory tests which are guided more by the experience of the tester than a fixed test case.

Test automation is a matter of taking tests that are well defined with clearly defined result requirements and making them machine-executable. Automation will not fix problems with misunderstood code functions and product features for which detailed, workable testing has yet to be developed.

Automated Test Script Maintenance is Key!

Automated test scripts can be easily generated by test frameworks that will record UI actions and replay them. Unfortunately, such scripts are brittle in the extreme and, if anything changes in the code, the script will yield nothing but false errors. Over-use of poorly designed or maintained test scripts eventually leads to the automation process being generally rejected as being more trouble than it’s worth. Automation requires the commitment of scarce code development talent to script maintenance and these people will be constantly pulled off to write revenue generating code instead of working on tests.

Both of the above are going to be aggravated by the notion that ‘if a little automation is good, a lot is better.’ Attempting to automate testing of system aspects that change either on a regular or erratic basis will cause the scripts to become either obsolete or perceived as code-talent sinkholes. While automation can do wonders with scripting some test processes, it is a disaster with others. It is not one-size-fits-all.
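One common defense against the UI-script brittleness described above is the page-object pattern, where locators live in a single class so a UI change means one fix rather than dozens. Here is a minimal Selenium sketch; the element IDs and URL are hypothetical.

```python
# Page-object sketch: all knowledge of the login page's structure is
# confined to one class, so tests survive cosmetic UI changes better.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    # If the UI changes, only these locators need updating.
    EMAIL = (By.ID, "email")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def login(self, email, password):
        self.driver.find_element(*self.EMAIL).send_keys(email)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

driver = webdriver.Chrome()
driver.get("https://example.test/login")
LoginPage(driver).login("qa@example.test", "secret")
```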

Navigating a Safer Path to Test Automation

What to do? Test automation is both valuable and necessary and will provide the boost that the QA process needs if a few simple guidelines are implemented.

1. Management Buy-in

Get management’s buy-in to test automation as a planned process effort with an adequate budget. Test automation is going to require script maintenance and it will be expensive. The payoff from a well-designed and maintained array of automation scripts is worth its price but worth nothing if it is abandoned because the budget dried up.

2. Choose the Right Automated Test Tools and Frameworks

Determine before starting that the tools and frameworks to be used are suitable to what is to be automated and to how that part of the system under test works. The framework on which test scripts are built must provide the flexibility and depth of functionality to stimulate the system under test and correctly record the results it is expected to see. And expect to spend money on staff training for the automation tools, as it will pay for itself many times over.

3. Carefully Determine What Tests to Automate

Lay out which tests are to be automated because they are repetitive and lend themselves to this type of testing. Avoid the idea that all manual testing can be replaced with automation; this is how automation projects turn into costly failures. Automating user interface tests means that the scripts must change each time the interface is changed, which will make the scripts as expensive as the UI code. At the opposite end, automating tests of API calls is an excellent use that puts one computer to work testing another.

A simple spectrum of automation viability is:

[Figure: Automated Testing ROI]
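To illustrate the high-ROI end of that spectrum, here is a hedged sketch of an automated API check using pytest conventions and the requests library; the endpoint and response fields are hypothetical.

```python
# API-level automation: stable contract, no fragile UI locators.
import requests

BASE_URL = "https://api.example.test"  # hypothetical service

def test_get_product_returns_price_and_stock():
    resp = requests.get(f"{BASE_URL}/products/123", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == 123
    assert body["price"] > 0
    assert "in_stock" in body
```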

Test automation is a powerful tool, but it won’t do everything. Plan it, manage it and fund it properly and it will be a major help in navigating the Agile development crush.


Fast Food and the Crowdsourced Software Testing Phenomenon

Crowdsourced usability testing has marketing appeal. They hook you with mentions of the big players… Adobe, Jet, Oracle, and more. On the surface it looks good. You simply sign up (and pay a fee), submit some questions, and your tests are magically sent to on-demand users who offer feedback. What could go wrong?

Crowdsourcing your software testing is like eating fast food.

Sure, it can taste good, but is it the best thing to put on your plate day after day?

Crowdsourced software testing promises user feedback right now, and real fast. But will it truly serve your best interests? You developed your software based on two things: a brilliant idea — and the desire to generate revenue. To ensure adoption — and revenue — you need test feedback that’s more like a four-course meal at a five-star restaurant! Can you trust random internet users to offer educated feedback, and can you trust yourself to create usability tests based on a deep dive into testing methodologies?

Let’s talk about this…

What is Usability Testing?

Let’s back up and define usability testing. The cut and dry take is this:

Usability testing is a method that measures an application’s ease-of-use from the end-user perspective. It is considered ‘non-functional’ testing because it examines how the app operates, rather than ‘functional’ aspects like how the app integrates with hardware. It is often performed during the system or acceptance testing stages. The goal of usability testing is to determine whether the user interface and aesthetics of an app meet the intended workflow for various processes, such as logging in. Usability testing is a great way to check that separate functions, and the system as a whole, are intuitive to use.

System testing is the holistic overview. It evaluates the complete system to ensure it meets specified business and user requirements. Yes, this is best done near the end of the development process. Acceptance testing is the last phase of functional testing and is used to assess the software as ‘ready for delivery’. You might know this as beta testing.

Both system and acceptance testing typically take place near the end of the development cycle. This timing seems to make crowdsourced usability testing a good fit, right? Not necessarily! There are many benefits to be had by wrapping user testing into earlier dev processes. Today’s software development life cycles (SDLC) are dominated by Agile and Continuous Development methods based on quick iterations and releases. As the pace of software development intensifies, there are many advantages to including usability testing in these ‘shift-left’ ideologies.

Issues with Crowdsourced Software Testing

Crowdsourcing testing is an after-the-fact approach to app development. The most heinous examples of crowdsourcing can be seen in the app market. We’ve all experienced downloading an app that sounded great, but in reality, was so buggy we uninstalled it after one or two attempts at use, right? Some developers think they can skip formal testing altogether by having actual users do testing with their live app! By skipping formal testing, they force their revenue-generating users to become their unwitting testers. This is not a happy place for a user to be! In this scenario sometimes the app survives, but most of the time it doesn’t.

Ok, so none of us wants to be the homegrown developer who releases untested apps to market. What then? The next level of testing comes from websites that claim to manage your testing for you. One popular website for crowdsourced testing proclaims “In just two hours, capture the critical human insights you need to confidently deliver what customers want and expect!”

This sort of marketing spiel makes me uncomfortable. I start asking myself: “Seriously? Results in 2 hours? Do these companies have people just sitting around waiting for me to decide to test my software? Who are these testers, anyway? What’s their experience with testing, and can they articulate feedback that means anything? Or, are they just clicking around trying to break my app or website?”

Then, a little bit of panic sets in… do I know how to design a user test? What sorts of questions should I ask, or user pain-points should I look for? If you look at the steps necessary to become a “tester” for these crowdsourced platforms, the marketing often reads something like “No special knowledge necessary!” But stop and ask yourself, “is this who I want testing my software?”

I don’t know about you, but if I went to the trouble of building software to enhance my business revenue, I want testing that’s thorough and professional — not rushed like a double-double with fries at the drive-through!

Benefits of Professional User Testing

Considering the potential pitfalls of crowdsourced user testing helps us understand the benefits of hiring a quality assurance company. Real QA experts know what the most common causes of app or website failures are. They are technology superusers and have strong backgrounds in testing methodology. They are NOT random internet users! And these are the folks I want testing my software.

At this point, you may be wondering why enterprise-scale companies like Adobe or Oracle would consent to have their logos used on crowdsourced testing websites. I know I was!

After following the proverbial rabbit down quite a few holes, reading whitepaper after whitepaper, here’s my conclusion. Large companies crowdsource their software testing for one purpose only: to get surface-level feedback after updates or new features are added. There’s also a caveat: they do this by having their very large, and very experienced, QA team create the tests they submit to the online crowdsourcing platform. And that’s the kicker. The usefulness of crowdsourced software testing is only as good as the tests that are designed. To design good tests, you need expertise in testing.

So yes, big names do occasionally use crowdsourcing to verify a small iteration. But they don’t use crowdsourcing as their primary means of testing end-to-end.

A final benefit of using an actual software quality assurance company under contract to provide testing is the ability to ‘wrap QA into the build’. Today’s crowdsourcing platforms often tout ‘testing at the speed of development’ as a benefit. But is crowdsourcing truly different from, or better than, managing your QA with a team dedicated to doing exactly that? Real quality assurance companies commit a team to working with your developers through the SDLC. The Agile and Continuous Delivery methodologies don’t change the fact that QA experts are…well…experts. And that expertise includes in-depth, practical, and specialized knowledge of ALL forms of testing, including User Experience and Usability testing.

So, what are you in the mood for now? The drive-through? Or, the 5-star restaurant?


Outsourced Software Testing: Friend or Foe?

The standard argument for outsourcing has been done to death. As in, “We all get it. ‘Nuff said.” The term itself originated in the 1980s to describe contracting with external entities to provide “an exchange of services, expertise, and payments”. Outsourcing, in its most basic interpretation, is a business maneuver to save money. By ‘saving money’ I’m referring to companies that outsource manufacturing by locating their facilities in areas with fewer restrictions or lower wages, with the goal of increasing revenue. This definition casts a dark shadow on any discussion of the idea, and unfortunately, such a limited understanding only serves to exacerbate the divide between in-house and external teams. Outsourced software testing is a means of expanding and enhancing a product offering, not a ‘make-it-cheaper’ ideology.

What if we understood outsourced software testing services as a method that facilitates an organization’s focus on core competencies, while fueling innovation through external input and mitigating shortages of skill or expertise in build-specific areas? Augmenting your software test team externally might fill a gap in your in-house expertise, or let you test a product on platforms it doesn’t make sense to purchase.

Outsourced software testing services aren’t a ‘replacement’ for your employees. Viewed this way, outsourcing can save your organization time and money by focusing SDLC activities where individual and team skill-sets are best utilized!

What If Your In-house QA Team Gives Outsourced QA the Evil-Eye?

Here’s the thing: if you are a project manager with a high-level view of business requirements, you are responsible not only for SDLC processes, but for the small details — like the build budget. You ask yourself questions like: “Does it make sense to invest in several mobile devices to use in testing…or could we confidently outsource this to a company with the hardware (and experience!) to complete the tests?” Further, say you decided your project budget allowed for the purchase of several different mobile devices. Now ask yourself: will we ever use this hardware again? (Think about the bride’s wedding dress…or the groom’s tux. Who spends that much money on a one-and-done? And don’t give me that argument about saving the attire for your kids. Fashion changes fast! Just like hardware does…)

Today’s business reasons for outsourcing QA go beyond the original ‘cost saving’ paradigm. Outsourced software testing allows smart companies to provide stronger releases at a quicker pace, and your employees need to know this! In-house teams are vested in both their work and their jobs. It’s safe to assume they’d like to do right by their employer and keep their paychecks rolling in! A gap in understanding the business strategy can translate into your tribe regarding outsourced QA as a threat. Here’s where you, as a manager, can help alleviate employee concern about outsourcing.

Outsourced QA Tip #1 – Communicate

Internally

Outsourcing can confuse employees who don’t understand why you’re doing it, and it can add challenges to the daily workflow. Confusion (or resentment) creates a less-than-positive work culture, which can severely impact productivity. The best way to deal with this is to prevent it from happening. How?

Provide transparent communication to your team about the reasons for outsourcing. Frame outsourcing as a strategic business decision meant to achieve objectives through targeted use of outside resources with appropriate skill sets. Position outside resources as partners (NOT competitors!) and an extension of the existing team. Highlight your internal team’s areas of expertise and remind them that outsourcing will allow them to focus on what they do best!

Externally

To facilitate partnership, it’s best to establish workflow and communication protocols early in the collaboration. It’s true that outsourcing often brings workflow challenges, and the area most likely to be a workflow pain-point is cross-team communication. Clumsy communication protocols can hinder progress, create redundancy of effort, and lead to lack of clarity. These three issues have the potential to negate each and every expected benefit of outsourcing. Create protocols for information sharing, define workflows (who does what), and build a shared strategy, then lay it all out on the table. Communicate (and enforce!) these details to both teams with honesty and confidence.

Outsourced QA Tip #2 – Scrum!

I’ve worked with teams as both the outside resource and the internal team member. I’ve seen Slack channels get garbled and email threads get lost. I’ve felt the frustration of seeking clarification through multiple points of contact and getting different answers. I’ve had to re-do days’ worth of work because an ‘inside’ contact went on vacation and didn’t leave process updates for the external team. (Yes, really — and oh so frustrating!)

From this experience, the biggest piece of advice I have is:

A daily Scrum via Hangouts or Skype is the key to cross-team communication!

Yeah, I said it. But seriously, dear reader, if you find yourself managing a project with outsourced talent, please schedule daily or weekly video conferences. Just like an internal Scrum, an external video conference strips away the communication clutter. A 15-minute chat resolves issues far quicker (and with less misunderstanding) than a 2-hour Slack conversation, and putting voices and faces to names will help integrate both teams. It’s much harder to feel threatened by someone when they’re no longer a faceless name in an email thread.

Final Thoughts on Outsourced Software Testing

One area with a large impact on communication and workflow is whether you’ve outsourced onshore or offshore. Offshore software QA will complicate communications in additional ways, including differences in time zone and culture. If your partner is on the other side of the globe, Scrums become much harder to coordinate. How will you address this?

Cultural differences can be unexamined barriers to progress, and bridging them often asks employees to address unconscious biases or modify ingrained behavior. You may find Professor Hofstede’s research into the 6 Dimensions of National Culture a useful tool when prepping for offshore collaboration.

To conclude, the most helpful thing management can do to remedy both internal and external concerns in outsourced QA is communicate. Yes, really. My suggestion might feel like something out of ‘one of those’ magazines, but maybe they’re not off the mark. Communication is a vital part of any relationship and nowhere more so than in outsourced software testing. Build your business relationships on a solid foundation of targeted communication protocols and you’ll realize the power of outsourcing!

Interested in Outsourced Software Test Services? Contact Us Today to Alleviate Your QA Headaches.


Is Scriptless Test Automation All It’s Cracked Up to Be?

The Agile software development methodology has compressed the traditional months-long software development cycle into days. To support that kind of release speed requires QA testing to be performed concurrently with development, often with test planning done even before the first line of system code is written. This headlong plunge into continuous code release has not removed the need for testing. It has made software QA an absolute imperative.

Test automation has evolved into two basic approaches. In one, the QA engineer (all but indistinguishable from a code engineer) writes test scripts that are programs in themselves and must maintain them accordingly. In the other, test scripts are created via the test framework itself, either by recording user activity or by using keyword designs that form a meta-test language, allowing the script structure to be separated from its content. This second approach goes by the misleading term scriptless test automation.
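To make the distinction concrete, here is a minimal sketch of the first kind (a hand-written, hand-maintained test script) using Python and the Selenium WebDriver library; the URL and element IDs are hypothetical stand-ins for a real application:

```python
# A hand-coded UI test: the QA engineer writes and maintains this as a program.
# Assumes the Selenium WebDriver package and a Chrome driver are installed;
# the URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_succeeds():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")                     # hypothetical login page
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("s3cret!")
        driver.find_element(By.ID, "submit").click()
        # Any change to the page's markup means updating these locators by hand.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```

Every locator and assertion above is code that must track the application as it changes, which is exactly the maintenance burden discussed below.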

The Evolution of Scriptless Test Automation

The pursuit of faster and faster release turnover has driven a need for testing processes that can keep up, and test automation appears, at first glance, to be a perfect fit. Automation is viewed as offering the ability to have the computer test its own software via test scripts that can be continuously reused on command. As good as that sounds, two issues have arisen with this perspective.

Test automation scripts have to be written, and once written, they have to be maintained, because they must change with modifications to the code they test. Worst of all, these creation and upkeep processes tend to consume the very talent desperately needed to write system code in the first place.

The Robots Are Coming!

Scriptless automation is an attempt to render the script creation and maintenance processes as painless as possible. In one of its incarnations, scriptless automation is performed by having the automation framework watch the system’s operation and record functional actions to be replayed at the user interface. Such tools are commonly used to reduce the size of the manual test force typically charged with operating all the system controls according to a test plan. Simply take each test, record its operation once, then replay it as needed.

The second version goes beyond the first in that it allows the test designer to frame the test process as a series of keywords and write scripts as near-English sentences. These keywords are associated with test code fragments that the test framework links together and executes by following their use in the higher-level scripts. A series of test scripts can be described in a comma-separated values (CSV) spreadsheet and executed directly from it.
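As a rough illustration of how a keyword framework might link those pieces together, here is a minimal sketch in Python. The keywords, actions, and script contents are all hypothetical; a commercial framework does far more:

```python
# Minimal keyword-driven runner: each row of the script is "keyword,argument",
# and each keyword is bound to a code fragment the runner executes in order.
# All keywords, actions, and script contents here are hypothetical.
import csv
import io

def open_page(url):
    print(f"opening {url}")            # a real fragment would drive a browser

def click(element_name):
    print(f"clicking {element_name}")

def verify_text(expected):
    print(f"verifying the page shows '{expected}'")

KEYWORDS = {"open_page": open_page, "click": click, "verify_text": verify_text}

# In practice these rows would live in a CSV file maintained by the test designer.
script = io.StringIO(
    "open_page,https://example.com/login\n"
    "click,submit\n"
    "verify_text,Welcome\n"
)

for keyword, argument in csv.reader(script):
    KEYWORDS[keyword](argument)        # resolve the keyword and run its fragment
```

Because every script that uses a keyword resolves to the same underlying code fragment, updating that one fragment propagates the change across all the scripts that reference it.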

Why Use Scriptless Test Automation?

Scriptless test automation is commonly promoted as QA’s future, especially by the companies that make automation test tools. Like all powerful tools, it has its strengths and weaknesses.

Its single greatest draw is avoiding having to hire QA engineers who have the programming skills to code and maintain the test scripts. This goes beyond simply saving on personnel costs. There is great value in being able to push the design of system tests back up the chain into the hands of the people who have specified the parameters of the product in the first place.

The keyword scripting process allows system tests to be generated by those who know exactly what the system is supposed to do. It also facilitates test design before code is written, because it specifies the structure rather than the exact content of the test scripts.

For organizations that are using Design To Test (DTT) methodologies, this means that the test scripts can become the design documents for the system itself. The system code will be written to perform the desired functions because it will be written to pass tests that explicitly describe those functions. As the system code evolves, the test code blocks that the keywords address will evolve with it and propagate those changes across the scripts as they are integrated into the system.

The Pitfalls of Scriptless Automation

This works extremely well for high-level tests, but what happens when the test inevitably must get into the guts of the system software to exercise critical functions? Here, the maintenance aspect rears its head again. Tests of middleware business functions can be written with high-level keywords, but API tests can be much more difficult, especially when negative testing is required to exercise error trapping.
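For instance, a negative API test has to pin down exact error behavior, which is awkward to express in generic keywords. Here is a minimal sketch using Python’s requests library; the endpoint, payload, and error format are hypothetical:

```python
# Negative API test: deliberately send invalid input and assert that the
# error trap fires. The endpoint, payload, and error format are hypothetical.
import requests

def test_create_user_rejects_missing_email():
    response = requests.post(
        "https://api.example.com/users",   # hypothetical endpoint
        json={"name": "Ada"},              # the required 'email' field is omitted
        timeout=10,
    )
    assert response.status_code == 400                  # must be rejected, not accepted
    assert "email" in response.json()["error"].lower()  # error must name the bad field
```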

When the function to be tested gets down to the most granular level of the system code, the need to write very specific test code begins to obviate the value of keyword scripting and negates the use of recording altogether. Scriptless automation has a lot going for it, but it does have its limits.


How to Write a Software Test Case Like a Pro

Writing software test cases isn’t something to take lightly. Aside from coding and testing your app, writing test cases is the third ingredient to consider in the ‘successful release’ triumvirate. Does this sound like a lot of pressure to place on a task often regarded as mere ‘writing’? It should — because it is. Poorly written test cases directly impact testing, and therefore revenue. This holds true whether we’re discussing a customer-facing or internal platform. Either type of software will suffer if test cases are designed by someone who doesn’t understand the value test cases bring to the testing process. In short, writing test cases is an adjunct to stellar development, not an afterthought.

What is a Software Test Case?

The surface-level definition of a test case is quite simple. A software test case defines a set of conditions under which the software is checked to ensure requirements are satisfied and that it functions as specified. A test case is a set of instructions covering a single instance of how and what to test. Test cases include a title, a description, exact test steps, expected results, and observed results. Straightforward, right? Well, yes — and no. Designing test cases like a pro is more than simply writing “Check compatibility with all browsers”!
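To make that list concrete, here’s a hypothetical test case for a login screen; every name and value below is illustrative:

  • Title: Verify login with valid credentials
  • Description: Confirm that a registered user can log in from the login page. Precondition: the account “testuser” already exists.
  • Test steps: 1. Navigate to the login page. 2. Enter “testuser” and its password. 3. Click “Log In”.
  • Expected result: The user lands on the dashboard and sees a welcome message.
  • Observed result: recorded by the tester at execution time.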

Who Writes Software Test Cases?

Other than unit tests, test cases should be written by the quality assurance (QA) team. While quality is everyone’s job, it is specifically QA’s job to manage the quality process from idea to release. Pro-level test case design asks the writer to take a deep dive into the application under test (AUT) from both the business and user perspectives. This sort of deep dive requires extremely fine-grained attention to detail. Test case writers ask themselves: What is the test goal? To validate what the software is supposed to do? To validate what it’s not supposed to do? To find out if the software breaks at a certain point?

To adequately test more than the ‘happy-path’ imagined by the developers, test cases must be designed to reflect the way non-developers interact with digital offerings.

Still, simply creating test scenarios with an in-depth understanding of the AUT and its users isn’t enough. The pro test case writer will acknowledge their audience, purpose, and goal — and compose accordingly.

One method of translating test requirements into accessible ‘everyday speak’ for testers is embodied by the ‘plain-language’ movement. Plain language is writing designed to be read, understood, and used as quickly, easily, and completely as possible. It avoids verbose or convoluted phrasing and technical jargon. This means test case writers shouldn’t feel trapped by the linguistic expectations of the developer discourse community, but also shouldn’t feel compelled to ‘dumb down’ their writing. After all, if our tester gets lost in the jargon, what good is the test case? Here, the optimal test case writer is the QA team’s technical writer who understands the AUT, the development process, and the audience.

The Value of Test Case Design

Test cases add downstream value by enabling anyone to go in and retest using the written case. Well-written test cases can mean the difference between making or breaking functionality and user satisfaction when releasing updates or adding new features. Test cases are powerful artifacts for the future, acting as a repository of information about the way a system or feature works.

Summing up the value of test cases, we see the following:

  • Well-designed test cases ensure comprehensive test coverage by addressing all aspects of functionality, including business requirements, user needs, hardware variances, and out-of-spec user actions.
  • Properly documented test cases are reusable, allowing anyone to reference them and execute the test.
  • Test case documentation acts as a knowledge base for platform details.

Now that we’ve looked at the ways well-written test cases impact software development, and the skill-sets a test case writer should hold, let’s explore a few high-level steps you can take to ensure your team writes quality test cases.

6 Tips for Writing Software Test Cases

Use a Strong Title
  • A good test case starts with a strong title. Best practice is to name the test case after the module you’re testing. Example: when testing browser compatibility, a title might read “Steps to test application-to-browser compatibility”. Titling test cases with relevant keywords also allows for quick search referencing in test case databases.
Include a Detailed Test Description
  • The description should tell the tester what they’re going to test and include any other pertinent information, such as the test environment, test data, and preconditions that must be met before the test is executed (for example, a specific OS or browser version being loaded).
Include Assumptions
  • Include assumptions that apply to the test. This might include where the user starts the test (order page? search page?).
Create Clear and Concise Test Steps
  • Test steps should include the data and information needed to execute the test. Nothing less, and nothing more. Don’t leave out necessary details, but keep them clear and concise (see: plain language!)
Include the Expected Result
  • This tells the tester what they should experience as a result of the test steps and determines if the test case is a “pass” or “fail”.
Make it Reusable
  • The pro level test case is reusable and provides long-term value to both the software development and test team. When writing a test case, keep this in mind. Save time — and money — in future iterations by re-using the test case instead of rewriting it.

Final Thoughts on Designing Test Cases

Anticipating user needs, and integrating those needs with business requirements, is crucial to finessing your app before deployment. This is a job for a professional! Don’t relegate test case writing to an afterthought or view it as ‘just another administrative function’. Test case design has the potential to impact development; whether that impact is positive or negative depends on the skill of the test case writer. Let your QA team handle test case development, and rest assured the right people are doing the ‘right things’. Oh, and don’t forget to use plain language! 🙂

Need help designing test cases? We can help!