10 Questions to Ask Before Writing a Single Test

The heart of good QA is a combination of certain character traits and good documentation. We’re focusing on the latter in this article.

Note that most of these questions will be pertinent regardless of how you run your Software Development Life Cycle (SDLC), even if you are now exploring how AI might play a part in your QA efforts.

Who will consume the test cases?

Well, it will be QA test team members… right?

Not all the time. Sometimes program managers or developers will want to read them to ensure coverage is met. Sometimes you might be building a User Acceptance Test that will be performed by business analysts or subject matter experts. You might be building cases to be executed manually, yet they end up being automated in the future. What if plans or people change, and suddenly the people or systems executing the tests lack institutional knowledge?

You have to plan around these variables to ensure the cases you produce will serve the organization both now and in the future.

TDD / BDD, or no?

You hear less about this now than you used to, and I have a gut instinct about why. Test Driven Development and Behavior Driven Development (TDD / BDD) used to have a lot of momentum, but I think they ran into organizational inertia: both approaches need organizational “buy-in” to be adopted throughout the SDLC.

Let’s say we’re talking Gherkin-based test cases. Both the test team (or BA) and the developers have to be comfortable working with the syntax. When done right it can be a fantastic way to keep everyone on the same page, but if any group is recalcitrant about embracing it, the organization will drift away from enforcing it. If someone in the organization wants this, ensure there is buy-in from all stakeholders before trying to walk this path!
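For readers who haven’t seen it, a Gherkin scenario reads as Given/When/Then steps. As a rough illustration only (not a real BDD framework), here is a hypothetical login scenario expressed as plain Python step functions; the feature, names, and data are all invented:

```python
# Scenario: a registered user can log in
#   Given a registered user
#   When the user logs in with the correct password
#   Then access is granted

def given_a_registered_user():
    # Set up the precondition: a user record the system knows about.
    return {"username": "alice", "password": "s3cret", "active": True}

def when_the_user_logs_in(user, password):
    # Exercise the behavior under test: check the credentials.
    return user["active"] and user["password"] == password

def then_access_is_granted(result):
    assert result, "Expected the login to succeed"

user = given_a_registered_user()
result = when_the_user_logs_in(user, "s3cret")
then_access_is_granted(result)
print("scenario passed")
```

The value of the syntax is that the plain-language lines at the top are readable by BAs and stakeholders, while each one maps to a concrete step the team can implement.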

Where will the tests live?

There are choices here. This ties into another question below, but ultimately the organization has to decide where tests will be stored. Whether that is a spreadsheet, a dedicated QA test case manager, or a project management ecosystem via a plugin, everyone who needs to see the cases must be able to access them. There also needs to be consideration of format.

A lot of efforts start in a spreadsheet and then evolve into some kind of software-based test management system. Some of these systems add complexity when it comes to importing tests, so ideally the format and storage mechanism will be determined before cases are created.
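One low-friction hedge is to keep cases in a neutral tabular format from day one, so a later import into a management tool is mechanical. A minimal sketch in Python, with invented column names (match them to your target tool’s import template):

```python
import csv
import io

# Hypothetical test cases kept as plain records; the IDs, titles, and
# column names are invented examples, not a specific tool's schema.
cases = [
    {"id": "TC-001", "title": "Valid login",
     "steps": "Enter credentials; submit", "expected": "Dashboard loads"},
    {"id": "TC-002", "title": "Invalid login",
     "steps": "Enter bad password; submit", "expected": "Error shown"},
]

# Write the cases as CSV, the lowest common denominator for imports.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "title", "steps", "expected"])
writer.writeheader()
writer.writerows(cases)
print(buffer.getvalue().strip())
```

Keeping the data structured like this, rather than free-form prose in merged spreadsheet cells, is what keeps the eventual migration painless.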

Who will consume test results, and how will they consume them?

In this case, let’s address the latter part of this two-part question first. If the CEO just wants an email giving a “good to go” high-level synopsis, but the Chief Product Officer wants the detailed nitty-gritty, then the way the results are consumed will differ.

Going back to test case management software, you might want to give the CPO a seat inside the tool, but not the CEO. This is about managing costs and giving people what they need. Some test management systems come with significant price tags, so the number of organizational members who have a seat might be restricted to reduce cost. This might ultimately change the entire direction in which test cases are created, so it’s worth asking the question.

The first part of the question also ties into how easy it will be to produce the test metrics and create the reports that each stakeholder needs. Excel can be very flexible in how it is used, but it comes with the overhead of managing a spreadsheet that now has a lot of formulae to break.
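As a sketch of the kind of roll-up each stakeholder might want, here is the same summary computed from raw result records rather than spreadsheet formulae; the records and status names are invented sample data:

```python
from collections import Counter

# Invented raw results, one record per executed test case.
results = [
    {"test": "TC-001", "status": "passed"},
    {"test": "TC-002", "status": "failed"},
    {"test": "TC-003", "status": "passed"},
    {"test": "TC-004", "status": "blocked"},
]

# Tally statuses and compute a pass rate over tests that actually ran.
counts = Counter(r["status"] for r in results)
executed = counts["passed"] + counts["failed"]
pass_rate = counts["passed"] / executed if executed else 0.0

print(f"Passed: {counts['passed']}, Failed: {counts['failed']}, "
      f"Blocked: {counts['blocked']}, Pass rate: {pass_rate:.0%}")
```

The CEO gets the one-line summary; the CPO can drill into the records behind it. Either way, the report is derived from the data, not maintained by hand.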

What format to use?

This might sound a bit like the question around TDD / BDD, but it is a little different. I tie this question back to who will execute the tests and whether they will be automated. Some tests can be written in such detail that someone who has never touched the product can follow along and execute them. Other tests assume institutional knowledge and are more of a guide to the tester. Some tests are very atomic, which is great for engineers to build into automation scripts, while others have many steps in each script. Ultimately there needs to be a decision about how the tests will be built and how granular reporting will be.

Let me try to explain.

I’ve encountered many test plans where a single test has 12 steps, and some I’ve seen have had 50. In some organizations this might be fine, but depending on how the test is created, it results in a loss of fidelity.

If it is created in a system where each step can be asserted and tied back to a single defect, and the remaining steps can still be executed, this might be fine. Just consider that the test is still going to show up as a single pass or fail. This is where result fidelity comes in: the whole test will fail, when really there are possibly 50 tests bundled together.

Is it better to have:

  • Passed: 49, Failed: 1
  • Passed: 0, Failed: 1?

Ultimately, how tests are formatted and architected drives the metrics, and the metrics help guide the organization in its decisions.

I bet you can guess my preference!
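The fidelity trade-off can be made concrete with a small sketch. Assuming 50 checks where one fails (check 37 is chosen arbitrarily here), compare bundled and atomic reporting:

```python
# Fifty stand-in checks; one arbitrary step fails.
checks = [True] * 50
checks[36] = False  # step 37 fails

# Bundled: one test that passes only if every step passes.
bundled_passed = 1 if all(checks) else 0
bundled_failed = 1 - bundled_passed

# Atomic: each step reported as its own test.
atomic_passed = sum(checks)
atomic_failed = len(checks) - atomic_passed

print(f"Bundled -> Passed: {bundled_passed}, Failed: {bundled_failed}")
print(f"Atomic  -> Passed: {atomic_passed}, Failed: {atomic_failed}")
```

Same product state, same defect; one report says everything failed, the other says 98% of the behavior is fine and points at the 2% that is not.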

Is there adequate feature requirements / product documentation?

In an ideal world, products have good design documentation that has been through rigorous evaluation by all SDLC members, and from that documentation tests can be written. The question is whether such documentation exists. If the product has limited documentation, vague acceptance criteria, or no documentation at all, the test cases are not likely to have a solid foundation.

If the answer to this question is “no, we have no documentation,” it doesn’t mean that tests can’t be written, but there then has to be a commitment to support the test case creators and answer the questions that will arise, as they will have to work out whether the product behaves as expected.

Remember! Quality isn’t just about results. It is a culture.

Have you considered maintenance?

This is more about process and manpower than anything, but the people responsible for the tests must have the time to update them, and a process for handling those updates. If the product under test is highly regulated, there may need to be an entire tracking process for updates, as well as review cycles. Products change, and tests change over time, after all!

Do you need traceability?

This ties back to the maintenance question, but in general terms it is always best practice to be able to identify the origin of a test case, what piece of functionality it tests, who created it, and, in highly regulated industries, who signed off on it. This also ties directly to defects created from a given test, as well as the results for individual test runs.

The simple answer of “yes” to this question will be the impetus to create an entire process to manage cases.
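As a sketch of what that process tracks, here is a hypothetical test case record carrying traceability metadata; every field name and ID below is invented for illustration:

```python
# A single test case with its traceability metadata attached.
# Field names, IDs, and people are all hypothetical examples.
test_case = {
    "id": "TC-0071",
    "title": "Login rejects an expired password",
    "requirement": "REQ-142",      # origin: which piece of functionality
    "author": "p.morris",          # who created the case
    "approved_by": "qa.lead",      # sign-off, for regulated products
    "linked_defects": ["BUG-903"], # defects raised from this case
}

def trace(case):
    # Answer the audit question in one line: where did this test come from?
    return f"{case['id']} covers {case['requirement']} (author {case['author']})"

print(trace(test_case))
```

Whether this lives in a spreadsheet column or a test management tool matters less than the fields existing at all from day one.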

Who will write them?

We touched on this earlier when I mentioned that I think good QA is based on character traits and good documentation. When we consider who is going to write the tests, we need to consider skillsets. Do you have the right people to see beyond the happy path?

Describing how something should work is pretty straightforward and is in many ways how developers will ultimately build the code. In my opinion, tests need to deal not only with how things should work, but also give a lot of thought to the ways in which things shouldn’t work. Whoever writes the tests needs to employ critical thinking and consider both positive and negative scenarios.
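To illustrate, here is a hypothetical `parse_age` function with one happy-path check and the negative scenarios it also deserves; the function and inputs are invented for illustration:

```python
def parse_age(text):
    # Hypothetical function under test: parse a human age from text.
    value = int(text)  # raises ValueError on non-numeric input
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

# Positive scenario: well-formed input works.
assert parse_age("42") == 42

# Negative scenarios: how should it *not* work?
for bad in ["-1", "200", "forty-two", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # rejection is the expected behavior
    else:
        raise AssertionError(f"{bad!r} should have been rejected")

print("positive and negative scenarios passed")
```

The happy-path assertion is one line; the thinking that produced the four bad inputs is where the test writer earns their keep.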

What questions do you ask that I haven’t thought of?

Why not let us know? Leave a comment, if only to call me a numpty!

In my opinion, effective QA is based around critical thinking, process, and documentation. A sustainable and resilient QA practice is going to take work, and it is going to require that an organization decide what it needs and what it wants before embarking on establishing a QA process.

While test cases are a component of the QA practice, everything comes back to communication and the ability for people to make good decisions when releasing a product. Even if there is an existing QA process, there should be a mechanism for continuous improvement, and these questions may still prove relevant.

It likely goes without saying, but we can also help solve this and any other QA problems that might exist in your SDLC, so let us know what you think.

What are your thoughts?

Author:

Paul Morris, Director of Engineering & Accessibility Services

Paul Morris started his career as a chemist with the United Kingdom’s Laboratory of the Government Chemist (LGC). During his tenure at the LGC, he developed an aggressive degenerative eye condition called retinitis pigmentosa, a genetic disorder of the eyes that eventually causes a loss of vision, and he had to leave the chemistry field. However, with the change came opportunity. While Paul transitioned to an administrative position with the UK Ministry of Defence, he also began teaching himself how to code. As his career evolved, in 1999 he moved to the United States, where he was offered a job as a test technician for QualityLogic. Now, more than two decades later, Paul is QualityLogic’s Director of Engineering and Accessibility Testing Services.

During his career with QualityLogic, Paul has had the opportunity to explore all aspects of QA testing, while simultaneously benefitting from the use of assistive technologies. He is recognized as an accessibility subject matter expert for both user experience and development and is certified in JAWS, a screen reader developed by Freedom Scientific that provides speech and Braille output for computer applications. Paul also has expertise in Ruby, Java, and Python, and is an SQL administrator.

While a shift from chemistry to a career in software testing may have seemed unlikely, Paul is grateful for the course his life has taken. QualityLogic has inspired his passion to solve the problems he sees now and to discover the challenges yet to come. Paul shares his experiences in QA Engineering and Digital Accessibility often through blogs, presentations, and training of his team and QualityLogic clients.