How do you prefer to structure test cases? What is the best format?

As part of my job I have been reviewing a lot of manual test suites recently and have encountered many different styles of writing test cases, so I’m now curious about people’s approaches.

How do you structure your test cases? And why that format?
Is this based on how you were first taught, company guidelines, or something you saw and pinched?
What is the one thing you wish everyone would include in their tests?

3 Likes

My own style is to structure them as:
“[FEATURE:FUNCTION] Test as a Question/Confirmation”
Any additional information required can then be included in a description section.
e.g. [PAYMENTS:ADD] Confirm User can add a payment method

The current style is based on the assumption that each test will be linked to an automated test at some point, combined with my personal love of batch testing. I find it saves tons of time in testing and reduces duplication if I can search by feature and function, e.g. [PAYMENTS:ADD].
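To make that concrete, here is a minimal sketch (assuming pytest, with made-up marker names) of how the [FEATURE:FUNCTION] tag could carry over to an automated test so the batch stays searchable:

```python
# Minimal sketch, assuming pytest. The markers and the test body are
# illustrative stand-ins, not a real payments implementation.
import pytest

@pytest.mark.payments  # mirrors the FEATURE part of the manual tag
@pytest.mark.add       # mirrors the FUNCTION part
def test_confirm_user_can_add_payment_method():
    """[PAYMENTS:ADD] Confirm User can add a payment method."""
    saved_methods = []            # stand-in for the system under test
    saved_methods.append("visa")  # the 'add' action being confirmed
    assert "visa" in saved_methods
```

The whole [PAYMENTS:ADD] batch can then be run with `pytest -m "payments and add"` (the markers would need registering in pytest.ini so pytest doesn’t warn about unknown marks).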

I was originally taught on one-size-fits-all, compliance-based tests, but then a retired tester told me to always write tests or tickets as if you are about to disappear for 2 months and the person doing the tests has only a day.

1 Like

I basically do a product coverage outline of the actual area I’m testing and attach notes to every branch or leaf.

I wonder how much more could be tested if less had been written. Then the absence would matter less.
I find that advice to be a workaround for not educating people on the product. The test case then also gets used as a manual to teach people who are unfamiliar with the product.
If someone disappears for 2 months, the team and company have bigger problems than a lack of extensively written test cases. Don’t make up silly situations.

1 Like

Wow what a topic, I could talk a long time on this (and do in team meetings :joy:). Firstly, I don’t believe in “best practice”, I only believe in finding the most effective practice for your situation. It’s a moving target.

Currently, in summary: we write stepped test cases for sprint stories, with automation in mind. We peer review them to make sure we’ve covered the story and any edge cases effectively. They get executed and then triaged for adding to regression packs, and tagged for the automators to automate. We haven’t really got a naming convention or detailed standards as such; that’s about it.
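For illustration only (field names invented, not our actual tooling), a stepped case could carry its triage tags in a machine-readable form along these lines:

```python
# Sketch only: a stepped test case that carries its triage tags, so the
# regression/automation decision travels with the case. Field names are
# invented for illustration, not taken from any real tool.
from dataclasses import dataclass, field

@dataclass
class StepTestCase:
    story: str                  # the sprint story this case covers
    title: str
    steps: list[str]
    expected: str
    tags: list[str] = field(default_factory=list)

case = StepTestCase(
    story="SHOP-123",           # hypothetical story ID
    title="Confirm user can add a payment method",
    steps=["Log in", "Open payment settings", "Add a Visa card"],
    expected="The card appears in the saved payment methods list",
    tags=["regression", "automate"],  # set during triage after execution
)

# Hand everything tagged "automate" to the automators.
to_automate = [c for c in [case] if "automate" in c.tags]
```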

The important aspect is that we measure this process (the usual cost, quality and timescale metrics). Are we getting stories tested quickly enough? What’s our automation coverage? Are we preventing customer issues? etc. If any metric indicates room for improvement, that’s when we huddle and make tweaks to the process.

Now my frustration at the moment is that we do exploratory test sessions, but I don’t think we do nearly enough of them. The automators want stepped test cases so they know what they’re automating without needing a discussion for each one; I get that. But I know I’ll find more quality issues by exploring features, in a lot of cases beyond any documented requirement or acceptance criteria.

There’s a middle ground somewhere and I want to find it. So we are currently toying with whether to add exploratory sessions targeted at each test case, or a separate exploratory test charter for each feature/fix.
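As a sketch of the second option (all names invented), a per-feature charter wouldn’t need much more than this:

```python
# Sketch of a lightweight exploratory charter per feature/fix, loosely in
# the session-based test management style. All field names are made up.
from dataclasses import dataclass

@dataclass
class ExploratoryCharter:
    feature: str
    mission: str          # what to explore, and what risk motivates it
    timebox_minutes: int
    notes: str = ""       # findings, questions, candidate test cases

charter = ExploratoryCharter(
    feature="Payments: add method",
    mission="Probe failure paths beyond the acceptance criteria: "
            "declined cards, network drops, duplicate entries.",
    timebox_minutes=60,
)
```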

3 Likes

For the teams & projects I worked with, it (dare I say it)… depends.

In some cases, the tests/test cases followed our implemented stories (this was a web shop of sorts). In other cases, we structured the tests around the risks we perceived (this was an app related to healthcare). And in yet another project, we followed the categories the system under test processes (think, for example, categories around travelling: which countries, what transportation is used, whether or not a visa is needed, etc.).

The ‘best format’ is the one that gives the best return on the testing effort in your specific case. And that differs wildly from games, to financial systems, to medical hardware.

2 Likes

A combination of a product coverage outline and branch notes is certainly an interesting approach. :blush:

The original example of a two-month absence was simply meant to illustrate how quickly key project details can be forgotten in the absence of documentation—especially the more nuanced aspects. It’s less about the exact timeframe and more about the reality that project knowledge can become fragmented when not clearly recorded.

In practice, we’ve all seen how projects can be delayed or paused, and how teams are affected by reassignments, restructuring, or absences—be it sick leave, parental leave, compassionate leave, or someone moving on from the company. It’s precisely in these moments that well-documented tickets and tests prove their worth.

1 Like