As part of my job I have been reviewing a lot of manual test suites recently and have encountered many different styles of writing test cases, so I'm now curious about people's approaches.
How do you structure your test cases? And why that format?
Is this based on how you were first taught, on company guidelines, or did you see it somewhere and pinch it?
What one thing do you wish everyone would include in their tests?
My own style is to structure them as:
"[FEATURE:FUNCTIONAL] Test as a Question/Confirmation"
then any additional information required can be included in a description section,
e.g. [PAYMENTS:ADD] Confirm user can add a payment method
The current style is based on the assumption that each case will be linked to an automated test at some point, combined with my personal love of batch testing. I find it saves tons of time and reduces duplication if I can search by feature and function, e.g. [PAYMENTS:ADD].
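To make that manual-to-automation link concrete, here is a minimal sketch of how the same tag could be mirrored in an automated suite. It assumes pytest; the marker names and the stand-in helper are my own invention, not anything from the original post.

```python
import pytest

# Hypothetical markers mirroring the [FEATURE:FUNCTION] prefix of the manual case,
# so a batch of related cases can be selected the same way the manual suite is
# searched (markers would normally be registered in pytest.ini to avoid warnings).
@pytest.mark.payments
@pytest.mark.add
def test_confirm_user_can_add_a_payment_method():
    """[PAYMENTS:ADD] Confirm user can add a payment method."""
    new_method = {"type": "card", "last4": "4242"}
    saved_methods = add_payment_method(new_method)  # stand-in for the real app call
    assert new_method in saved_methods


def add_payment_method(method):
    """Placeholder for the real application step; returns the stored methods."""
    return [method]
```

Running `pytest -m "payments and add"` then selects the same batch you would get by searching the manual suite for [PAYMENTS:ADD].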
I was originally taught with one-size-fits-all, compliance-based tests, but then a retired tester told me to always write tests or tickets as if you are about to disappear for two months and the person doing the tests has only a day.
I basically do a product coverage outline of the actual area I'm testing, with notes for every branch or leaf.
I wonder how much more could be tested if less had been written. Then the absence would matter less.
I find that advice to be a workaround for not educating people on the product; the test case then doubles as a manual to teach people who are not familiar with the product.
If someone disappears for two months, the team and company have bigger problems than a lack of extensively written test cases. Don't make up silly situations.
Wow, what a topic; I could talk for a long time on this (and do, in team meetings). Firstly, I don't believe in "best practice", I only believe in finding the most effective practice for your situation. It's a moving target.
Currently, in summary, we write stepped test cases for sprint stories, with automation in mind. We peer review them to make sure we've covered the story and any edge cases effectively. They get executed and then triaged for adding to regression packs and tagged for the automators to automate. We haven't really got a naming convention or detailed standards as such; that's about it.
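Purely for illustration, and not anything the poster describes, a stepped case moving through that write, review, execute, triage flow might be captured as a small record like this (the field names are my own assumption):

```python
from dataclasses import dataclass, field


@dataclass
class StepTestCase:
    """A stepped test case as it moves through write -> review -> execute -> triage."""
    story: str                        # sprint story the case was written for
    title: str
    steps: list[str]
    expected: list[str]
    peer_reviewed: bool = False
    in_regression_pack: bool = False  # set during triage after execution
    tags: list[str] = field(default_factory=list)  # e.g. ["automate", "payments"]


case = StepTestCase(
    story="SHOP-123",
    title="Confirm user can add a payment method",
    steps=["Open payment settings", "Enter card details", "Save"],
    expected=["Card appears in saved methods"],
    tags=["automate"],
)
```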
The important aspect is that we measure this process (the usual cost, quality and timescale metrics): are we getting stories tested quickly enough? What's our automation coverage? Are we preventing customer issues? And so on. If any metric indicates room for improvement, that's when we huddle and make tweaks to the process.
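As a rough sketch of how two of those measurements could be computed, under assumed data shapes and field names that are purely illustrative:

```python
from datetime import date


def automation_coverage(regression_cases):
    """Share of regression-pack cases that have an automated counterpart."""
    if not regression_cases:
        return 0.0
    automated = sum(1 for case in regression_cases if case.get("automated"))
    return automated / len(regression_cases)


def days_to_test(story):
    """Elapsed days from a story being ready for test to testing complete."""
    return (story["tested_on"] - story["ready_on"]).days


cases = [{"id": "TC-1", "automated": True}, {"id": "TC-2", "automated": False}]
story = {"ready_on": date(2024, 3, 1), "tested_on": date(2024, 3, 4)}
print(automation_coverage(cases))  # 0.5
print(days_to_test(story))         # 3
```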
Now, my frustration at the moment is that we do exploratory test sessions, but I don't think we do nearly enough of them. The automators want stepped test cases so they know what they're automating without needing a discussion for each one, and I get that. But I know I'll find more quality issues by exploring features, in a lot of cases beyond any documented requirement or acceptance criteria.
There's a middle ground somewhere and I want to find it, so we are currently toying with whether to add exploratory sessions targeted at each test case or a separate exploratory test charter for each feature/fix.
For the teams and projects I have worked with, it (dare I say it) ... depends.
In some cases, the tests/test cases followed our implemented stories (this was a web shop of sorts). In other cases, we structured the tests around the risks we perceived (this was an app related to healthcare), and in yet another project, we followed the categories that the system under test processes (think, for example, categories around travelling: what countries, what transportation is used, whether or not a visa is needed, etc.).
The "best format" is the one that gives the best return on testing effort in your particular case, and that differs wildly from games to financial systems to medical hardware.
A combination of a product coverage outline and branch notes is certainly an interesting approach.
The original example of a two-month absence was simply meant to illustrate how quickly key project details can be forgotten in the absence of documentation, especially the more nuanced aspects. It's less about the exact timeframe and more about the reality that project knowledge can become fragmented when not clearly recorded.
In practice, we've all seen how projects can be delayed or paused, and how teams are affected by reassignments, restructuring, or absences, be it sick leave, parental leave, compassionate leave, or someone moving on from the company. It's precisely in these moments that well-documented tickets and tests prove their worth.