A general discussion about the pros and cons of both approaches.
Writing detailed test cases can help with onboarding new starters, and can even act as a training medium. Conversely, having to trawl through test steps that teach you how to do things that have become second nature can be irritating for more experienced members of the team.
How would you achieve a good balance between these two extremes?
Overly detailed test cases are best for new starters, as they give step-by-step help.
Assumed knowledge test cases are for experienced testers; they expect prior understanding.
For example:
Overly Detailed Test Case (for new starters):
- Open the Chrome browser.
- Go to https://example.com.
- Enter “testuser” in the username field.
- Enter “password123” in the password field.
- Click the “Login” button.
- Verify the dashboard is displayed.
Assumed Knowledge Test Case (for experienced testers):
Login with valid credentials and verify successful login.
The way I view it, test cases aren’t for me. They’re for my Product Owners, so they know I tested the product and validated behaviors and workflows. They’re also for those who come after me, or for when I take a day off, so someone else can execute them.
Quality is everyone’s responsibility, and if I write test cases so they’re streamlined and based only on my own knowledge of the service, then quality becomes my responsibility, since I’m the only one who can execute them correctly. I much prefer more detailed test cases.
That being said, there is a line where detail becomes too much. We should practice some values of DDD: whoever picks up a test case should have a basic understanding of the ecosystem, so we can start from that shared baseline and build from there. Too much detail can also leave the human part out of testing, and if a test is THAT granular, I’d recommend automating it, since human nature is to create skips and do things a little differently each time.
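To illustrate, here is a minimal sketch of what automating the granular login example from earlier in the thread might look like, using Playwright’s Python API. The selectors (`#username`, `#password`, `#dashboard`) are assumptions, since the original test case doesn’t name real page elements:

```python
# Minimal sketch: automating the step-by-step login test case from above.
# Assumes Playwright for Python; all selectors are hypothetical.
from playwright.sync_api import sync_playwright

def test_login_shows_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch()          # "Open the browser" (Chromium here)
        page = browser.new_page()
        page.goto("https://example.com")       # "Go to https://example.com"
        page.fill("#username", "testuser")     # "Enter 'testuser' in the username field"
        page.fill("#password", "password123")  # "Enter 'password123' in the password field"
        page.click("text=Login")               # "Click the 'Login' button"
        assert page.is_visible("#dashboard")   # "Verify the dashboard is displayed"
        browser.close()
```

Once a test is this mechanical, the machine executes it the same way every time, which is exactly the consistency a bored human won’t give you.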
If test cases will be reviewed, then writing them becomes subjective: we can’t only write test cases that we ourselves understand, we also have to write them so that the person reviewing them understands.
Detailed test cases can be helpful, but I believe writing smart test cases can be more beneficial, because nowadays people are running short on time. So adopt a smart approach: look for scenarios where multiple test cases can be clubbed together, and write a single test case instead.
Also, avoid assumptions: have a clear understanding of the project requirements and the end user before writing test cases. It will help you avoid rework.
In my opinion, atomic test cases are a good way to start a new project or test repo. They can be combined for regression, smoke, or E2E suites. Not to mention that for someone who is new on the team, atomic tests don’t tend to overwhelm them with functionalities and flows.
Keeping reusability in mind is key here. These tests can then be easily automated, and SDETs spend less time thinking about how to segregate them into different methods (a rough sketch follows below).
Of course, the essence is in the details of the steps and how easily readable they are.
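As a rough sketch of that idea, assuming the pytest-playwright `page` fixture and entirely hypothetical step names and selectors (`login`, `add_to_cart`, `checkout` are illustrations, not anyone’s real suite):

```python
# Atomic, reusable steps. Each can back a standalone atomic test,
# or be chained into a regression/smoke/E2E flow without rewriting.
# All selectors and step names are hypothetical.

def login(page, user, password):
    page.fill("#username", user)
    page.fill("#password", password)
    page.click("text=Login")

def add_to_cart(page, item):
    page.click(f"text={item}")
    page.click("#add-to-cart")

def checkout(page):
    page.click("#checkout")
    assert page.is_visible("#order-confirmation")

# Atomic test: exercises a single step in isolation.
def test_login(page):
    login(page, "testuser", "password123")
    assert page.is_visible("#dashboard")

# Composed E2E test: the same steps chained into a full flow.
def test_purchase_flow(page):
    login(page, "testuser", "password123")
    add_to_cart(page, "widget")
    checkout(page)
```

The point is that the segregation into methods falls out of the atomic test cases themselves, rather than being a design decision the SDET has to make from scratch.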
The worst case I’ve seen a few times is giving a new starter some detailed regression test cases and getting them to run these manually, under the false and harmful pretense that they will learn both testing and the product by doing so. It’s lazy management.
The new starter element is interesting. Often I see definitions of the difference between a tester and a QA with the former being very simplified. For those that coach new testers it seems we are looking at a reality of around six weeks to advance from one to the other. That coaching will include discovery, exploration and the use of tools and test techniques but importantly how to think about testing and its value.
Writing detailed test cases can be harmful if too much emphasis is put on it: test cases are not testing. In enterprise-level companies where all of the testing has been test-case focused, a six-week new starter can sometimes have more advanced testing skills than testers with years of experience.
Test design and test techniques matter, so training in, coaching on, and application of those skills are very important early on.
This may involve starting with a detailed test case and then working together towards a more useful approach. I remain very wary that detailed test case creation and execution for new starters is lazy management, prone to instilling bad habits, and liable to limit their thinking around testing.
Test ideas, test design, test value, risk analysis, critical thinking, pair testing, and instilling a passion for testing can be better ways of learning about testing.
It all depends on what kind of team you’re trying to build and what customers you’re trying to serve. I’ve worked on a test team under a strict government contract where everything from requirements to UAT required sign-off and evidencing. It’s important for a new starter to be aware of that, so seeing detailed test cases might help them, because you would assume those cases have passed scrutiny… but it doesn’t mean it’s a great place to work for expanding your creativity.
These days, for new starters, I’m more interested in them gelling with the team and understanding our culture, and that we’re all here to help each other first. Then, when it comes to products: have a look, have a play, tell me what you think. Then come the objectives, which are quite simple: we want to continuously improve, to be faster and/or better and/or more cost-effective. Now have a look at our test management style for the product you played with: can you see any opportunities for improvement?
So for me, you’ve recruited people to provide solutions and improve. Relying on big regression test cases to onboard people says “this is how we do things here”, and I would go as far as saying it risks disrespecting the very skills you recruited for.
I think the answer can partly be found in the question. Test cases should have enough detail, but not be too detailed. They can assume a reasonable amount of knowledge, but not too much.
But how do you know what’s too much or not enough? You could start by thinking about who and what the test cases are for. What’s important for this test, and what’s of little consequence to the information it should provide? I would also think about the effort required - both for the initial creation and maintenance - and the effort taken away. By the latter, I mean, are you turning the execution of these tests into the most mindless, boring task ever, encouraging the executor(s) to switch off and stop using crucial testing skills? A little bit of ambiguity could inspire different scenarios to be explored, or reveal different interpretations of requirements.
Maybe this post could be interesting for you: How to Write Test Cases When You Hate Writing Test Cases | Cassandra HL
Marie Cruz (@marie.drake) provides an excellent live demo of finding a balance with test cases in Module 10 Lesson 2 of MoT’s Software Testing Essentials Certificate (STEC).
Marie demonstrates how to execute test cases effectively by thinking critically about what you observe rather than just following steps.
You’ll see Marie execute test cases using a structured approach while thinking critically about expected vs. actual results. She highlights key activities like refining test steps, identifying workarounds, and raising clear bug reports with supporting details.
As you watch, pay attention to how Marie balances following the test case with exploring unexpected behaviour, and think about how you can apply these techniques in your own testing.
I separate training a person on the product (which is often the implicit goal of detailed test cases) from exploring the product and giving developers and product managers feedback.
Mixing the two, IMO, causes trouble and creates waste.
I’ve taken this discussion to work, and now I’ve got some pretty hefty replies to go through. I’ll try to report back with some points I take from them.