I noticed from feedback that many testers struggle to find the right balance between overly detailed test cases (which are hard to maintain in the long run because the app under test keeps changing) and broad descriptions that are too vague. So we wanted to share some of our tips on writing and documenting test cases:
I hope this is useful to some here. Note that every project and testing strategy is different, so the tips and approach might not be applicable to all projects. It’s hopefully nonetheless a good starting point for new testers learning manual testing. Any feedback welcome.
So my guideline for detail level is: “don’t include instructions which you don’t care about”. Most of our customers never read a user manual, and will always use the app in the way you least expect. Thank you for sharing some really good examples here @denngrck, I like them!
I sometimes do prefer a bit more detail, without going overboard. For example, here is a one-liner test pulled verbatim out of my own TCMS (VWKEY-1.2):
Enter the following text: Café, £$€|\ áéíóú"
Because it forces the tester, as an exercise, to work out how to enter those common Unicode characters on their OS, language, and hardware, whether via AltGr combinations or Alt+num codes as the case arises. This test case does not care how the Unicode characters get added to the app; it only cares that the resulting CSV looks legitimate. The CSV test case that verifies RFC compliance, on the other hand, really does need to spell out single-quoted strings with commas, double-quoted strings with commas, and of course the famous one of changing the CSV separator internationalization setting, or else the test is not really valid. We cannot expect the tester to just know to exercise those corner cases.
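To make those RFC corner cases concrete at the data level, here is a minimal sketch using Python's standard `csv` module (standing in for whatever export path your app actually has): fields containing commas or double quotes must be wrapped in double quotes, with embedded quotes doubled, while single quotes get no special treatment.

```python
import csv
import io

# Corner-case fields from the RFC-compliance test: a plain field, a
# field with a comma, a field with a double quote, and a single-quoted
# string containing a comma.
rows = [["plain", "has,comma", 'has"quote', "'single,quoted'"]]

buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
output = buf.getvalue().strip()

# Per RFC 4180: comma-containing and quote-containing fields are wrapped
# in double quotes, and embedded double quotes are doubled. Single
# quotes are just ordinary characters.
assert output == 'plain,"has,comma","has""quote","\'single,quoted\'"'
print(output)
```

In a manual run you would compare the exported file against expectations like these by eye or in a text editor; the point is that the test description must name these inputs explicitly, or the tester will never think to try them.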
But! I’ve actually broken the RFC 4180 case now, by adding that detail, because it is really two separate test cases. That is another clue: if your description is getting long, you might be testing two different things (quoted strings versus the locale-specific separator). Combining them is fine for manual test cases as a way of saving time, but it creates scaling problems for automation. My context is normally automation, so I capture the corner cases in my test descriptions; when I come around to automating a test, I can automate all the corner cases someone captured and split them out in the automation suite. So it is very dependent on your use, but I’m a big fan of using one writing form and “one” TCMS project for both the manual and the automation cases, because there is a lot of crossover.
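One way that split tends to look in practice is a parametrised test, where each corner case captured in the manual description becomes its own automated case that passes or fails independently. A sketch, assuming pytest; `parse_csv_field` is a hypothetical stand-in for whatever hook wraps your app's real CSV path:

```python
import csv
import io

import pytest


def parse_csv_field(raw: str) -> str:
    """Hypothetical hook: in a real suite this would exercise the app's
    CSV import path. Here it just parses with the stdlib reader."""
    return next(csv.reader(io.StringIO(raw)))[0]


# Each corner case from the manual test description becomes one
# parametrised case, so a regression in one reports individually.
@pytest.mark.parametrize("raw,expected", [
    ('"hello, world"', "hello, world"),    # double-quoted string with comma
    ("'hello, world'", "'hello"),          # single quotes are NOT special in RFC 4180
    ('"she said ""hi"""', 'she said "hi"'),  # doubled embedded quotes
])
def test_quoted_fields(raw, expected):
    assert parse_csv_field(raw) == expected
```

Note how the second case documents a deliberate surprise: a single-quoted string still splits on the comma, which is exactly the kind of behaviour a vague description would never surface.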
I think another cool tip when writing test cases is to know that you will eventually have to start organising your work. You will need to decide on a hierarchical structure that makes it easy to find tests later on, once you end up with a few hundred. Moving test cases breaks a lot of process, so you want to avoid that if possible by having a structure that reduces confusion and case duplication from the start. So you want to decide whether you group your test cases by the component under test or by functional area:
- Some projects like to group by component. (Being a good tester really requires knowing how the underlying application is organised/structured anyway, since you will need to talk to the correct developer who owns each “part” or component.) Example: Accounts/permissions, import/export, networking, platforms
- Some projects benefit from creating suites per stage in the user journey, organising each suite to cover a functional area. Example: File menu, Security menu, Network menu
Neither choice is better than the other, and often you will arrive at a hybrid of the two, or even a third way of dividing suites. But failing to have an agreed-upon hierarchy will eventually cause you pain. Also spend some time considering how to prioritize and how to classify test cases (smoke/regression/release); this will make test iterations or runs easier.
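If your automation lives alongside the manual cases, one common way to carry that classification over is tagging. A sketch, assuming pytest markers (the marker and component names here are illustrative, not from any particular project):

```python
import pytest

# Classification tags mirroring the TCMS fields: smoke vs. regression,
# plus the owning component. Custom markers should also be registered
# in pytest.ini/pyproject.toml to silence unknown-marker warnings.

@pytest.mark.smoke
@pytest.mark.component("import_export")
def test_csv_export_basic():
    """Fast sanity check run on every build."""
    assert True  # placeholder body


@pytest.mark.regression
@pytest.mark.component("import_export")
def test_csv_export_locale_separator():
    """Slower corner case, run in the full regression pass."""
    assert True  # placeholder body
```

A smoke iteration then becomes a one-liner, e.g. `pytest -m smoke`, while the release run drops the filter, so the same grouping decision serves both the manual runs and the automated ones.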