Test Cases: Do You Use Them?

Similar to the certifications discussion, it seems that test cases divide the community in some ways.

A lengthy discussion popped up about this recently and I thought it would be useful to have it on a forum that preserves the history.

I’ll preface this with a quote from the original discussion that is worth bearing in mind:

You know your needs and your company’s needs. What some people recommend against may be perfect for what you need, and vice versa. It’s purely situational.

So, do you use test cases? Why?


I document the minimum I need to keep track of the scenarios that need to be tested.

In what’s documented as a test case, I’ll typically write a short description of the scenario, with the “test steps” being the critical permutations of the scenario.

For instance, the test case might be “Log in with 2 step auth enabled”. I’d describe the scenario and the allowed methods for the second step of authentication (primary email, secondary email, mobile). The “test steps” would cover which account logs in, what methods the second step should use, whether the user responds within the time limit or not, and so forth.
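A permutation-style case like this could be sketched as a single Gherkin scenario outline rather than as scripted click-by-click steps. The accounts, methods, and outcomes below are hypothetical, just to illustrate the shape:

```gherkin
Scenario Outline: Log in with 2-step auth enabled
  Given the user "<account>" has 2-step authentication enabled
  And the allowed second-step method is "<method>"
  When the user logs in and responds to the second step <in_time>
  Then the login <outcome>

  Examples:
    | account | method          | in_time          | outcome  |
    | alice   | primary email   | within the limit | succeeds |
    | alice   | secondary email | within the limit | succeeds |
    | bob     | mobile          | after the limit  | fails    |
```

The Examples table carries the critical permutations, so the scenario description stays short while the coverage stays visible.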

The only time I have ever written “click this link, enter this text” type steps was in defect reports for code written by a particularly obnoxious developer (who no longer works with the company), who refused to accept anything remotely technical from a “mere tester” and would not accept a defect unless you could point to the line in the specifications/user story that the defect contradicted.


Assuming I generally understand the context variables, my typical testing steps are:

  • touring the feature: specs, docs, e-mails, devs, stakeholders or business people, code, environment, build package;
  • setup: create the package, deploy, check the versioning of services/systems;
  • charter my testing session with breadth coverage as the first goal in mind (systems, modifications, data, queries, pages: exploration and setup or first checks);
  • continue the same charter with depth analysis/experiments/explorations;
  • in each session, collect or save data about the setup, and write down notes about the setup, what I’ve gone through/experimented with, findings, questions, and what I didn’t cover or found difficult;
  • after I finish a session, I usually either contact the dev or business to talk about problems, or create a report, store it in the task management system, and discuss the problems later when others are available.

At some point some managers tried to enforce the creation of test cases for multiple reasons:

  • proving that some testing was done;
  • re-doing the cases when a feature changes;
  • having some sort of documentation of the feature, as no one else writes documentation;
  • enabling automation engineers who know nothing about the product to create some checks;
  • having the business review the cases and re-do them at other times in the future;
  • having handover documentation to give to future testers;
  • using them as a standard for testing across departments;
  • etc.

I refused that. I wasn’t fired. Things are moving on. Some people left or were fired. New ones came. Arguments will continue. Settling and change in mentality will be a challenge with each new person coming up with this idea.


I am a manual tester and I use test cases, specifically written in the Gherkin syntax, logged within TestRail. I use them for a number of reasons:

  • They provide the “known knowns” about the product - the things that are stated as requirements - which allow you to ensure the requirements are being met (or a bug is logged to rectify when they are not).
  • Creating them is essentially the testing, and conducting them is checking they are true.
  • They enable anyone to conduct them (if written in Gherkin), whether or not they have product knowledge or testing knowledge.
  • They provide a log of functionality, which can be versioned so you can see what changed over time.
  • They provide a list of things that need to be tested, which is good for larger projects, or for those who have poorer memories and cannot keep lots of information in their head at the same time (this is very much the case for me).
  • They reduce the risk of human error because they are already created, rather than being thought up “on the fly” (which means the tests can differ each round of testing).
  • They can be used to automate the testing if you have someone with the skills to implement it.
  • Depending on how they are stored and executed, you can create a filter to have specific test cases at a few clicks (smoke test, regression test, etc.).
  • They can always be reduced or expanded upon.
  • They provide “proof of testing” (which should not even be a thing, but it is for some companies).
  • They can teach new staff about how the product works.
  • They can bring to light unknown things that need to be added to the specification (I find this a lot if testers weren’t involved in the specification writing).
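For context, here is a minimal sketch of what one of these Gherkin-style cases might look like, with tags of the kind that support filtering into smoke or regression runs as described above. The feature, tags, and steps are invented for illustration, not taken from a real product:

```gherkin
@smoke @regression
Feature: Password reset

  Scenario: User requests a password reset with a registered email
    Given a registered user with email "user@example.com"
    When they request a password reset from the login page
    Then a reset link is sent to "user@example.com"
    And the link expires after 24 hours
```

Because the Given/When/Then steps are plain language, anyone can execute the case manually, and the same text can later back an automated check if the skills are available.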

I do, however, think that exclusively using test cases is negligent, as they only show you the results of the “known knowns”. In order to test the “unknowns”, you need to utilise exploratory testing. But in order to test the “unknowns”, you need to know the “knowns”, which is why these two methods of testing are used to enhance each other.

Once you have tested using these methods, you can then automate the test cases where possible (if you have the ability/staff to do so; my company does not currently), so you can spend more time doing exploratory testing and other aspects of testing that help raise quality across the process as a whole (as quality assurance is a team effort, not something that one person can do alone).

I will, however, say that test cases get a bad reputation for a few different reasons that I’m very aware of, but I still advocate for them when they are the right choice for your situation:

  • They are difficult to maintain.
  • They can become unwieldy if the project is large.
  • Some people think the time spent creating them could be better used “just testing”.
  • They are only useful if they are correct, or executed correctly.
  • They must be based on facts, not assumptions.
  • If used very regularly, they can fall prey to the age-old “muscle memory” problem and be marked as passing when they are in fact failing.
  • If used exclusively, they can encourage laziness if people rely on them too heavily, especially junior testers, and leave little room for broadening testing skills.
  • They are met with disdain by some people which can cause them to dig their heels in, become petulant, or take shortcuts (I don’t say this to be mean, it is something I have actually observed).
  • If the company has many different projects worked on by different teams, the test cases can end up written in varying ways, which causes confusion; it is best to create them all with the same language and process.

I’ll end by repeating Heather’s quote from above (who kindly didn’t attach my name to it, as she knows me well enough to know I’d hate that!):

You know your needs and your company’s needs. What some people recommend against may be perfect for what you need, and vice versa. It’s purely situational.


I currently do not, and I am generally in the camp that advocates against them. Not because there is no place for test cases, but because they are typically used to serve many different needs, and as with all things, when you try to do too many things at the same time you tend to do none of them well.

Common things people try to use test cases for:

  • Test Design - thinking about what needs to be tested; the test case is the documentation from that work.
  • Test Distribution - a team of testers wants to test the same product, so they need to share that problem.
  • Onboarding / Knowledge Transfer - a new member joins and you want to provide some information about the product and the testing.
  • Status Reporting - where are we with the testing effort?
  • Test Reporting - what did you test?
  • Test Instructions - what should you actually do to test a specific thing?
  • Test Automation Requirements - this is what you should automate.

A quick note: the difference between the test design (the intent of the test) and the test instructions (the steps needed to perform the test) is a commonly overlooked distinction.

Early in my career, when exclusively, extensively and rigorously using test cases, I observed that the best testers around me had a tendency to find bugs outside of the test cases. That is, if I performed the test cases and they performed the same test cases, I did not find the bug but they did. The obvious explanation is that they did something outside of the documented test case that junior me did not. So as a vessel for “test instructions”, the test cases did not fulfil the need.
Later in my career, we had a nasty bug in production and the testers were asked: did you test this specific scenario? Looking at the test cases reported as performed, they did not contain enough detail to answer that question, so using them for test reporting was not enough either. (Think “log in with a valid username and password”: which username and password did you actually use?)

Basically, for most of the purposes mentioned above, test cases have failed at some point. Which ties back to what others are saying: your specific needs are what matters, and for most specific needs there are better alternatives than test cases. But as a general tool serving all of those purposes to some extent, test cases are the only one that can.

Some alternatives:

  • Test Reporting - screen captures and testing notes are better at capturing what you actually did.
  • Test Design - checklists, diagrams, models, or storyboards might serve you better.
  • Test Distribution - testing tours.
  • Onboarding - backpacking (you sit behind someone who does the work and learn by watching, instead of reading text).

For me it is important to identify which purposes matter to you and then pick the tool that fulfils them best. If that is test cases, go with test cases; if it is something else, go with something else. To me it is unprofessional to say “we don’t use test cases” and then not explain how you deal with those purposes, either with an alternative or with a strategy that explains why they are unimportant. I’ve commonly heard testers say “we do exploratory testing, so we do not design or report our testing”. That is not “exploratory testing”; that is unprofessional testing.


As a developer turned tester, I used to hate this kind of tester, because not only do they check things that are often unclear in the product specification, but their exploratory testing techniques uncover classes of defect that developers never test for, around integrations and real-world workloads, defects that are hard to reproduce or debug.

I now love this as a way of getting new team starters to learn how to use the product: give them a rough set of tasks (positive test cases) to pick through and then have them report back on how they found the UX. The new member gets to learn a bit about the product, and you get structured feedback on how terrible your UI and workflows are from a fresh pair of eyes. (You might even find they fix those bugs in their spare time.)

Unfortunately, when an exploratory tester leaves the company, not only does the oracle they hold leave, but so too does the capacity. That’s probably a big driver for having test cases and “scripts”. Hence I love having “captured” test cases when I am new to a product, to show me the workarounds I might need.

I’ve been in very formal places where the customer receives the test deliverables (Test Plan, Traceability Matrix, Test Procedures, etc.), and these deliverables were part of the milestones after which the company got paid, so the formats and standards were agreed beforehand with the customer, who often required standards that already existed. To get a clear organisation and divide the testing into different subsystems/functionalities, they used this hierarchy: Test Campaigns / Test Suites / Test Cases / Test Steps, which gives a quick overview and lets you plan the campaign (for example, start with the X test cases before the Y test cases).

I then started using this organisation for myself, where I can make my own choices, because it helps me see very quickly whether all possible failure cases and all user possibilities are tested. For me, test cases work really well for brainstorming possible cases that are not necessarily in the design or requirements, and for documenting them so they can be referenced in the case of bugs, repeated, or reused, regardless of whether the tests are automated or manual.


Some interesting replies to this on LinkedIn:

Sometimes you just need that extra time to think before you do.

Most of my test cases are there for the times when others need to run tests. I know this shouldn’t be the case, though some developers fall apart and can’t remember how the application works, even one they have been working on for weeks, sometimes years. Without steps to follow they don’t know how to log onto the system (though more likely they just don’t want to do the testing)!

I usually don’t. But they could be there so that anyone is able to test and verify that things really work the way they should!

Repeatability, Repeatability and let’s not forget Repeatability.
A forum to discuss coverage and ensure everyone understands an expected result.
Just a few of the reasons.

No, I don’t use test cases. I’ve recently created a feature map to track what we are doing instead. This way I can focus on working through the new feature with the team (exploratory/feature strategy) rather than boring myself to death writing out test cases!


As a tester, my reply would be YES, we do use test cases. A test case is more elaborate than a user story, which makes it easier to interpret and execute. If we use test cases, it also becomes easy for QA or DEV to divide any build into parts from an understanding point of view, and validating any scenario from an end-user perspective becomes easy.

If test cases are written according to standard guidelines, nobody asks whether anyone uses them or not. A well-drafted test case should always allow a QA, and sometimes a DEV, to understand and execute the test.

Execution is performed by many different QAs, so it is important to keep the reusability of test cases in mind. A reusable test case provides long-term value and avoids the extra effort of writing a new test case again and again. It is also true that test cases, once written and executed for a build, can be refactored for new functionality or enhancements with minimal effort.

Write, execute, re-use, refactor: these are the practices that most companies providing functional testing services follow, saving time for other important tasks.

This is just my opinion, based on the fact that we use test cases; it could be different for others depending on whether they use them or not.

I think it’s a bit droll to assume one tester knows exactly which bits of the system are valuable; it’s a huge playfield for bugs. I do sometimes see bad test cases written up: cases describing what the programmer thinks the user is doing, which is what you get in “contract”-based environments. I like to delete those. But interop and security test cases are terribly hard to include in less formal test sessions, which I have come to prefer of late.

Yesterday I finally got to test a tiny feature which I knew “about” and had tried to test once before, but just could not set up the environment for, so I skipped it. Because a release touching the UI was going out soon, I hunted through the test case database and could not find this feature mentioned. I buzzed the support desk: nope, nobody uses that feature, no idea how it even works. It’s not even in the product docs. Most companies I have worked for have features that only one customer (say, “Intel”) uses, and that would cost us a large contract if they broke. These are often features that nobody else uses, so they are never in any mind-map or exploratory charter.

I’m at around 390 mobile test cases; annoyingly, half of these are repetitions, and many are just one-liners, all written by a tester who left a long time ago. I am nevertheless a fan of good test documentation, because in this case I had to be able to set up Apple TV without actually having one of the £150 devices: a task that takes 5 minutes but is almost impossible for someone who does not often use, and is not familiar with, Apple devices. And yes, the feature did work wonderfully, although I still raised a bug on it for a low-cost cosmetic change.

So I’m a fan of test cases that are two lines: a good, organised naming scheme that fits into a mind-map, plus a two-line description of what the user is doing and the expected outcome.
