How do *you* write test cases?

Since starting my testing career at my company I’ve found there to be a lot of differences in how test cases are written, with some very passionate defences from the testers involved. As it’s a large company there are different projects, each with their own test teams and their own particular processes.

One side argues that the manual test steps should be fairly high-level for maintainability reasons, and that screenshots are a big no-no. But the test team I am with argue that, given the complexity of this specific project, lower-level steps and screenshots are worth the extra time it may take to amend test cases further down the line. (It is a notoriously complex system within the company.)

I can definitely see both sides of the argument, but I’m interested to know what you folks think about this, and how others approach test cases.

EDIT: To clarify, I am talking about Manual test steps. I thought ISTQB referred to steps as ‘scripts’, so used that instead, but I might’ve been wrong there… Anyway I’ve just re-edited my post.


I have started to generate test cases using Model-Based Testing.

I generate a model in a model-based testing tool (I use TestCompass for that).

From it, I can generate test cases automatically based on Node, Edge, Multiple Conditions, or Path Coverage.

It saves me a hell of a lot of time, and my team is also happy with it as they know that my tests can be updated automatically whenever my model is updated.
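To illustrate the idea behind generating tests from a model, here is a minimal sketch of edge coverage over a toy login-flow graph. The model, node names, and code are purely illustrative assumptions for this example, not TestCompass’s actual format or API:

```python
# A toy login-flow model: nodes are screens/states, edges are
# (action, next-state) pairs. Purely illustrative, not TestCompass's format.
MODEL = {
    "start":     [("enter credentials", "login")],
    "login":     [("submit valid", "dashboard"), ("submit invalid", "error")],
    "error":     [("retry", "login")],
    "dashboard": [],
}

def paths_from(model, node, used=frozenset()):
    """All maximal paths from `node` that never reuse an edge."""
    edges = [(a, d) for a, d in model[node] if (node, a, d) not in used]
    if not edges:
        return [[]]
    paths = []
    for action, dst in edges:
        for tail in paths_from(model, dst, used | {(node, action, dst)}):
            paths.append([(node, action, dst)] + tail)
    return paths

def edge_coverage_suite(model, start):
    """Greedily pick paths until every reachable edge is traversed."""
    all_edges = {(s, a, d) for s, es in model.items() for a, d in es}
    candidates = paths_from(model, start)
    suite, covered = [], set()
    while covered != all_edges:
        best = max(candidates, key=lambda p: len(set(p) - covered))
        if not set(best) - covered:
            break  # remaining edges are unreachable from `start`
        suite.append([action for _, action, _ in best])
        covered |= set(best)
    return suite

# Each generated case is a sequence of actions a tester could follow.
for case in edge_coverage_suite(MODEL, "start"):
    print(" -> ".join(case))
```

Node or path coverage would work the same way with a different stopping criterion; the point is that regenerating the suite after a model change is mechanical rather than manual.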


I already wrote something about this here:

In general I differentiate:

  • often repeated tests aka regression tests:
    • Here we have detailed manuals in Confluence and, for execution, create test tickets in Jira where we paste a checklist. Basically we teach testers and trust them, as they get better, to do the right things.
    • I guess this comes closest to what you describe?
  • testing new features/tickets, basically discovering the functions for the very first time:
    • This is basically an iterative process. I note what I plan to test (based on risks) => I test => I note my findings and, if necessary, discuss them with others => repeat
    • I find typical scripts/manuals wasteful and limiting here. As this is mostly about discovery, you cannot plan every step’s details ahead of time. If you believe you can, and do so, you limit the possibilities of discovering potential problems.

EDIT: So technically I really seldom write “test cases” aka “test manuals”. If I do write them, I don’t follow the industry standard.


Can you clarify if you are talking about automated scripts here?

Are the screenshots used in Visual comparison tests?


How do you write test cases?

I don’t.


Thanks for pointing out this frequently occurring ambiguity in testing terminology.
I see how one could think they mean automation code.
I would also like to have clarity here.

I guessed that he is not talking about automation, but about things I call “test manuals”: a list of instructions (typically Action and Expected) which a person should execute, adding their observations.


Apologies, I’ve clarified in the original post, but I was talking about manual test steps. The screenshots are to identify what the expected result should look like.


Yes you’re right, I was talking about manual test steps, I’ve seen steps and scripts used, but if scripts are more relevant to Automation, I may start just using ‘steps’.


Is TestCompass like DesignWise? As I understand, don’t they generate test scenarios in a test model rather than specific manual test steps?


I do not get your intention here, as this would be very ambiguous to me.
I suggest always explicitly stating, when using ‘test/s’, ‘scripts’, ‘cases’ or ‘steps’, whether you refer to automation/code/development or to manual execution of something. These phrases on their own mean different things to different people.


@sebastian_solidwork Fair enough. I only just started as a Tester recently so still getting my head around the different phrases. Asking various people seems to elicit different answers, which perfectly describes what you suggested, thanks.


You are welcome in the fascinating testing universe :slight_smile:


We use a good old Excel spreadsheet! (I do mostly manual testing, step by step) The spreadsheet has macros that build a dashboard with graphs on the first sheet.


Hi @baysha, can you elaborate on this briefly, please? I’m interested in some of the details behind this statement (how you justify this to your superiors or your management), if you understand me. Thank you.


The short answer is “it depends”!

If it’s a bug fix then I’ll usually make notes in the Jira or Azure DevOps ticket, as it’s (hopefully) a one-off test and not going to need to be repeated.

If it’s a new feature it’s usually worth writing test cases to ensure you’ve sufficiently covered the feature (or amending existing scripts if it’s a change to existing functionality). I’d also check and update any regression/smoke test scripts used in manual testing for that area, along with automated tests.

We don’t go heavy on test cases, just enough for what we need to prove successful implementation of a feature and encourage testers to use their imagination and initiative (otherwise it can turn into a documentation exercise where exploratory testing gets forgotten and less experienced testers forget to test outside of the script).


I don’t write them. It’d be an exception if I did.

First don’t worry about the terminology, everyone uses different terms. I don’t use “manual”, for example, because I figure all testing uses tools of all kinds, and it makes automated testing sound more special or capable than it is. But I also get what people are trying to say most of the time, so I can get a conversation done just fine.

Test Cases
Writing down test steps for someone else to perform is a very costly, tricky and error-prone activity. It’s where someone has a test strategy, so they write down what they think they might want to do to fulfil that strategy, so that someone else can interpret what they said, so they might understand what the author thought they might want to do to fulfil a strategy they don’t understand.

  • Writing things down takes time
  • Executing more steps takes more time
  • Filling in each step takes more time
  • Maintaining the cases takes time
  • Every detail you include can be interpreted differently
  • Every change made in the product has to be reflected in the details you add, or it’ll take multiple people’s time to compare, decide and update them
  • Focusing on details causes you to miss problems
  • Following tedious instructions reduces engagement and attention
  • If you follow instructions you cannot follow up on new ideas, or use what you’ve learned to create new cases, without going outside of the cases. You also can’t ignore what seems low value or outdated without failing to complete them.

Forcing yourself to repeat your steps reduces the chance of you finding anything new, so more detail encourages people to deviate less and find less. Even if you believe that it’ll increase the integrity of the tests and make them more accurate, there’s no guarantee that people will understand or interpret them the same way, or that the people who wrote them will still be working there.

Computers are for repeating tedious instructions; people are hired to think. You want to take advantage of their ability to learn quickly, see patterns, and interpret their surroundings through an astoundingly complex set of heuristics and models to come to useful conclusions, all things that humans can both excel at and improve upon. Hiring people to be computers is wasteful, both in terms of the extremely high cost and the waste of the power and value of humanity.

My approach is:

  • Never write down a case unless it becomes necessary. It’s costly and problematic.
  • State what I want from a strategy, not what someone else should do to try to fulfil it
  • Stay as high-level as I can, include detail when it’s necessary for clarification
  • Link to resources
  • Only note and report on what’s demanded of me and cannot be negotiated, necessary for the current process, or useful in context

So if we’re going with a higher-level, human-based, defocus-friendly, deformalized system then it will fall upon testers to properly investigate and learn your product. If your product, or any part of it, is complex then that complexity should be a red light going off in the tester’s mental dashboard. Complexity is a risk. A good tester will be looking for any way to make that complexity more approachable: manuals, documentation, training, diagrams, models, maps, websites, comparable products, whatever. One way is to put testers in front of the product and have them engage with it.

You also want insight into what’s going on, so push for logging, distinct output, strong oracles, easy access to data, the ability to control and generate test data, or anything else that can control state or affect configuration. And get involved early, to see the design and have a say in testability.

If I feel like I need low level steps or screenshots there are two questions I can ask:

  1. Why do I feel like I need these?
  2. Do they have to go into test cases?

Could you replace the steps with learning the product? A manual? Some training? A better or different strategy? Having better control over product state like save states or test data setups? Do the screenshots need to be in test cases, or can they exist in notes, knowledge bases, etc?

Are you using the test cases to help testing, or using them to tell people what to do? If I’m using the cases to be told how to use the product I could be much more effective, or effective at all, if I just learned to use the product. I can leverage my understanding of the product and its context to evaluate risk and look for problems much more effectively.

Eventually you should end up with someone who knows why they’re there, with enough skill and knowledge to decide how to go about testing, so they don’t have to be told exactly what to do, and it will be much less crushing to the soul as a free bonus.

That makes all the formalisation unnecessary, everyone happier, everything cheaper, and you’ll find more problems, faster.


can you elaborate this please in short terms

Simple. I (usually) don’t use test cases at all in my testing. Never needed them (except where we need to coordinate testing between teams or with people outside our company). My managers don’t care as long as I get the work done.

I’m not saying it’s a perfect approach but it works quite well for us.


@danirons …agree with exactly what @kinofrost says here. :+1:


Really interesting, thanks for this post. Would I be right in thinking that your approach is more like exploratory testing? Forgive me if that seems a bit naive, I’m only able to really refer to the typical test techniques and standard processes that I’ve heard of so far.

How do you deal with traceability, documenting and regression tests for example? Thanks.


Exploratory testing generally means two things.

The people who developed and expounded the term look at all testing as exploratory, as their definition of testing includes it. It then becomes about what scripting you choose to employ, where a script is anything that influences your testing but that you don’t have control or choice over, one example being instructions that you must follow, like written test case steps. The idea is that such scripts can be helpful, but they are additions to testing, not replacements for it.

There’s also what you probably see more written about, Exploratory Testing (I use capital letters to differentiate it), which is generally some time a tester puts aside to reduce the amount of time, energy and focus they spend on scripted factors. I don’t personally think the term is particularly helpful, because I prefer to think of myself as the tester, and anything I want to use as a cost/benefit decision. I don’t put time aside to take responsibility for my testing, I just do it, whether that involves scripts or not. I don’t consider automation to be software that does testing; I see it as a tool for which the inputs are a responsibility, questioning the purpose of the internals is a responsibility, considering the risks and limitations is a responsibility, and interpreting the output is a responsibility. I will also basically never choose to use written step-by-step instructions because I see them as having enormous cost and risk with little benefit. Also I’m an engineer - if I stop solving problems I start creating them.

Forgive me if that seems a bit naive

It doesn’t. The field is rife with both good ideas and many ideas without much research. It’s confusing and difficult, and the industry can be very tolerant of mediocrity, usually through ignorance rather than design, so we have an ocean of possibilities to consider. I spent a decade working in and on the field of testing, this is just the stuff I’ve collected while I was there. Ask me about the history of fashion and I will be asking what a culotte is - which I also had to look up the spelling for and still didn’t check the meaning. The worst part about testing is when you find out there’s a lifetime of reading to do, but luckily, if you like testing, it becomes the best part.

Incidentally I’m obviously not the end of the conversation for testing, you should take ideas from everywhere and make informed decisions.

Replacing Instructions

Traceability is basically being able to tie what you’re doing to why you’re doing it. Going off test instructions means that someone else has made the connection between requirements (written/explicit or otherwise) and the testing. The problem is that if you don’t take responsibility for that connection you cannot understand what you’re trying to do, which limits your ability to think of new problems or otherwise spend your time wisely. That means each tester has to be able to take requirements (written/explicit or otherwise) and make their own decisions about things like coverage. It becomes part of the job.

If you need to communicate what needs to be done, you could do that via charters, which are like high-level instructions that focus on purpose and risk. A charter might read “Test that the login screen rejects non-valid logins. Focus on security concerns and other misuse” or, honestly, anything that you might need to communicate to someone to look into. The tester then tests against that charter, as their mission, with their domain knowledge and contextual understanding, and can tie back what they choose to do to their mission, and also to wider requirements. If you’re interested in being able to better trace the value of what you are doing while testing to your overall mission then I would recommend test framing as a place to start.

The advantages here are enormous, as testers can be given simpler instructions, at any level, and good testing can come out of it. If you have a hierarchical system of testers you will still need the upper tier to be good communicators with a good understanding of risk and requirements, but the dangers of all of these are lessened. Coverage becomes adaptable, and self-healing as more problems are found. Risks become contextual rather than general, meaning you waste less time testing pointless things. The responsibility for how things are done shifts to skilled human testers, and allows them to use what they learn to improve what they do.

If you’re trying to move to a more deformalised system there are a few ways to go about it. You could move towards checklists, down the formality scale, including notes for anything that feels particularly important for whatever reason; this also helps us to think about risk more deeply. You could build some kind of coverage map, or move directly to charters. It depends on your situation.

Session-Based Test Management is one way to document and manage testing. I tend to use a session sheet to document my own testing, but rarely use associated metrics and whatnot because it’s time I could be spending doing testing. Being able to communicate the story of what you did, what you know, what could be a problem, what you need, etc is its own skill, but comes pretty quickly. I used to write my notes up in OneNote, export as a PDF and attach it to a JIRA ticket, for one example, and this seemed to work well.

Another advantage of these free form notes is that I can use whatever I like to guide my testing instead of instructions. Charters, checklists, risk catalogues, user stories, older charters or ones from someone else. I can also record notes however I like. Screenshots, animated GIFs, screen recordings, associated test data, database records, anything that seems pertinent. If I wanted to and I had the setup to do so I could describe a test I did by including a virtual machine with the exact setup in it and a video of me doing it. That’s not usually worth doing, but it shows the possibilities.

Regression tests usually feel like they’re more powerful than they are. Our ability to repeat a test is far from absolute. Repeatability is its own topic, but if you’re trying to deformalise regression testing, I always think the conduit between formalism and informalism is purpose. Translate the cases into purpose - what risks are the cases trying to mitigate? Sometimes you’ll find there is no purpose, sometimes you’ll find that they automate very easily, and sometimes you’ll find that your testing is insufficient or wrong. When you decide on a system to mitigate change risk in a sensible way, that will cut down on attempts to retest every corner of your product, and you can cover purpose instead.

Don’t forget that while manual cases look appealing, and comforting, the risks still exist, in terms of miscommunication, time expense, limiting exploration and so on. You’re not replacing a perfect system. You need to be able to let go of attempting to replicate situations with cases, and move towards trusting the ability of the testers. A good way is to communicate risk and mission more clearly, perhaps even with a checklist. You will also need testers who commit to being good at testing, because it will take motivation for some people to go off-script. It is more engaging and fun, though.