How do *you* write test cases?

Since starting my Testing career at my company I've found there to be a lot of differences in how test cases are written, with some very passionate defences from the Testers. As it's a large company there are different projects, each with their own Test teams and their particular processes.

One side argues that the manual test steps should be fairly high-level for maintainability reasons and that screenshots are a big no-no. But the Test team I am with argues that, with the complexity of this specific project, lower-level steps and screenshots are worth the extra time it may take to amend test cases further down the line. (It is a notoriously complex system within the company.)

I can definitely see both sides of the argument, but I'm interested to know what you folks think about this and how others approach test cases.

EDIT: To clarify, I am talking about manual test steps. I thought ISTQB referred to steps as 'scripts', so I used that instead, but I might've been wrong there… Anyway, I've just re-edited my post.

6 Likes

I have started to generate test cases using Model-Based Testing.

I generate a model in a model-based testing tool (I use TestCompass for that).

From it, I can generate test cases automatically based on Node, Edge, Multiple Conditions, or Path Coverage.

It saves me a hell of a lot of time, and my team is also happy with it as they know that my tests can be updated automatically if my model is updated.
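For anyone unfamiliar with the idea, here is a minimal sketch of what "generating test cases from a model" can mean in practice. It is not TestCompass itself; the login-flow model, node names and helper function are all made up for illustration. Each path through the model (path coverage) becomes one sequence of manual steps:

```python
# Hypothetical example: derive test cases from a model via path coverage.
# The model is a directed graph of states; each edge carries a tester action.

MODEL = {
    "start":     [("open the login page", "login")],
    "login":     [("enter valid credentials", "dashboard"),
                  ("enter invalid credentials", "error")],
    "error":     [("retry with valid credentials", "dashboard")],
    "dashboard": [("log out", "done")],
    "done":      [],
}

def paths(model, node="start", end="done", seen=()):
    """Yield every simple path (as a list of actions) from node to end."""
    if node == end:
        yield []
        return
    for action, nxt in model[node]:
        if nxt in seen:                      # skip cycles
            continue
        for rest in paths(model, nxt, end, seen + (node,)):
            yield [action] + rest

for i, steps in enumerate(paths(MODEL), start=1):
    print(f"Test case {i}:")
    for step in steps:
        print(f"  - {step}")
```

Node or edge coverage would pick a smaller subset of those paths; the point is that the steps are regenerated from the model rather than maintained by hand.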

4 Likes

I've already written something about this here:

In general I differentiate between:

  • often repeated tests aka regression tests:
    • Here we have detailed manuals in Confluence and, for execution, create test tickets in Jira into which we paste a checklist. Basically we teach testers and, as they get better, trust them to do the right things.
    • I guess this comes closest to what you describe?
  • testing new features/tickets, basically discovering the functions for the very first time:
    • This is basically an iterative process. I note what I plan to test (based on risks) => I test => I note my findings and, if necessary, discuss them with others => repeat
    • I find typical scripts/manuals wasteful and limiting here. As this is mostly about discovery, you cannot plan every step in detail ahead of time. If you believe you can, and do so, you limit the possibilities of the discovery (of potential problems).

EDIT: So technically I really seldom write "test cases" aka "test manuals". When I do write them, I don't follow the industry standard.

6 Likes

Can you clarify if you are talking about automated scripts here?

Are the screenshots used in Visual comparison tests?

3 Likes

How do you write test cases?

I donā€™t.

5 Likes

Thanks for pointing out this often-occurring ambiguity of many phrases in testing.
I see how one can think that they mean automation code.
I also would like to have clarity here.

I guessed that he is not talking about automation, but about things I call "test manuals": a list of instructions (typically Action and Expected) which a person should execute, adding their observations.

3 Likes

Apologies, I've clarified in the original post, but I was talking about manual test steps. The screenshots are to identify what the expected result should look like.

2 Likes

Yes, you're right, I was talking about manual test steps. I've seen both 'steps' and 'scripts' used, but if 'scripts' is more relevant to automation, I may start just using 'steps'.

3 Likes

Is TestCompass like DesignWise? As I understand it, don't they generate test scenarios from a test model rather than specific manual test steps?

2 Likes

I don't get your intention here, as this would be very ambiguous to me.
I suggest always explicitly stating, when using 'test(s)', 'scripts', 'cases' or 'steps', whether you are referring to automation/code/development or to the manual execution of something. These phrases on their own mean different things to different people.

@sebastian_solidwork Fair enough. I only started as a Tester recently, so I'm still getting my head around the different phrases. Asking various people seems to elicit different answers, which perfectly illustrates what you suggested, thanks.

You are welcome in the fascinating testing universe :slight_smile:

3 Likes

We use a good old Excel spreadsheet! (I do mostly manual testing, step by step.) The spreadsheet has macros that build a dashboard with graphs on the first sheet.
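For the curious, a rough Python analogue of that kind of macro-built dashboard might look like the sketch below. It's not the actual spreadsheet or its macros; the file name and the "Status" column are assumptions, and it simply counts results and saves a chart:

```python
# Hypothetical sketch: summarise a test-case spreadsheet into a simple dashboard.
# Assumes a file "test_cases.xlsx" with a "Status" column (Passed/Failed/Blocked).

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("test_cases.xlsx")        # requires openpyxl for .xlsx files
counts = df["Status"].value_counts()

counts.plot(kind="bar", title="Test execution status")
plt.tight_layout()
plt.savefig("dashboard.png")                 # the "dashboard" graph
print(counts)
```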

2 Likes

Hi @baysha, can you elaborate on this in short terms please? I'm interested in some details behind this statement (how you justify this to your superior or your management), if you understand me? Thank you.

2 Likes

The short answer is "it depends"!

If it's a bug fix then I'll usually make notes in the Jira or Azure DevOps ticket, as it's (hopefully) a one-off test and not going to need to be repeated.

If it's a new feature, it's usually worth writing test cases to ensure you've sufficiently covered the feature (or amending existing scripts if it's a change to existing functionality). I'd also check and update any regression/smoke test scripts used in manual testing for that area, along with the automated tests.

We don't go heavy on test cases, just enough to prove successful implementation of a feature, and we encourage testers to use their imagination and initiative (otherwise it can turn into a documentation exercise where exploratory testing gets forgotten and less experienced testers forget to test outside of the script).

3 Likes

I don't write them. It'd be an exception if I did.

Words
First, don't worry about the terminology; everyone uses different terms. I don't use "manual", for example, because I figure all testing uses tools of all kinds, and the word makes automated testing sound more special or capable than it is. But I also get what people are trying to say most of the time, so I can get a conversation done just fine.

Test Cases
Writing down test steps for someone else to perform is a very costly, tricky and error-prone activity. It's where someone has a test strategy, so they write down what they think they might want to do to fulfil that strategy, so that someone else can interpret what they said, so they might understand what the author thought they might want to do to fulfil a strategy they don't understand.

  • Writing things down takes time
  • Executing more steps takes more time
  • Filling in each step takes more time
  • Maintaining the cases takes time
  • Every detail you include can be interpreted differently
  • Every change made in the product has to be reflected in the details you add, or it'll take multiple people's time to compare, decide and update them
  • Focusing on details causes you to miss problems
  • Following tedious instructions reduces engagement and attention
  • If you follow instructions you cannot follow up on new ideas, or use what you've learned to create new cases, without going outside of the cases. You also can't ignore what seems low value or outdated without failing to complete them.

Forcing yourself to repeat your steps reduces the chance of finding anything new, so more detail encourages people to deviate less and find less. Even if you believe that it'll increase the integrity of the tests and make them more accurate, there's no guarantee that people will understand or interpret them the same way, or that the people who wrote them will still be working there.

Computers are for repeating tedious instructions; people are hired to think. You want to take advantage of their ability to learn quickly, see patterns, and interpret their surroundings through an astoundingly complex set of heuristics and models to come to useful conclusions, all things that humans can both excel at and improve upon. Hiring people to be computers is wasteful, both in terms of the extremely high cost and the waste of the power and value of humanity.

My approach is:

  • Never write down a case unless it becomes necessary. It's costly and problematic.
  • State what I want from a strategy, not what someone else should do to try to fulfil it
  • Stay as high-level as I can, including detail only when it's necessary for clarification
  • Link to resources
  • Only note and report on what's demanded of me and can't be negotiated, what's necessary for the current process, or what's useful in context

Complexity
So if we're going with a higher-level, human-based, defocus-friendly, deformalised system then it will fall upon testers to properly investigate and learn your product. If your product, or any part of it, is complex then that complexity should be a red light going off in the tester's mental dashboard. Complexity is a risk. A good tester will be looking for any way to make that complexity more approachable: manuals, documentation, training, diagrams, models, maps, websites, comparable products, whatever. One way is to put testers in front of the product and have them engage with it. You want to have insight into what's going on, so look for logging, distinct output, strong oracles, easy access to data, ways of controlling and generating test data, or anything else that can control state or affect configuration. Being involved early, to see the design and have a say in testability, helps too.

If I feel like I need low-level steps or screenshots, there are two questions I can ask:

  1. Why do I feel like I need these?
  2. Do they have to go into test cases?

Could you replace the steps with learning the product? A manual? Some training? A better or different strategy? Having better control over product state, like save states or test data setups? Do the screenshots need to be in test cases, or can they exist in notes, knowledge bases, etc.?

Are you using the test cases to help testing, or using them to tell people what to do? If I'm using the cases to be told how to use the product, I could be much more effective, or effective at all, if I just learned to use the product. Then I can leverage my understanding of the product and its context to evaluate risk and look for problems much more effectively.

Eventually you should end up with someone who knows why they're testing, with enough skill and knowledge to decide how to go about it, so they don't have to be told exactly what to do, and as a free bonus it will be much less crushing to the soul.

That makes all the formalisation unnecessary, everyone happier, everything cheaper, and you'll find more problems, faster.

7 Likes

can you elaborate on this in short terms please

Simple. I (usually) don't use test cases at all in my testing. Never needed them (except where we need to coordinate testing between teams or with people outside our company). My managers don't care as long as I get the work done.

I'm not saying it's a perfect approach, but it works quite well for us.

4 Likes

@danirons …I agree with exactly what @kinofrost says here. :+1:

3 Likes

Really interesting, thanks for this post. Would I be right in thinking that your approach is more like exploratory testing? Forgive me if that seems a bit naive; I'm only really able to refer to the typical test techniques and standard processes that I've heard of so far.

How do you deal with traceability, documentation and regression tests, for example? Thanks.

2 Likes

Exploratory
Exploratory testing generally means two things.

The people who developed and expounded the term look at all testing as exploratory, as their definition of testing includes it. It then becomes about what scripting you choose to employ, where a script is something that influences your testing but that you don't have control or choice over, one example being instructions that you must follow, like written test case steps. The idea is that such scripts can be helpful, but they are additions to testing, not replacements for it.

There's also what you probably see more written about, Exploratory Testing (I use capital letters to differentiate it), which is generally some time a tester puts aside to reduce the amount of time, energy and focus they use on scripted factors. I don't personally think the term is particularly helpful, because I prefer to think of myself as the tester, and of anything I want to use as a cost/benefit decision. I don't put time aside to take responsibility for my testing, I just do it, whether that involves scripts or not. I don't consider automation to be software that does testing; I see it as a tool for which the inputs are a responsibility, questioning the purpose of the internals is a responsibility, considering the risks and limitations is a responsibility, and interpreting the output is a responsibility. I will also basically never choose to use written step-by-step instructions, because I see them as having enormous cost and risk with little benefit. Also, I'm an engineer - if I stop solving problems I start creating them.

Forgive me if that seems a bit naive

It doesn't. The field is rife with both good ideas and many ideas without much research. It's confusing and difficult, and the industry can be very tolerant of mediocrity, usually through ignorance rather than design, so we have an ocean of possibilities to consider. I spent a decade working in and on the field of testing; this is just the stuff I've collected while I was there. Ask me about the history of fashion and I will be asking what a culotte is - which I also had to look up the spelling for and still didn't check the meaning. The worst part about testing is when you find out there's a lifetime of reading to do, but luckily, if you like testing, it becomes the best part.

Incidentally, I'm obviously not the end of the conversation for testing; you should take ideas from everywhere and make informed decisions.

Replacing Instructions

Traceability is basically being able to tie what you're doing to why you're doing it. Going off test instructions means that someone else has made the connection between requirements (written/explicit or otherwise) and the testing. The problem is that if you don't take responsibility for that connection, you cannot understand what you're trying to do, which limits your ability to think of new problems or otherwise spend your time wisely. That means each tester has to be able to take requirements (written/explicit or otherwise) and make their own decisions about things like coverage. It becomes part of the job.

If you need to communicate what needs to be done, you could do that via charters, which are like high-level instructions that focus on purpose and risk. A charter might read "Test that the login screen rejects invalid logins. Focus on security concerns and other misuse" or, honestly, anything that you might need to communicate to someone to look into. The tester then tests against that charter, as their mission, with their domain knowledge and contextual understanding, and can tie what they choose to do back to their mission, and also to wider requirements. If you're interested in better tracing the value of what you are doing while testing to your overall mission, then I would recommend test framing as a place to start.

The advantages here are enormous, as testers can be given simpler instructions, at any level, and good testing can come out of it. If you have a hierarchical system of testers you will still need the upper tier to be good communicators and have a good understanding of risk and requirements, but the dangers of all of these are lessened. Coverage becomes adaptable, and self-healing as more problems are found. Risks become contextual rather than general, meaning you waste less time testing pointless things. The responsibility for how things are done shifts to skilled human testers, and allows them to use what they learn to improve what they do.

If you're trying to move to a more deformalised system there are a few ways to go about it. You could move towards checklists, down the formality scale, including notes for anything that feels particularly important for whatever reason; this also helps us to think about risk more deeply. You could do some kind of coverage map, or move directly to charters; it depends on your situation.

Session-Based Test Management is one way to document and manage testing. I tend to use a session sheet to document my own testing, but rarely use the associated metrics and whatnot, because it's time I could be spending doing testing. Being able to communicate the story of what you did, what you know, what could be a problem, what you need, etc. is its own skill, but it comes pretty quickly. I used to write my notes up in OneNote, export them as a PDF and attach it to a JIRA ticket, for one example, and this seemed to work well.

Another advantage of these free-form notes is that I can use whatever I like to guide my testing instead of instructions: charters, checklists, risk catalogues, user stories, older charters or ones from someone else. I can also record notes however I like: screenshots, animated GIFs, screen recordings, associated test data, database records, anything that seems pertinent. If I wanted to, and had the setup to do so, I could describe a test I did by including a virtual machine with the exact setup in it and a video of me doing it. That's not usually worth doing, but it shows the possibilities.

Regression tests usually feel like they're more powerful than they are. Our ability to repeat a test is far from absolute. Repeatability is its own topic, but if you're trying to deformalise regression testing, I always think that the conduit between formalism and informalism is purpose. Translate the cases into purpose - what risks are the cases trying to mitigate? Sometimes you'll find there is no purpose, sometimes you'll find that they automate very easily, and sometimes you'll find that your testing is insufficient or wrong. When you decide on a system to mitigate change risk in a sensible way, that will cut down on attempts to retest every corner of your product, and you can cover purpose instead.

Don't forget that while manual cases look appealing and comforting, the risks still exist, in terms of miscommunication, time expense, limiting exploration and so on. You're not replacing a perfect system. You need to be able to let go of attempting to replicate situations with cases, and move towards trusting the ability of the testers. A good way is to communicate risk and mission more clearly, perhaps even with a checklist. You will also need testers who commit to being good at testing, because it will take motivation for some people to go off-script. It is more engaging and fun, though.

3 Likes