Test design methods for coming up with test scenarios

Hi all,

I’m a QA Manager training up a young team with little QA experience (this is their first job in QA). One of the skills I would like them to develop is test design: the ability to design test cases that give good coverage without duplication.

When I first started out, I used to write too many test cases with a lot of overlap, which isn’t necessarily a bad thing, since over-testing beats under-testing, but I soon realised it wasn’t time-effective for regression.

Here is how I approach test design for a feature or user story:

1.) Collect requirements
2.) Map out test scenarios in a mind map (I use Miro) with an open mind and no limits, capturing as many scenarios as I can think of. A brain dump.
3.) Use the requirements as a guide for positive scenarios (happy path) and as inspiration for other scenarios.
4.) Then map out any non-happy-path scenarios (things that most people wouldn’t do).
5.) Review and map any non-functional scenarios if needed.
6.) Review all of the tests to decide whether they are really needed (an edit pass). For the ones that are, decide what type of testing they belong to, feature or regression, and group them accordingly. Feature tests are one-off tests for the sprint, whereas regression tests are kept for maintenance going forward.
7.) Once they’re written up in test case management, set the priority or regression pack field on each regression test, so you design regression packs as you create your tests.
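As a rough sketch of steps 6 and 7, the grouping and priority fields can even be made executable, so the regression pack falls out of the tags rather than being assembled by hand afterwards. This is only an illustration of the idea, not part of Chris’s process; the field names (`group`, `priority`) and cases are assumptions:

```python
# Minimal sketch: tag each test case with its group and priority as you
# write it, then derive the regression pack from the tags.
# Field names and example cases are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Case:
    title: str
    group: str      # "feature" (one-off, this sprint) or "regression"
    priority: int   # 1 = highest; lets you carve out smaller packs

test_cases = [
    Case("Login succeeds with valid credentials", "regression", 1),
    Case("Login rejects an expired password", "regression", 2),
    Case("New banner text matches the sprint mock-up", "feature", 3),
]

def regression_pack(cases, max_priority=2):
    """Select regression tests up to a priority cut-off."""
    return [c.title for c in cases
            if c.group == "regression" and c.priority <= max_priority]

print(regression_pack(test_cases))  # both regression cases, not the feature one
```

The same idea maps onto most test management tools (or pytest markers): because the group and priority are recorded at creation time, a smoke pack is just `max_priority=1` instead of a separate curation exercise.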

This is the approach I’ve used over the years. How do you approach your test design?

Let me know your thoughts.

Thanks,

Chris

Have a read over this. Probably some overlap with your thread.

This is probably the first time in a few years (since my last framing of the approach) that I’ve thought it through and written it out. In simple terms, the flow goes like this for the products, projects, and companies I’ve worked with in recent years (a new company might demand slight changes to the approach, or some incompatibilities would arise that need solving):

Context familiarity:
Learn, chat, peek, discuss, read, watch, question, etc… as many of the things I see around the idea being developed, and its growth into the product, across several stages.
By things I mean: a solution design document, a UI mock, a schema, a whiteboard discussion, an initial requirement from stakeholders, follow-up changes or demands from stakeholders, a prototype started by devs, similar ideas/features in the product, the company, or competing products, an initial spec sketch and review with business, a solution review meeting with several technical people and/or business, getting access to, digging through, touring, experimenting with, and learning about the components that are already there and should be used (internal or external systems), and so on. Be part of as many of these as possible. In some projects you get a few; in others, you could be part of hundreds. Be curious, helpful, and trusted by others, and you’ll get involved in more.

Setup:
Assuming something is currently being developed, or a first version exists. With knowledge of the context, I start collecting and configuring what is actually there: specs, requirements, systems, tools, docs, code, project environments. I interview the devs and collect the latest undocumented demands or changes. I make notes of all of this in whatever form fits: a text doc, a note, a mind map, a wiki…

Gap and risk analysis:
Make some sort of product coverage outline while touring the product/feature (using a heuristic like FCC CUTS VIDS), and start asking yourself interesting questions (the 5 Ws) that might reveal potential problems for the value of the product or project. Get the risks out at this stage, and/or immediately after in a dedicated session. Also note gaps: things that might have been missed, left implicit, or misinterpreted.

Designing the approach and testing:
Use the questions and risks to drive the generation of test ideas that could uncover useful information. Organise testing in sessions (SBTM or a variation of it). Go through a heuristic strategy overview, or create one (HTSM or similar), if needed, to remind yourself to be structured. Get regular feedback from devs, business, and product managers on your approach, progress, and outcomes (issues with testing and the product, plus questions). Use the additional information from them to feed your learning engine and drive the generation of more ideas. Any version of explicit requirements is just a single data point to be used in testing, a fallible oracle; it’s recommended to use dozens of other oracles as well.

Confidence:
Repeat several iterations of the above, until you and your team are confident you have dealt with, and understood the outcomes of, the most important risks in the time available.

Depending on the number of features, products, projects, or teams you’re in as a tester, some activities might have to be skipped, some done quickly, some shortcuts taken, some things delegated to others, or some things done in a slightly different order.


A lot of wise things have already been said: requirement-based tests, error cases, and so on.

Just two small remarks. First, the evolution of test cases might be somewhat dynamic if you are in an agile group with a stakeholder who wants a lot of changes.

Second, when you write the test cases, give some thought to what one or several regression test suites would look like. I try to write those suites as the project evolves.
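One way of growing suites as the project evolves might look like the sketch below: each sprint that touches an area registers its keeper tests in that area’s suite, so the suites are designed as you go rather than assembled at the end. The suite and test case names here are made-up illustrations, not anything from the thread:

```python
# Sketch: regression suites that grow sprint by sprint.
# Each suite is a named, ordered, duplicate-free list of test case IDs.
# All names below are illustrative assumptions.

from collections import defaultdict

suites = defaultdict(list)

def add_to_suite(suite, case_id):
    """Register a test case in a named regression suite (idempotent)."""
    if case_id not in suites[suite]:
        suites[suite].append(case_id)

# Sprint 1 delivers login; its stable tests go straight into the auth suite.
add_to_suite("auth", "TC-101 valid login")
add_to_suite("auth", "TC-102 locked account")

# Sprint 2 changes login again: one new case, one already present.
add_to_suite("auth", "TC-110 password reset")
add_to_suite("auth", "TC-101 valid login")  # already present, not duplicated

print(suites["auth"])  # three cases, no duplicates
```

The idempotent add is the point of the sketch: later sprints can re-register tests for areas they touch without anyone deduplicating the suite by hand.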