Learn to do requirement review better

Anyone know any good resources for learning/teaching how to do requirements reviews… what kind of things to look for? There seems to be plenty on “how to run a requirements review process” but not really on how to train people to be good at it. Like… make sure it’s testable, make sure you can understand what it means, etc.
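To make the “is it testable?” check concrete: one drill people use is scanning requirement statements for ambiguity “smells” (vague words that can’t be verified). A minimal sketch in Python — the smell list and the example requirements below are illustrative, not any standard:

```python
# Sketch of an "ambiguity smell" check for requirement statements.
# The smell list is illustrative only, not a standard catalogue.
import re

SMELLS = [
    "etc", "and/or", "as appropriate", "fast", "user-friendly",
    "easy", "some", "several", "if possible", "tbd", "robust",
]

def ambiguity_smells(requirement: str) -> list[str]:
    """Return the vague terms found in a requirement statement."""
    text = requirement.lower()
    return [s for s in SMELLS if re.search(r"\b" + re.escape(s) + r"\b", text)]

# Hypothetical requirements to review:
requirements = [
    "The system shall export the report as PDF within 5 seconds.",
    "The UI should be fast and user-friendly, support printing, etc.",
]
for r in requirements:
    found = ambiguity_smells(r)
    print("REVIEW:" if found else "OK:    ", r, found)
```

A tool like this only catches wording; a human reviewer still has to ask whether the statement could ever be checked against the running product.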

2 Likes

By content:
I suggest empathising with the user and imagining the future use of the feature.
What problem does the user have, and how do we intend to help them?
Testers especially should imagine what potential problems could occur that would keep the user from solving their problem.

By process:
An important point in general is that the team should have a good shared understanding. Look for signs that speak against that and bring them to the table.
You cannot get to 100%, but at least the more obvious differences should be cleared up.

3 Likes

Not so much a specific training tool, but the Feature Chat Sheet provides a good set of prompts for people to try out and get good at requirements reviews.

1 Like

I can’t not add Six Thinking Hats as a trick to have up your sleeve: Six Thinking Hats - Wikipedia . It uncovers loads of assumptions, but requires three or more people writing the requirements down for them to really be something a team can implement. To be fair, I’ve worked in very few teams where externally written requirements are worth actually committing to paper; they are hard.

I am creating a 5-part blog series on this topic.

This is part I: Requirement Gathering Blog Series, Part 1: Understanding Requirements - The Test Tribe

1 Like

As well as the excellent things people have already suggested, I find it helpful to think about users, data and interfaces with other code.

Think through the different kinds of user (personas, if you have them). How will each kind of user interact with the change? Will things get better/worse/different for them?

Think about the major lumps of data that already exist in the system and the changes needed to support the new requirements (at a high level, rather than details). Consider the life cycle of each lump of data - from the point it enters the system to when it leaves.

Think about the main interfaces with external code - calling their API, them calling our API, importing/exporting files etc. (Like with data, just at a high level.) Do these interfaces need to change to support the new requirements?

How will the new feature interact with existing features? Is there duplication? Which bits of e.g. user interfaces would be the best home for the changes?
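One way to make the data life-cycle prompt concrete is to sketch each lump of data as explicit states and allowed transitions, then ask “which requirement covers this transition?” for every arrow (and every missing one). A rough sketch — the order states here are hypothetical, just an example shape:

```python
# Illustrative sketch: the life cycle of one "lump" of data (a
# hypothetical customer order) as states, used as a review prompt.
from enum import Enum, auto

class OrderState(Enum):
    RECEIVED = auto()    # enters the system (API call, file import, ...)
    VALIDATED = auto()
    FULFILLED = auto()
    ARCHIVED = auto()    # leaves the system (export, retention purge, ...)

# Allowed transitions; a gap here is a prompt for a missing requirement
# (e.g. can a RECEIVED order ever be cancelled? Nothing below says so).
TRANSITIONS = {
    OrderState.RECEIVED: {OrderState.VALIDATED},
    OrderState.VALIDATED: {OrderState.FULFILLED},
    OrderState.FULFILLED: {OrderState.ARCHIVED},
    OrderState.ARCHIVED: set(),
}

def can_move(a: OrderState, b: OrderState) -> bool:
    """True if the sketched life cycle allows moving from state a to b."""
    return b in TRANSITIONS[a]
```

The value is in drawing the picture, not the code: every state and arrow (or absence of one) becomes a review question.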

1 Like

Here is a tip from the forum:

1 Like

I do exactly this, plus a checklist of ideas more specific to the risks (including previous problems) of the team.

1 Like

Being good at evaluating software in context is a lifetime skill, so it’s difficult to provide a quick answer. That’s one of the benefits of having a skilled tester. The first tip is to practice.

Documented requirements are often bad: hard to interpret, reliant on domain understanding, incorrect, conflicting, out of date and so on. People use their imagination to come up with these ideas, and people with important ideas don’t necessarily know how to formulate or communicate them. I think that’s a practical issue we solve by developing our understanding of requirements and risks as we progress. Not all requirements are equal, so find the ones that are complete necessities and start there. It’s useful in test framing to have contextual information on high-level requirements to direct testing to where it has most value. It isn’t, however, generally about predicting exactly what the end product will be in its entirety, so you could have higher expectations than necessary.

Remember that we don’t have all the requirements until the testing is complete. There will be tacit, unknown, unstated, unshared requirements of different understanding and availability to each person working on this project. The best way to get to a set of good requirements is for testers to work on less good requirements. It’s about refinement and evolution, not about stating everything up front.

You need to consider your context, where information can come from, and where gathering it might be practical. Make sure the right people are present or represented. You cannot have a tester advocate for testability without them being involved in that kind of design process. Customers, PMs, POs, sales, support, developers, testers, ops, UX - whoever might have a stake or input could be useful.

Consider reference material you already own. Sales materials, website claims, user manuals, code, code documentation, logs, bug reports, retrospectives.

Consider sources you have access to. Competitor’s products, compliance info, regulations documents.

From a testing perspective, knowing what we state that the product does or does not do is fine, but what’s more interesting is whether there are problems, and what damage those problems could cause. Requirements are one input to my consideration of risk. Flaws in the requirements are also a source of information about risk - if the requirements documents are confusing or conflicted then I’d be concerned about how the product is being developed, and I’d collect information and tell people who matter, because finding project issues is within my concern. Those people need information to make decisions about the product and project.

Testers should be in a position to help formulate good requirements documents: to help reduce uncertainty in the wants and desires of everyone involved in the complexity of software development, to point towards what’s important in development and toward a wider understanding of risk, and to go out and ask questions to help find and communicate perspective and knowledge. The idea is to come to an understanding of the direction of travel to make the journey easier, factually and emotionally where appropriate. In order to help those that define the product - those that design and build it - we as testers should understand it and the context around it as much as is practical. As such, if you have good testers it might pay to ask them for their input into requirements reviews, because if they’re good testers they have experience in evaluating risk in context. This will require giving the testers the documents in good time and then inviting them to the meeting, which sounds insanely obvious but experience tells me it is not.

Identify what’s an important requirement and what’s simply useful information. This will help to show you where the big risks are, what’s really important, and who we really care about.

When I’m considering risks and requirements I’ll refer to a lot of things to help inspire me. I usually begin with as much understanding as I reasonably can get about what the product is supposed to do in a general sense, who will buy it, and what that world is like. I’ll note down risks associated with the general situation. I use the question “what can go wrong here?” a lot, and the critical thinking “huh? really? so? and?” to ensure the information can somewhat be trusted. I look at the bug chain to consider weaknesses, failure points, situations, victims and problems. This is Inside-Out risk assessment, and I also use Outside-In risk assessment by considering possible risks that might match the situation. Quality criteria, generic risks like upstream dependencies, risk catalogues about this particular domain, previous issues, issues and failures other companies had (where available).

This whole process can scale up to a product kick-off in a general sense and down to the specifics of a function, and the sampling has to adjust to the situation to be pragmatic. Some of the information will influence design, guide development and build requirements documents. Some of it will be used in later kick-off meetings to prompt development that’s considerate of that risk. Some of it’s used in hands-on testing. Some of it goes into a risk catalogue document for this project. I take what I find and have conversations with people until we come up with requirements that make sense, but all of my findings still have value, including those that are refuted because I may have to change my thinking, distrust a source, or take a side on conflicting requirements.

Throughout this I’ll be able to add, remove, refute and question all of the stated requirements as suitable.

I’ll take any questions; there’s a lot I’m skimming over.

4 Likes

Finally, documents which contain requirements are just artifacts of communication.
What matters is that everyone understands, well enough, the problem to solve, the solution to implement and the work to do.

Do not concentrate too much on documents. What matters is who wants to communicate with you through those documents.

I strongly advocate for the agile principle “Individuals and interactions over processes and tools”.
Here I see requirements, specifically in written form, as part of processes.

3 Likes

You can partially automate requirements reviews with ScopeMaster (an AI requirements analyser). It also auto-generates test scenarios instantly from the requirements, fully traceable back to them. (NB: I work for ScopeMaster.)