Learn to do requirement review better

Being good at evaluating software in context is a lifetime skill, so it’s difficult to provide a quick answer. That’s one of the benefits of having a skilled tester. The first tip is to practice.

Documented requirements are often bad: hard to interpret, dependent on domain understanding, incorrect, conflicting, out of date, and so on. People use their imagination to come up with these ideas, and people with important ideas don’t necessarily know how to formulate or communicate them. I think that’s a practical issue we solve by developing our understanding of requirements and risks as we progress. Not all requirements are equal, so find the ones that are complete necessities and start there. In test framing it’s useful to have contextual information on high-level requirements to direct testing to where it has the most value. It isn’t, however, generally about predicting exactly what the end product will be in its entirety, so you may be holding the requirements to a higher standard than necessary.

Remember that we don’t have all the requirements until the testing is complete. There will be tacit, unknown, unstated, and unshared requirements, understood and available differently by each person working on the project. The best way to get to a set of good requirements is for testers to work on less good ones. It’s about refinement and evolution, not about stating everything up front.

You need to consider your context, where information can come from, and where gathering it might be practical. Make sure the right people are present or represented. A tester cannot advocate for testability without being involved in that kind of design process. Whoever might have a stake or input could be useful: customers, PMs, POs, sales, support, developers, testers, ops, UX.

Consider reference material you already own: sales materials, website claims, user manuals, code, code documentation, logs, bug reports, retrospectives.

Consider sources you have access to: competitors’ products, compliance information, regulatory documents.

From a testing perspective, knowing what we state the product does or does not do is fine, but what’s more interesting is whether there are problems, and what damage those problems could cause. Requirements are one input to my consideration of risk. Flaws in the requirements are also a source of information about risk: if the requirements documents are confusing or conflicting, then I’d be concerned about how the product is being developed, and I’d collect information and tell the people who matter, because finding project issues is within my concern. Those people need information to make decisions about the product and the project.

Testers should be in a position to help formulate good requirements documents: to reduce uncertainty in the wants and desires of everyone involved in the complexity of software development, to point towards what’s important in development and toward a wider understanding of risk, and to go out and ask questions that help find and communicate perspective and knowledge. The idea is to come to an understanding of the direction of travel, to make the journey easier, factually and emotionally where appropriate. To help those that define the product, and those that design and build it, we as testers should understand it and the context around it as much as is practical. So if you have good testers it might pay to ask them for their input into requirements reviews, because if they’re good testers they have experience in evaluating risk in context. This requires giving the testers the documents in good time and then inviting them to the meeting, which sounds insanely obvious but experience tells me it is not.

Identify what’s an important requirement and what’s simply useful information. This will help show you where the big risks are, what’s really important, and whom you really care about.

When I’m considering risks and requirements I’ll refer to a lot of things to help inspire me. I usually begin with as much understanding as I can reasonably get about what the product is supposed to do in a general sense, who will buy it, and what that world is like. I’ll note down risks associated with the general situation. I use the question “what can go wrong here?” a lot, and the critical-thinking prompts “huh? really? so? and?” to ensure the information can be somewhat trusted. I look at the bug chain to consider weaknesses, failure points, situations, victims, and problems. This is Inside-Out risk assessment; I also use Outside-In risk assessment by considering possible risks that might match the situation: quality criteria, generic risks like upstream dependencies, risk catalogues for the particular domain, previous issues, and issues and failures other companies have had (where available).
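As a loose illustration only (the `RiskNote` structure and all the field names here are my own invention, not a prescribed format), a note from that bug-chain style of Inside-Out analysis could be captured as simply as:

```python
from dataclasses import dataclass, field

@dataclass
class RiskNote:
    """One risk observation, following the bug-chain elements: weakness,
    failure point, situation, victim, problem."""
    weakness: str                 # what in the product could fail
    failure_point: str            # where or how the failure surfaces
    situation: str                # circumstances that trigger it
    victim: str                   # who is affected
    problem: str                  # the damage that results
    open_questions: list = field(default_factory=list)  # "huh? really? so? and?" checks

# A hypothetical example note against a generic upstream-dependency risk
note = RiskNote(
    weakness="payment gateway timeout not handled",
    failure_point="checkout confirmation page",
    situation="gateway responds slowly under peak load",
    victim="customer mid-purchase",
    problem="order charged but not recorded",
    open_questions=["Really? Has support actually seen reports of this?"],
)
print(note.problem)  # → order charged but not recorded
```

Notes like these can feed the conversations and risk catalogue described below, whatever form you actually keep them in.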

This whole process scales up to a product kick-off in a general sense and down to the specifics of a function, and the sampling has to adjust to the situation to be pragmatic. Some of the information will influence design, guide development, and build requirements documents. Some will be used in later kick-off meetings to prompt development that’s considerate of that risk. Some is used in hands-on testing. Some goes into a risk catalogue document for the project. I take what I find and have conversations with people until we come up with requirements that make sense, but all of my findings still have value, including those that are refuted, because I may have to change my thinking, distrust a source, or take a side on conflicting requirements.

Throughout this I’ll be able to add, remove, refute and question all of the stated requirements as suitable.

I’ll take any questions; there’s a lot I’m skimming over.
