Mapping Product Risk

One of the things I’m continually asked for as a test manager is a map of the risk in the product. The idea is that this will give greater transparency of “problematic areas” in need of rework, and allow teams to make better risk assessments and decisions going forward. Now, I have a few thoughts about this already (outlined below), but I do want to attempt it and see if there’s value to be had here. Thinking it through, the most basic level of “product risk”, as per the expectations of those asking for it, is…

Incidence of error VS severity of error VS visibility of error
so…
Number of bugs in an area VS how critical the area is to users VS existing tests/checks we run in the area

A couple of crude worked examples of that thinking…

Feature A is “low risk” because it historically has few bugs, moderate mission criticality (an error here would be unlikely to require immediate patching) and reasonable coverage.

Feature B is “high risk” because it historically has moderate levels of bugs, high mission criticality (a problem here requires immediate patching) and low coverage.
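
Just to make that crude model explicit, here’s a minimal sketch of the kind of scoring it implies. To be clear, the function, scales and numbers below are all invented for illustration - assumptions for the sake of the example, not anything we actually measure:

```python
# Purely illustrative sketch of the crude "risk map" scoring above.
# The scales, weightings and example numbers are invented, not real data.

def risk_score(bug_count: int, criticality: int, coverage: int) -> int:
    """Higher score = higher perceived risk.

    bug_count   - historic bugs recorded against the area
    criticality - 1 (cosmetic) .. 5 (needs immediate patching)
    coverage    - 1 (little/no checking) .. 5 (well covered)
    """
    # More bugs and higher criticality push risk up; coverage pulls it down.
    return bug_count * criticality * (6 - coverage)

# Feature A: few bugs, moderate criticality, reasonable coverage -> "low risk"
feature_a = risk_score(bug_count=3, criticality=3, coverage=4)    # 18

# Feature B: moderate bugs, high criticality, low coverage -> "high risk"
feature_b = risk_score(bug_count=10, criticality=5, coverage=2)   # 200

print(feature_a, feature_b)
```

Even in this toy form, the multiplication hides a lot of judgement calls about scales and weightings.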

I could proceed on that basis: map out the historic incidence of bugs, their significance, and our response to them. It would take a lot of time, and I’m concerned it would be a low-value exercise for a few reasons. Just because we’ve had a lot of bugs in an area, does that really make it high risk? What if that’s the only area where we’ve been adding features, for instance? This could perhaps be mitigated by dividing by points or stories delivered in an area (a rough sketch of that is below), but eh, the water gets muddy pretty quickly thinking along those lines. Also, maintaining a map like this is an ongoing expense - we have to keep pace with the new bugs over time.
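
For completeness, the normalisation I mean is just something like this - again, the area names and numbers are made up to show the shape of the idea, not taken from our product:

```python
# Hedged sketch: normalise historic bug counts by delivery volume, so an
# area only looks "risky" relative to how much change it has absorbed.
# Area names, bug counts and story counts are all invented.

areas = {
    "checkout":  {"bugs": 40, "stories_delivered": 80},
    "reporting": {"bugs": 12, "stories_delivered": 6},
}

for name, data in areas.items():
    bugs_per_story = data["bugs"] / max(data["stories_delivered"], 1)
    print(f"{name}: {bugs_per_story:.2f} bugs per story delivered")

# checkout has more bugs in raw terms, but reporting looks far worse per
# story delivered - which is exactly where the picture starts to get muddy.
```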

And that’s ignoring other issues: risk is not static, linear, or like-for-like comparable; it doesn’t live entirely in the product; and it isn’t the same for teams with different experience levels. There’s also the nagging question of whether a map even makes sense for the kind of data we’re describing here.

The problem I have, then, is low confidence that this is a valuable exercise, and limited options for running a “small scale” experiment to see whether it provides value (I can’t see how this works unless I go “all in”)… yet few other ideas for how I can give stakeholders that same at-a-glance visibility of risk, from which to both track trends and forecast risk in new work.

Caveat - I absolutely have an idea of the areas I feel are high risk, backed by a large amount of extant data, and that knowledge lets me do my job. But it is human, fluid and more nuanced than any distillation I can imagine right now. I don’t see an obvious way of packaging something this many-layered into something others could just glance over and scoop up, and I’m concerned that such “dead data” could actually be detrimental. In practice, I think testers and other quality-focused people in teams (myself included) provide this value best by being humans who are available, keeping track of change, and working with other humans to help them work through questions of risk, develop their own understanding and design our approach to product changes.

Would love any thoughts, ideas or opinions - or alternatives, to provide that same value.

To be perfectly frank, I have mostly the same questions as you.

I have, to date, never seen a product risk mapping exercise that held more value than a short conversation with other team members. Also, the only teams I have been part of which used these maps were teams that said “The certification says we must…” or “The standard says we must…” without ever questioning why.

Then we spent more time discussing why something was a risk than actually mitigating the perceived risk.

Then we pretty much put the usual suspects in the testing-risk matrix.

Then we didn’t use the perceived risks we had documented to plan our testing efforts.

In other words, I don’t know how to do it correctly, but I can tell you a dozen ways of doing it wrong.

In my current position, I have the flexibility to leave out the mapping / matrix / Excel sheet / whatever and present product risks in another way: I include them in the stories about the quality of the testing and the quality of the product. That is, when we plan the testing activities, we identify risk areas and include them as a step (e.g. “Session {ABC} is checking the {high risk scenario}”). Then I let the reader assume that we actually thought about the risk.

This works in our small team, for now. In some of the larger teams with more communication issues, this method may be problematic.
