One of the things I’m continually asked for as a test manager is a map of the risk in the product. The idea is that this will both give greater transparency of “problematic areas” in need of rework, and allow teams to make better risk assessments and decisions going forward. Now, I have a few thoughts about this already (outlined below), but I do want to attempt it and see if there’s value to be had here. Thinking this through, the most basic level of “product risk”, as per the expectations of those asking for it, is…
Incidence of error VS severity of error VS visibility of error
Number of bugs in an area VS how critical the area is to users VS existing tests/checks we run in the area
A couple of crude worked examples of that thinking…
Feature A is “low risk” because it historically has few bugs, moderate mission-criticality (an error here would be unlikely to require immediate patching) and reasonable coverage.
Feature B is “high risk” because it historically has moderate levels of bugs, high mission criticality (a problem here requires immediate patching) and low coverage.
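For what it’s worth, that multiplicative framing can be sketched crudely in code. This is purely illustrative: the `risk_score` function, the 1–5 scales, and the inputs for the two features are all made up to mirror the examples above, not anything calibrated against real data.

```python
def risk_score(bug_incidence: int, criticality: int, coverage: int) -> int:
    """Crude multiplicative risk score on invented 1 (low) to 5 (high) scales.

    Higher incidence and criticality raise risk; higher coverage lowers it
    (hence the inverted coverage term).
    """
    return bug_incidence * criticality * (6 - coverage)

# Feature A: few bugs, moderate criticality, reasonable coverage -> low score
feature_a = risk_score(bug_incidence=1, criticality=3, coverage=4)

# Feature B: moderate bugs, high criticality, low coverage -> high score
feature_b = risk_score(bug_incidence=3, criticality=5, coverage=1)

print(feature_a, feature_b)  # prints: 6 75
```

Even this toy version shows the distillation problem: the scales, the weighting, and the decision to multiply rather than add are all judgement calls hidden inside a single number.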
I could proceed on that basis: map out the historic incidence of bugs, their significance, and our response to them. It would take a lot of time, and I’m concerned it would be a low-value exercise for a few reasons. Just because we’ve had a lot of bugs in an area, does that really make it high risk? What if that’s the only area where we’ve been adding features, for instance? This could perhaps be mitigated by dividing by points or stories delivered in an area, but eh, the water gets muddy pretty quickly thinking along those lines. Also, maintaining a map like this is an ongoing expense - we have to keep pace with new bugs over time.
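To make the “dividing by points or stories delivered” mitigation concrete, here is a tiny hypothetical: the area names and numbers below are entirely invented, just to show how normalising can flip the picture a raw bug count paints.

```python
# Invented data: area -> (bugs_found, stories_delivered)
areas = {
    "checkout": (24, 40),
    "reporting": (6, 4),
}

# Defect density: bugs per story delivered in each area.
for area, (bugs, stories) in areas.items():
    print(area, round(bugs / stories, 2))

# By raw count, "checkout" looks worse (24 bugs vs 6), but per story
# delivered "reporting" has the higher density (1.5 vs 0.6) - we simply
# did far more work in checkout.
```

And, as noted, even this muddies quickly: story points aren’t comparable across teams, and delivery volume is only one confound among many.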
And this is ignoring other issues: risk is not static, linear, or like-for-like comparable; it doesn’t sit entirely in the product; it isn’t even the same for teams with different experience levels. There’s also a nagging question of whether a map even makes sense for the kind of data we’re describing here.
The problem I have, then, is low confidence that this is a valuable exercise, and limited options for a “small scale” experiment to test whether it provides value (I can’t see how this works unless I go “all in”)… but few alternative suggestions for how I can give stakeholders that same at-a-glance visibility of risk, from which to both track trends and forecast risk in new work.
Caveat - I absolutely have a sense of which areas I feel are high risk, backed by a large amount of extant data, and that is what allows me to do my job. But this knowledge is human, fluid and more nuanced than any distillation I can imagine right now. I don’t see an obvious way of packaging something this many-layered into something others could just glance over and scoop up, and I’m concerned that this “dead data” could actually be detrimental. In truth, I/testers/quality-focused individuals in teams provide this value best by being humans who are available: keeping track of change, and working with other humans to help them work through issues of risk, develop their own understanding, and design our approach to product changes.
Would love any thoughts, ideas or opinions - or alternatives, to provide that same value.