How do you document and communicate risks?

Is it a risky business to document risks? :sweat_smile:

I’d like to hear your anecdotes and stories. How do you document and communicate risks?

  • How do you tailor risks to the appropriate audience?
  • What forms of documentation have you used? E.g. risk register, risk matrix, risk policy docs, etc.
  • How does documentation map to your testing efforts, e.g. test cases, charters, etc.?
  • Who tends to be the recipient of your documentation?
5 Likes

I’ve found success running focused risk assessment meetings covering key areas: Operations, Data, Functionality, Security, and Confidentiality. Initially, I tried covering more ground, but learned the hard way that longer meetings led to diminishing returns and glazed-over eyes! :sweat_smile:

Our approach:

  • Keep meetings focused on these core risk areas
  • Document findings in a shared Google Doc
  • Open it up for team comments and contributions

The good: This structure keeps things manageable and gives everyone a voice. The challenge: Getting consistent engagement in the comments (still working on this one!)

Curious: any tips for better engagement with the documentation? :thinking:

3 Likes

Hey @simon_tomes

Fantastic question :clap:

:black_circle: I start by understanding the gravity and influence of the stakeholders in the ongoing solution. In between, I also talk with them to understand their potential technical or non-technical interests in the solution. After these steps, and after listing the key people, I keep them updated from time to time, especially with a gentle reminder if they miss an update.
:black_circle: I maintain a Risk Traceability Confluence page, which contains everything you mentioned, along with mathematical calculations of total risk, ad-hoc spiked risks, long-term risks, and coincidences which may affect the business.
:black_circle: I use an embedded Jira query in the Confluence page to pop up the numbers that explain the effort (see the sketch after this list).
:black_circle: The recipients are the whole technical team for the solution, plus the stakeholders from the first step. This way even the most junior member of the team feels part of the important information that is shared across.
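
For anyone curious, here is a minimal sketch of the kind of query that could sit behind such a page, using the Python jira client. The server URL, project key, and "risk" label are placeholders I made up, not the actual setup described above:

```python
# Sketch: counting risk issues with a JQL query, similar in spirit to the
# Jira query embedded in the Confluence page. Server, project key and the
# "risk" label are illustrative placeholders.
from jira import JIRA  # pip install jira

jira = JIRA(
    server="https://yourcompany.atlassian.net",
    basic_auth=("you@example.com", "API_TOKEN"),
)

# Hypothetical JQL: open risk-labelled issues in one project.
jql = 'project = SOL AND labels = risk AND statusCategory != Done'
open_risks = jira.search_issues(jql, maxResults=200)

print(f"Open risks: {len(open_risks)}")
for issue in open_risks:
    print(issue.key, issue.fields.summary)
```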

Hope this adds value to the post.

:v:

2 Likes

Surely not! :slight_smile: Imagine you don’t document them, something goes wrong in production, and you’re the person saying, “Oh yeah, I knew that could happen, but we didn’t do anything about it.”

Instead of saying, “Yeah, we documented it and we, as a team, decided to accept the risk of this happening.”

We rate risks based on probability and impact:

  • How likely is it to occur?
  • How big is the impact if it occurs?

We write down what the impact is, who is impacted, who can trigger it, and so on.

Then it goes to the Product Owner, and since he is the lord & savior of the product, he decides whether it should be fixed or not. UNLESS it’s orange or red on the scale; then it needs to be solved ASAP.
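
To make that scale concrete, here is a tiny sketch. The 1-5 ratings and the colour thresholds are assumptions for illustration, not the actual policy described above:

```python
# Sketch of a probability x impact rating. The 1-5 scales and the
# colour band thresholds are illustrative assumptions.
def risk_colour(probability: int, impact: int) -> str:
    """Return a traffic-light band for a risk, given 1-5 ratings."""
    score = probability * impact  # 1..25
    if score >= 20:
        return "red"      # solve ASAP
    if score >= 12:
        return "orange"   # solve ASAP
    if score >= 6:
        return "yellow"   # Product Owner decides
    return "green"        # Product Owner decides

# Example: fairly likely (4) and moderately damaging (3) -> orange.
print(risk_colour(4, 3))  # orange
```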

3 Likes

I find that very few engineers explicitly capture risks, either in a Jira ticket or in a testing/release report. Heck, most releases don’t leave much time to write up lessons learned or to quickly list and dispel any fears; often the next project is already underway on the day you ship. Many risks can be mitigated, but all mitigations have a cost, and that’s why engineers who are just bricklayers are not well placed to balance those costs or comment on them, let alone act on all of them (some, but not all).

It really does require an engineer to step outside of themselves to work out what the actual risk is. We don’t know much about the likelihood of a failure without metrics to back us up, or loads of experience with similar flaws. The engineer also has very little grasp of the economic damage a fault causes in reality; broadly, yes, we do, but not with any kind of repeatability on a scale of 1-5.

My answer is thus: carefully, in the testing plan itself.

3 Likes

I talk about risk a lot, as I like to use risk-based prioritisation with a matrix like this:
[image: risk-based prioritisation matrix]

A matrix like Kristof shared is a great way to capture and think about the actual underlying risk, though. Depending on the product, I also like to include breadth of impact as a third dimension. It helps to consider how much of your user base is impacted by the issue should it occur. If the impact reaches many of your users, then even a more minor impact could be highly damaging to the product, whereas something that impacts only a very small percentage of users, even if the impact for them is severe, may be less critical overall.
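
Here is one hedged way that third dimension could be folded in. Treating breadth as a fraction of the user base and simply multiplying is my own assumption; other teams use a separate axis or a weighted sum instead:

```python
# Sketch of adding breadth of impact as a third factor. The multiplicative
# weighting and the 0-1 breadth fraction are illustrative assumptions.
def weighted_risk(likelihood: int, impact: int, breadth: float) -> float:
    """likelihood and impact on 1-5 scales, breadth as fraction of users hit."""
    return likelihood * impact * breadth

# A minor issue (impact 2) hitting 90% of users can outrank a severe
# issue (impact 5) hitting 2% of users:
print(weighted_risk(3, 2, 0.9))   # 5.4
print(weighted_risk(3, 5, 0.02))  # 0.3
```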

4 Likes

Ah, yeah. And your blog post on this topic is excellent, @stuthomas.

2 Likes

Awesome question and good replies already!

This is something I’ve been pondering for some years(?!) and I think I’m getting somewhere.

Risks are the lifeblood of testers. Not test cases, not bugs. Risks. Because at the end of the day, that’s what people care about; the result of a risk manifesting can be detrimental: clients leaving, missed income, huge costs, lawsuits, … That’s what keeps leadership awake at night.

In essence, testers’ efforts should be linked to realistic risks as much as possible. Test reporting should show what is being done to mitigate risks, and risks should ideally be quantified in a meaningful way.

With RiskStormingOnline, we’re building a risk management/reporting module that gives us a way of documenting:

  • What quality aspect is important
  • Which risks did we identify that could impact it
  • What tasks did the team identify to mitigate these risks
  • Did we accept/delegate/mitigate/… the risk?
  • The current status of the tasks we set out to do

This should give quality professionals (or anyone who manages risks on their projects) the ability to document the team’s progress and to report at any given time what the status of risk is for the project; e.g. the moment you want to release to production: “What are we risking right now?”
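
If it helps to picture the data behind such a module, here is one hypothetical way a risk record could be structured. All field names and the decision values are my own assumptions, not the actual RiskStormingOnline model:

```python
# Hypothetical sketch of a risk record like the module describes.
# Field names and Decision values are assumptions for illustration.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    DELEGATE = "delegate"
    MITIGATE = "mitigate"

@dataclass
class Risk:
    quality_aspect: str          # what quality aspect is important
    description: str             # the risk we identified
    mitigation_tasks: list[str]  # tasks the team identified
    decision: Decision           # accept / delegate / mitigate / ...
    tasks_done: int = 0          # current status of those tasks

    def status(self) -> str:
        return f"{self.tasks_done}/{len(self.mitigation_tasks)} mitigation tasks done"

security = Risk(
    quality_aspect="Security",
    description="Session tokens never expire",
    mitigation_tasks=["Add token TTL", "Pen-test login flow"],
    decision=Decision.MITIGATE,
    tasks_done=1,
)
print(security.status())  # 1/2 mitigation tasks done
```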

Here’s a mock-up. We’re still building/conceptualising, so if you have ideas, let me know!


This mockup would be scrollable and give you an overview of all identified risks in a RiskStorming session.

4 Likes

Exactly.

However, I don’t find these kinds of matrices very helpful. I tried to use one for a while and learned that the likelihood rating was always just my gut instinct. Devs often underestimate likelihood, which is understandable. I ended up getting numbers that didn’t mean much to me or anybody else.

Instead, I focus on each item in the sprint, considering what feature(s) it could impact. I list those out in a document that is shared with the whole team. Then I link our routine regression tests to the sprint items they relate to.

Each of our routine tests has several depth levels: Optional, Brief, Basic, In-Depth, Exhaustive. If a routine test is linked to several release items, or if it is linked to a large release item that has a lot of code changes, I’ll set the depth level to Exhaustive.
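
As a rough sketch of that depth-selection rule: the thresholds and the “large change” cut-off below are invented for illustration; the real decision is a judgement call, not a formula:

```python
# Sketch of choosing a depth level for a routine regression test based on
# the sprint items linked to it. Thresholds are illustrative assumptions.
DEPTHS = ["Optional", "Brief", "Basic", "In-Depth", "Exhaustive"]  # least to most thorough

def choose_depth(linked_items: int, max_lines_changed: int) -> str:
    """Pick a depth level from how many release items link to the test
    and how large the biggest linked change is."""
    if linked_items >= 3 or max_lines_changed > 1000:
        return "Exhaustive"   # several items, or one large risky item
    if linked_items == 2:
        return "In-Depth"
    if linked_items == 1:
        return "Basic"
    return "Optional"         # nothing in this sprint touches the area

print(choose_depth(linked_items=3, max_lines_changed=120))  # Exhaustive
print(choose_depth(linked_items=0, max_lines_changed=0))    # Optional
```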

This isn’t the only way I assess risk or account for it, but it does help me document which areas got the most attention and why.

5 Likes