I partially implemented this as part of our scrum ceremonies, and it worked well for relatively new features or when there were big updates from third-party integrations.
Sprint Planning: timeboxed brainstorming of risks for the upcoming sprint.
Retrospectives: in one team we actually committed the whole retro to risk storming, discussing how to handle the risks, and then implemented proper processes around them.
Great question! At my company, we apply a structured Risk-Based Testing (RBT) approach during test management. Our approach is built around a multi-layered risk assessment model, ensuring that risks and tests evolve alongside the software and that risks are assessed from different perspectives.
Thus, we continuously review risks and update tests for each version of the application to be tested:
We align testing priorities with evolving business risks by continuously reviewing impact and likelihood (recently supported by an automated algorithm that makes the evaluation more robust and reduces manual effort on the tester's side).
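To make the impact/likelihood idea concrete, here is a minimal sketch of such a scoring step. The data classes, scale (1–5), and example areas are my own illustrative assumptions, not the actual algorithm described above:

```python
# Hypothetical sketch: rank test areas by an impact x likelihood risk score.
from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    impact: int      # business impact, 1 (low) .. 5 (critical)
    likelihood: int  # failure likelihood, 1 (rare) .. 5 (frequent)

def risk_score(item: RiskItem) -> int:
    # The classic RBT product; a real tool might weight or normalize this.
    return item.impact * item.likelihood

items = [
    RiskItem("checkout", impact=5, likelihood=3),
    RiskItem("profile page", impact=2, likelihood=2),
    RiskItem("payment gateway update", impact=5, likelihood=4),
]

# Highest-risk areas get test attention first.
for item in sorted(items, key=risk_score, reverse=True):
    print(item.name, risk_score(item))
```

An automated version would recompute these scores per release from fresh inputs rather than hand-maintained numbers.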
We assess introduced changes, looking not only at directly affected parts of the application but also at implicitly affected functionality, inferred via content proximity. Tool support also helps assign a more objective weighting to these findings by incorporating additional data from ticket systems.
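The content-proximity step could be sketched roughly like this. The module graph and the 0.5 default weight for implicitly affected neighbours are invented for illustration; the real weighting would come from the ticket-system data mentioned above:

```python
# Hypothetical sketch: expand directly changed modules to "content-proximate"
# neighbours that may be implicitly affected by a change.
proximity = {
    "cart": ["checkout", "pricing"],
    "checkout": ["payment", "cart"],
    "login": ["profile"],
}

def affected(changed: set[str]) -> dict[str, float]:
    scores = {m: 1.0 for m in changed}  # directly changed parts: full weight
    for module in changed:
        for neighbour in proximity.get(module, []):
            # Implicitly affected modules get a lower default weight;
            # ticket-system data could refine this per neighbour.
            scores.setdefault(neighbour, 0.5)
    return scores

print(affected({"cart"}))  # cart: 1.0; checkout and pricing: 0.5
```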
By analyzing past test results and defect trends across several release versions, we identify areas with recurring issues and dynamically refine our tests to improve risk coverage and avoid redundant testing.
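A trivial version of that trend analysis might look like the following. The defect records and the threshold of three releases are made-up examples, just to show the shape of the idea:

```python
# Hypothetical sketch: flag modules with recurring defects across releases
# so their tests get refined first.
from collections import Counter

# (release, module) pairs, e.g. exported from a defect tracker.
defects = [
    ("1.0", "checkout"), ("1.0", "search"),
    ("1.1", "checkout"),
    ("1.2", "checkout"), ("1.2", "search"),
]

defect_counts = Counter(module for _, module in defects)

# Illustrative rule: three or more defects across releases counts as recurring.
recurring = [m for m, n in defect_counts.items() if n >= 3]
print(recurring)  # ['checkout']
```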
Our goal is to integrate risk assessments into test selection, refinement, and execution planning, ensuring that testing builds trust (and, from a manager's perspective, keeps testing lean, risk-aware, and strategically focused).
Keeping tests aligned with evolving risks is an ongoing process. One thing I always do is regularly revisit test cases after every major release. Even a small feature change can sometimes introduce unexpected issues in existing functionality.
One such scenario was when we introduced a new landing page with animations to enhance user engagement. The focus was on the UI, but during exploratory testing the animations caused delays in rendering interactive elements: the CTA button wasn't clickable, creating a frustrating user experience.
To address this, I updated the test cases to include validation of animation performance and load time across devices, and collaborated with the devs to implement a skip-animation option for users preferring a faster experience. Since then I have made it a practice to revisit test cases with performance and accessibility in mind.
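One way to turn that kind of finding into a repeatable check is a performance-budget assertion on the time until the CTA becomes interactive. The budget and the per-device timings below are invented; in practice the timings would come from a browser-automation or real-user-monitoring run:

```python
# Hypothetical sketch: a performance-budget check for time-to-interactive (ms)
# of the CTA button on different device classes.
BUDGET_MS = 2000  # assumed budget: CTA must be clickable within 2 seconds

# Example measurements; a real pipeline would collect these per test run.
timings = {"desktop": 850, "mid-range phone": 1900, "low-end phone": 2600}

violations = {device: ms for device, ms in timings.items() if ms > BUDGET_MS}
print(violations)  # {'low-end phone': 2600}
```

A check like this keeps the animation regression from silently returning: the build fails as soon as any device class exceeds the budget.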