Ask Nishi a Question About: Reverse-engineer Your Way to Adopting a Risk-based Testing Approach

Our 4th conference talk for TestBash Home is “Reverse-engineer Your Way to Adopting a Risk-based Testing Approach” with the lovely @testwithnishi.

Risk is such a key part of software testing and development, but it can sometimes be difficult to define or prioritise. It will be really interesting to see Nishi’s approach to this :slight_smile:

We’ll be adding all unanswered questions from the talk here so if we didn’t get to your question, don’t worry, we will find you an answer :grin:

If you’ve got a question you’d like to ask in advance, why not ask it now?

Unanswered questions

  • Nathan Owen: When do you determine the likelihood and impact analysis? It seems like it should inform plans for development as well as testing.
  • atiqah azlan: Should we consider ambiguous story points (or ones needing more clarification), and the dependencies that one feature has on another, as risks?
  • Ludmilla Chellemben: How long does it take to do risk analysis? Do you do it in all sprints? Do you have time for that? (Partly answered. Do you get resistance from the team?)
  • urvashi: How deep does domain knowledge need to be when doing risk-based testing?
  • Sandy H: How would you carry out risk analysis if you don’t use Scrum? How would you plan risk-based testing for a methodology such as Kanban, where development work is continuous rather than in sprints?
  • Rob: How do you deal with a combination of Acceptance Criteria & Risks? Both need testing.
  • Santiago: How do you factor in customer-specific risk requirements? In one of your examples, localization had a low risk, but for a customer in that locale the risk would be rated much higher.
  • Beth: Your analysis metrics for existing stories included defects found in and out of sprint. You assigned an ‘Opportunity’ level of testing to an area that actually had the highest number of defects found. Doesn’t this metric in itself imply that the additional test effort was warranted, rather than an ‘Opportunity’ approach that would have let these defects slip into the release?
  • Imma: What kind of areas did you prioritize higher after implementing Risk-Based-Testing than before?
  • PD: Is this risk-based approach for testers or developers? Based on the rating, do developers change their development approach?

Do you use RBT to decide which areas of the application to automate?
If the Risk Priority is low, is this a good enough reason to not automate that area at all?

A partial answer from my side:

Risk could be a factor in deciding which tests to automate. See, for example, this talk.
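
To make that concrete, here is a minimal, hypothetical sketch (not from Nishi’s talk) of how a risk priority could be combined with other factors when ranking automation candidates. The fields, scoring formula, and example data are all assumptions for illustration.

```python
# Hypothetical sketch: using risk priority as one factor (among others)
# when ranking test cases as automation candidates.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk_priority: int      # e.g. likelihood * impact, each rated 1-3
    runs_per_release: int   # how often the test is executed manually
    is_stable: bool         # flaky or fast-changing areas automate poorly

def automation_score(tc: TestCase) -> float:
    """Higher score = stronger automation candidate."""
    stability_factor = 1.0 if tc.is_stable else 0.5
    return tc.risk_priority * tc.runs_per_release * stability_factor

candidates = [
    TestCase("login happy path", risk_priority=9, runs_per_release=12, is_stable=True),
    TestCase("locale formatting", risk_priority=2, runs_per_release=3, is_stable=True),
    TestCase("new beta wizard", risk_priority=6, runs_per_release=4, is_stable=False),
]

for tc in sorted(candidates, key=automation_score, reverse=True):
    print(f"{tc.name}: score={automation_score(tc):.1f}")
```

A low risk priority on its own would not automatically rule out automation; it just pushes that area down the list relative to high-risk, frequently-exercised areas.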

The likelihood and impact analysis must be done during or just after sprint planning, as soon as you know and understand the user stories, but before you begin working on them.
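
For illustration, here is a minimal sketch of what a likelihood × impact rating at sprint planning could look like. The 1–3 scales, the thresholds, and the extent-of-testing labels are assumptions rather than the exact scheme from the talk.

```python
# Hypothetical sketch of a likelihood * impact rating done at sprint planning.
# The 1-3 scales and the extent-of-testing bands are illustrative only.

def risk_priority(likelihood: int, impact: int) -> int:
    """Both inputs rated 1 (low) to 3 (high); the RPN ranges from 1 to 9."""
    return likelihood * impact

def extent_of_testing(rpn: int) -> str:
    if rpn >= 6:
        return "thorough"     # deep exploratory testing plus full regression
    if rpn >= 3:
        return "focused"      # cover the main scenarios and key edge cases
    return "opportunity"      # test only if time allows

stories = {
    "US-101 payment refactor": (3, 3),
    "US-102 profile page copy change": (1, 2),
    "US-103 export to CSV": (2, 2),
}

for story, (likelihood, impact) in stories.items():
    rpn = risk_priority(likelihood, impact)
    print(f"{story}: RPN={rpn} -> {extent_of_testing(rpn)}")
```

Because this happens before development starts, the same ratings can inform the development approach as well as the test planning.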

Ambiguity certainly does not help during the sprint. As I said in the Q&A session, any story that is unclear or ambiguous at the point of risk analysis raises a red flag: it is not ready for development. Dependencies can be treated as risks if they involve a major integration or a major redesign, and they can certainly feed into our impact judgement.

The story I told came about because of initial resistance from the team. Once we got into it, though, people bought into the idea, especially because we used a very simple approach that took very little time.

Good question. I have mostly worked in Scrum, so that is the perspective I know. For Kanban, we can do the risk analysis for each task or feature as we add it to the To Do column of the board and attach the risk priority analysis to it, so that people have insight into it before the card is picked up.
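
As a rough sketch of that idea (hypothetical, not from the talk), the risk analysis could simply live on the card itself from the moment it is created. The class names and 1–3 scales below are assumptions for illustration.

```python
# Hypothetical sketch: attaching a risk priority analysis to a Kanban card
# as it enters the To Do column, so the rating is visible before anyone
# picks the card up.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskAnalysis:
    likelihood: int  # 1 (low) to 3 (high)
    impact: int      # 1 (low) to 3 (high)

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact

@dataclass
class KanbanCard:
    title: str
    column: str = "To Do"
    risk: Optional[RiskAnalysis] = None  # attached when the card is added

board: list[KanbanCard] = []

def add_to_board(title: str, likelihood: int, impact: int) -> KanbanCard:
    """Do the risk analysis as the card is created, not when it is picked up."""
    card = KanbanCard(title, risk=RiskAnalysis(likelihood, impact))
    board.append(card)
    return card

add_to_board("Add retry logic to payment webhook", likelihood=3, impact=3)
add_to_board("Tweak footer copy", likelihood=1, impact=1)

for card in sorted(board, key=lambda c: c.risk.priority, reverse=True):
    print(f"[{card.column}] {card.title} (risk priority {card.risk.priority})")
```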

Risk areas are not actually independent of acceptance criteria. They could be part of the acceptance criteria, or the risk areas could lead you to enhance your acceptance criteria.

In my analysis, the risk analysis of the user stories was actually done afterwards: the likelihood, impact, and extent of testing were assigned as part of that exercise, while the defect and test counts came from history, i.e. the Jira numbers for that sprint. The task/defect metrics indicated that we ran more tests than needed on one story (the defects found there, while good to catch, could have been found later too), while other stories got less testing than their RPN warranted.

It is for the entire team; it involves everyone on the project and brings them onto the same page. That is the beauty of it :slight_smile: