TestBash Spring 2023 - Prioritizing Your Tests by Considering Impact and Value! - Larissa Rosochansky & Rafael Cintra

During TestBash Spring, @lrosocha & @rncintra gave a talk on Prioritizing Your Tests by Considering Impact and Value!

We’ll use this Club thread to keep the conversation going, share resources and answer any questions we don’t get to during the live session.


Questions Answered Live

@fullsnacktester - Could we take this same process, and use it to assess what product features we should implement, so we have a better chance of building the right thing?

@tressaking - I love the structure that you use and the way that you draw upon the 3amigos meeting. How do you first introduce this process to a team?

Questions Not Answered Live

Anonymous - What additional strategies do you recommend for those new to software quality, and looking for more effective ways to partner with our Product teams?

Anonymous - Given your shared years of experience, is there a key past (names excluded) mistake that you can share that inspired this talk?

Anonymous - Very insightful. Can you share any additional tools that you employ to make this process iterable via automation with cross-functional leaders?

Resources Mentioned

Impact Mapping: impactmapping.org/


Definitely. We believe a good understanding of the product, and of how real people use it in the real world, is key to a good software quality process, so lend a hand to your Product team or teams. We like the idea of going into the field to understand your product better and talking to actual users. We know this may be harder now that so many people work remotely, but as someone who has run discovery sessions remotely, we strongly believe it can be done. TL;DR: go meet and partner with your users to bring back valuable feedback and connect with your Product team. We are people, go talk to people. Don’t be afraid :slight_smile: They don’t bite.

Trying to test everything, or to test what we think matters, hasn’t always led us to efficiently test what matters to our users. Our end goal with this talk was to apply new ways to prioritize, finding common ground between what matters to us technically and what matters to the people using our software. When we can properly answer the questions we demonstrated in the talk, we make better use of our time by testing and automating the right things at the right time, with focus and purpose on what needs to be accomplished and why.

This process should be repeated frequently as you move through your regular user story refinements and sprint planning. To make it more iterable, we recommend recording and tracking the classification and prioritization of each test in a centralized tool, so you can quickly review and reassess as you go. You won’t be able to automate your way out of getting together to diverge and converge on the prioritization, but you can track your data in a centralized manner so that people can contribute their input asynchronously. The tool matters less than giving people a voice to contribute. As long as it lets people answer our proposed questionnaire and plot the outputs on a grading scale so you can tag your buckets properly, you should be fine.
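For readers who want to see what that tracking could look like in practice, here is a minimal sketch. The talk’s actual questionnaire, scales, and bucket names aren’t reproduced in this thread, so the fields (1–5 impact and value scores) and the thresholds below are illustrative assumptions, not the speakers’ method:

```python
# Hedged sketch: score each test on assumed 1-5 impact/value scales,
# then tag it into a priority bucket. The scales, thresholds, and
# bucket names are illustrative, not the process from the talk.

from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    impact: int  # 1-5 (assumed): how badly a failure here hurts users
    value: int   # 1-5 (assumed): how much this area matters to users


def bucket(test: TestCase) -> str:
    """Tag a test into a priority bucket from its combined score."""
    score = test.impact * test.value
    if score >= 16:
        return "automate-first"
    if score >= 9:
        return "test-each-sprint"
    return "test-on-demand"


tests = [
    TestCase("checkout happy path", impact=5, value=5),
    TestCase("profile avatar upload", impact=3, value=3),
    TestCase("legacy report export", impact=3, value=1),
]

for t in tests:
    print(f"{t.name}: {bucket(t)}")
```

Whatever real tool you use (a spreadsheet, a test management platform, a shared board), the point is the same: a score per test, recorded centrally, that anyone on the team can review and challenge asynchronously.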