Experimentation can yield helpful results. Yet without trying something new we’d never know.
What testing experiments would you like to run with your team? What are you holding back on, and how come? Could be a new process, a different approach or something beyond testing that you feel would help.
The biggest one I’m looking forward to is having observability set up for our tests.
Getting a bird’s-eye view of the types of tests we have, their coverage, and their status has been challenging, given that we have 25 teams and we are a team of ~15 quality coaches. This will help us bring more transparency to what’s happening in the tests at every layer.
I’ve started tracking root causes on defects. For every defect that is closed, I ask the developer responsible to do a root cause analysis (which they should be doing anyway, to ensure they understand the problem) and to document it as part of the fix. This includes where the problem initially cropped up (ticket/commit/project). As I collect this information, I’m starting to see patterns. It’s still early, but there are definite ways of organizing and presenting what’s going on with our product across the dev/test cycle.
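As a minimal sketch of how that kind of root-cause log might be tallied, here’s one way to surface recurring categories. The record fields and category names below are invented for illustration; they aren’t the actual schema being used.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record shape -- field names here are assumptions for illustration.
@dataclass
class DefectRootCause:
    defect_id: str
    origin: str    # ticket/commit/project where the problem first cropped up
    category: str  # root-cause category agreed on during the analysis

def summarize(records):
    """Tally root-cause categories so recurring patterns stand out."""
    return Counter(r.category for r in records)

# Example data (made up) showing how a pattern emerges from the log.
records = [
    DefectRootCause("D-101", "PROJ-12", "missing validation"),
    DefectRootCause("D-102", "PROJ-12", "unclear requirement"),
    DefectRootCause("D-103", "PROJ-34", "missing validation"),
]
print(summarize(records).most_common())
# -> [('missing validation', 2), ('unclear requirement', 1)]
```

Grouping by `origin` instead of `category` would give the complementary view: which tickets, commits, or projects keep producing defects.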
My plan is to start using this data to coach testers on their blind spots and the skills that need development. By highlighting the patterns, I think we can find ways to move individuals and the team towards a better understanding of the system, its faults, and the available heuristics. At the same time, we’re better able to identify problematic code, functionality, and projects.
My biggest fear is that the data I’m collecting will be “borrowed” by others and used as a cudgel. I’d hate for it to become the driving factor behind “fix this or else” styles of management. I don’t think that’s a high risk; my company is pretty good about staying positive and focusing on development.
That all sounds very cool, @davidshute. I look forward to an update on how things progress.
Would make for an interesting article too. Something like “Common patterns of root cause analysis in software testing and development”. Context is certainly key to defining those patterns, yet I wonder whether you could extrapolate them into context-agnostic observations that would help other testers and developers across the globe.