@mcgovernaine recently thought of a brilliant question to ask you all
How do you alter your stopping heuristics depending on the project?
For added context, Áine is thinking of, for example, an automation project: how much do you automate? Versus a new project: how do you decide that you’ve tested it enough? And how does that compare to a patch or an update?
So how do you work out your stopping heuristics depending on the project?
Stopping heuristics that I use regularly:
- “We’re going to production with what we have now”, or any other external influence/stakeholder that informs me that my testing activities will not be considered valuable
- When another project/test becomes more important than the one I am doing now
- When the last x tests/sessions I ran only discovered info that I already knew, and I need to take a step back (defocus) until I have some new inspiration/data that might lead to an interesting test (where x varies with workload and energy levels; see the sketch after this list)
Usually, these three work for any testing project.
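A minimal sketch of that last heuristic, assuming you log whether each session produced new information and pick x yourself (the session log and x below are invented):

```python
# A minimal sketch of the "last x sessions found nothing new" heuristic.
# What counts as "new info" and the value of x are judgment calls.

def time_to_defocus(recent_sessions, x=3):
    """Defocus when the last x sessions produced no new information."""
    if len(recent_sessions) < x:
        return False  # not enough data yet to judge
    return all(not s["found_new_info"] for s in recent_sessions[-x:])

sessions = [{"found_new_info": True},
            {"found_new_info": False},
            {"found_new_info": False},
            {"found_new_info": False}]
print(time_to_defocus(sessions))  # True: step back and look for new inspiration
```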
For automation, the heuristics would be different, but with a similar focus: what is the most important test that I can automate now? So I would probably use stopping heuristics like:
- When an external influence makes it so that any additional work that I do will not impact the (future) quality of the product
- When I get other priorities
- When no automation I can think of now would add additional value; then I have to defocus, and maybe do some testing, which can lead to new insights that make some automation work useful again
So, in a sense, I use the same general heuristics on every project. The context can change what they mean, though.
Not a specific heuristic, but I call it the Popcorn effect. It’s the idea that your testing starts out like popcorn: at first a lot of kernels (bugs) pop, then the time between pops gets longer and longer, and if you wait too long the popcorn burns. So you have to stop when the interval gets too long.
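A minimal sketch of that rule, assuming you record how many minutes into the session each bug was found and choose the maximum gap yourself (all numbers invented):

```python
# A minimal sketch of the popcorn stopping rule: stop once the gap
# since the last bug was found exceeds a threshold you chose up front.

def should_stop(find_times_minutes, now_minutes, max_gap_minutes=30):
    """Stop when the interval since the last pop exceeds max_gap_minutes."""
    if not find_times_minutes:
        # No bugs found yet; measure the gap from the start of the session.
        return now_minutes > max_gap_minutes
    return now_minutes - find_times_minutes[-1] > max_gap_minutes

# Bugs found 5, 12, 16, and 45 minutes in; it is now minute 90.
print(should_stop([5, 12, 16, 45], now_minutes=90))  # True: the popping has stopped
```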
Another idea is a concept a colleague of mine introduced to me: the ROI maximization problem. The idea is that the bugs in your product each have a value (money saved by fixing them), and you spend money to find them (the cost of testing). Given a strategy that aims to find the highest-value bugs first, and a roughly stable cost per bug found, you can plot return against money spent: at first you produce a lot of value, finding bugs worth more than they cost to find, then you get less and less return on the investment, and after a while you reach a point where a bug costs more to find than you save by finding and fixing it.
So pairing the two, you stop when finding the next bug costs more than it saves.
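A minimal sketch of that crossover point, assuming you could estimate a value for each bug (highest first, per the strategy above) and a stable cost per bug found (all numbers invented):

```python
# A minimal sketch of the ROI stopping point: with bugs found in
# descending value order at a stable cost per find, stop at the first
# bug whose value falls below the cost of finding it.

def stopping_index(bug_values_desc, cost_per_bug):
    """Return how many bugs are worth finding before cost exceeds value."""
    for i, value in enumerate(bug_values_desc):
        if value < cost_per_bug:
            return i  # finding this bug would cost more than it saves
    return len(bug_values_desc)

values = [5000, 2000, 800, 400, 150, 60]  # hypothetical savings per bug
print(stopping_index(values, cost_per_bug=250))  # 4: stop after four bugs
```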