First of all, welcome. I shall try to be as zen as I can.
You have come with a technical question. You shall leave with a moral one.
I know those algorithms to be wrong. Not because the calculations are incorrect, but because they describe the unknowable. Worse, they describe a value that will be weaponized against our fellow humans. To discover peace, we must understand not what we measure, but what measuring does to our world. When we shine light upon a particle to measure it we change what we came to measure, and so it is with metrics.
Here is a smattering of issues:
- The number of defects your client finds is not necessarily the number they report
- A defect report is not a defect
- A defect is subjective. It is a problem in the eyes of a human. Which humans found the defect? Is it a defect?
- Letting the test team know the algorithm will change their behaviour. They could report more defects that do not exist, or that are only minor problems, to inflate their apparent efficiency. This will distract them from finding the problems that really matter to the people who matter.
- One test case is not like another. They are non-fungible. Counting them is silly - it is like counting clouds; it is not the quantity of clouds that matters but our memory of their effects, good and bad, before they left us. Test cases will have different coverage. They will have different sizes. They will have different run times. They will be run differently depending on who is running them. They may be dependent on something else in the project. They may be written at different levels of quality. They are but leaves of paper on which we inexpertly and naively engrave the predicted actions and observations of the full rainbow of human emotion and action. They are abstractions of artifacts, nothing more. See through the abstraction and a test case evaporates, and we are left catching clouds with a butterfly net.
- When you submit these metrics you will be creating a game. A game that you will encourage your test team to play, and win. If winning that game means doing poor work or upsetting them without cause, then that is what will happen. You must be sure that the apparent innocence of such an action does not blindfold you to cruelty.
Let us turn to mathematics to find insight into the language of our creation. One defect is not like another. Let us say that the test team report 100 minor problems - perhaps typos, UI alignment issues, things that the test team thinks are problems but the client doesn't, and so on. The client reports one incident where installing the software wipes their hard drive. Your defect detection efficiency is 100 / (100 + 1) * 100 ≈ 99%. Do you feel that this number was helpful? Did it give insight into the quality of your team? What will you do with your 99%?
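The arithmetic above can be made concrete in a few lines. This is a minimal sketch, and the function name is my own invention; notice that the formula, by construction, weighs every defect equally, which is precisely its flaw:

```python
def defect_detection_efficiency(team_found, client_found):
    """Defect Detection Efficiency as a percentage: defects found by
    the test team, divided by all defects found by anyone.
    Every defect counts the same, whether a typo or a wiped hard drive."""
    return team_found / (team_found + client_found) * 100

# The essay's hypothetical: 100 minor internal reports versus
# one catastrophic client-found defect.
dde = defect_detection_efficiency(team_found=100, client_found=1)
print(f"{dde:.0f}%")  # prints "99%"
```

A single catastrophic failure and a hundred trivia produce a number that looks like triumph.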
You are setting a goal of finding one defect per test case. Here is how to achieve it: have the programmers write in exactly one defect, then execute the one test case you know will find it. You now have a 100% defect detection rate. Conversely, if your software actually has no defects, no client ever sees a problem, and your client is very happy, then the metric says you should punish your test team, because they failed to find any problems. You could equally argue that your developers are worthy of your ire for failing to write any defects.
You must find a way to realise the harmony of your team. Humans are not built for measurement and judgement and serfdom under a game of numbers. They must work together to build what they can. Let us not measure so, let us explore and play! Let us dance in the spirit of our work as testers and ask questions of our software, our processes and ourselves.
Metrics bring worry: perverse incentives, inexpert measurement, inaccurate judgement of our fellow companions on this earth, and the crippling constraints of working under numbers that we have plucked from the air and to which we build idols, looking down upon us like gods on the mountain as we fear their wrath and permit them to play with us for their sport.
Be as a pebble in a stream, and let the worries of metrics wash away.
Take the human approach. Let your testers explore. Take the locks off the cages of test cases and allow them to roam, free range, across the software. Give them enough guidance to achieve the sorts of coverage that matter to you, and then let them express their humanity and intelligence to work with your programmers to not only find more important problems faster, but problems not covered by the attempted repetition of identical tasks, and to find problems before they are even coded or conceived of by design.
Is it the flag that moves? The wind that moves? Or the mind?
Wind, flag, mind moves,
The same understanding.
When the mouth opens
All are wrong.