Performance testing ... defects / person

Hi everyone, I wanted to run a simple question by you, dear MoT members.
I'm currently looking for a way to build an efficiency metric that captures the number of defects resolved by team members.

The issue with this metric is that it can improve if: 1. the # of defects decreases, 2. my employees get better at resolving defects and therefore I need fewer people, or 3. I hire more people. The third case is not good, though, as it helps measure neither performance (1) nor efficiency (2).

Does anyone have any thoughts on how to normalize this? Thanks a lot, Gian

I’m going to muddy up your metric here, just a bit.

If tester #1 (George) has reported 10 “defects” (I’m totally not sure you want to use the word defect here) and 8 of these are fixed and…
Tester #2 (Carla) has reported 2 defects, and both have been fixed, then who is the higher performing or better tester?

Now what if Carla has actually seen 15 defects, but rather than putting them into a system and hoping that they were fixed, she had a conversation with the programmers, who saw that the fix was simple and fixed those extra 13 defects without putting them into the system? It would not help the team's performance or efficiency to put in the bugs and their fixes after they have been resolved. In fact, since the programmers might be measured by the number of bugs introduced, reporting these extra-now-fixed issues would reflect negatively on them, even if they are all minor.

Now what if George has only reported minor issues, while the two from Carla are difficult-to-reproduce blocking issues? How would you measure them then?

Now what if George and Carla are actually working together, and agreed that George is the one who should report the issues? Would they need to make an arrangement with management about how they are dealt with?

Now what if Carla has only reported 2 issues because her investigation took an absurd amount of time?

Now think about the time that you are taking with what-ifs and metrics, and compare it to the amount of time it would have taken to talk to your team about their performance (either individually or in groups).

And finally, think about why you would need to make these measurements, and how you would communicate your reasons to the team in a non-threatening way. Because measuring their performance against each other does erode trust within the team, and encourages silo-ing.


Thanks a lot, Brian.

I’ll try to make my example a bit clearer and remove any personal intention from the testers.
Assume that:

  1. You run a shared service organization (Company A) that performs accounting activities for a third party (Company B).
  2. One of those activities requires your team to correctly record intercompany transactions, i.e. transactions that happen between two divisions of Company A (e.g. Company A1 in the US produces black lamps that are sold to Company A2 in France).
  3. For various reasons, there can be disconnects between those divisions (e.g. the quantity on the invoice billed by A1 differs from the quantity in the purchase order issued by A2).
  4. As part of the services offered, Company A needs to resolve those intercompany disconnects.
  5. Company B wants to know how efficient your team is and has asked you to produce a couple of metrics to show that.

If you use (# of defects) / (# of employees), that metric is not helpful, as it could improve for the following reasons (the quick sketch after the list illustrates both):

  • the # of defects decreases … while this is a good reason for the metric to improve, the improvement is not driven by your team's efficiency at resolving issues
  • the # of employees increases … that also doesn't mean your company is more efficient, but quite the opposite
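
To make the confounding concrete, here is a minimal sketch with purely hypothetical numbers (the function name and figures are made up for illustration). It shows how the raw ratio "improves" for both of the reasons above even though nothing about the team's resolution work has changed:

    # Raw metric: open intercompany disconnects per employee (lower looks "better").
    def disconnects_per_employee(open_disconnects: int, employees: int) -> float:
        return open_disconnects / employees

    baseline = disconnects_per_employee(open_disconnects=120, employees=10)       # 12.0

    # Fewer disconnects arrive: the ratio drops, but that says nothing about
    # how well the team resolves the ones it does get.
    fewer_incoming = disconnects_per_employee(open_disconnects=80, employees=10)  # 8.0

    # Headcount grows while the backlog stays the same: the ratio drops again,
    # even though resolution behaviour is identical.
    more_people = disconnects_per_employee(open_disconnects=120, employees=15)    # 8.0

    print(baseline, fewer_incoming, more_people)

Both "improvements" end up at the same value (8.0), which is exactly why the raw ratio can't separate "we receive fewer errors" from "we resolve errors well" from "we simply hired more people".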

My question is … what formulas / metrics could Company A use to correctly capture how efficient the team is at (i) reducing the number of errors and (ii) resolving errors? I suppose there's a good way to normalize the factors to show this correctly, but I struggle to think of it.

Thanks!