I’m going to muddy up your metric here, just a bit.
If tester #1 (George) has reported 10 “defects” (I’m not sure “defect” is the word you want here) and 8 of these have been fixed, and…
Tester #2 (Carla) has reported 2 defects, both of which have been fixed, then who is the higher-performing, or better, tester?
Now what if Carla has actually seen 15 defects, but rather than entering them into a tracking system and hoping they would be fixed, she had a conversation with the programmers, who saw that the fixes were simple and resolved those extra 13 defects without their ever entering the system? It would not help the team’s performance or efficiency to log those bugs and their fixes after they had already been resolved. In fact, since the programmers might be measured by the number of bugs they introduce, reporting these now-fixed issues would reflect negatively on them, even though they are all minor.
Now what if George has only reported minor issues, while the two from Carla are difficult-to-reproduce blocking issues? How would you measure them then?
Now what if George and Carla are actually working together and have agreed that George is the one who reports the issues? Would they need to make an arrangement with management about how those issues are handled?
Now what if Carla has only reported 2 issues because her investigation took an absurd amount of time?
Now think about the time you are spending on these what-ifs and metrics, and compare it to the time it would have taken simply to talk to your team about their performance (either individually or in groups).
And finally, think about why you need these measurements at all, and how you would communicate your reasons to the team in a non-threatening way, because measuring team members’ performance against each other erodes trust within the team and encourages siloing.