I wonder if anyone can help me. We have had a fair number of devs firing poor code over the fence to QA. This mostly relates to the fact that their end-of-year performance reviews are based on shipped projects/features/fixes, so it's very obviously quantity over quality from the developers.
Now some are better than others and prefer to get good code out the door, but others do not. As a result I'm being tasked with creating a review that the tester fills out once a project has passed QA. The idea is to create a score for each project, and from that we can pull an average for each developer to get an idea of what they are doing wrong.
This needs to be incredibly simple and quick to fill out. Currently I have the following questions:
Did you have to contact the dev for more info before you could start? E.g. no documentation, no branch name, feature not loading…
Were there obvious bugs that should have been caught by the dev before handing over to QA?
Were things sent back for retest that turned out to be not actually fixed, or only partially fixed?
How many full QA rounds did it take to pass? 1-3 / 4-6 / 7+
Complexity: 0 - 5 (5 being most complex)
What are your thoughts on these questions? I need to find a way to score the developers, and also to use complexity to adjust the scores slightly depending on how complex the project was.
I tried giving each question the same points, then assigning a percentage to the complexity (-10%, -20%, etc.) and taking that off the score, but I have not been happy with the results.
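For what it's worth, here is a minimal sketch of one way to wire that scoring up, just to make the discussion concrete. All the numbers are illustrative assumptions: each question is worth 25 points, the QA-rounds buckets cost 0 / half / full penalty, and complexity is read as *forgiveness* (it scales the deductions down by 10% per point, rather than cutting the score itself, since a hard project should not score worse than an easy one with the same mistakes).

```python
def project_score(contacted_dev, obvious_bugs, partial_fixes,
                  qa_rounds, complexity):
    """Score one QA handover on a 0-100 scale (higher = cleaner).

    contacted_dev / obvious_bugs / partial_fixes: yes/no answers.
    qa_rounds: number of full QA rounds it took to pass.
    complexity: 0-5, per the tester's judgement.
    Weights below are assumptions, not a recommendation.
    """
    deduction = 0.0
    if contacted_dev:
        deduction += 25
    if obvious_bugs:
        deduction += 25
    if partial_fixes:
        deduction += 25
    # Bucket the round count: 1-3 costs nothing, 4-6 half penalty, 7+ full.
    if qa_rounds >= 7:
        deduction += 25
    elif qa_rounds >= 4:
        deduction += 12.5
    # Complexity forgiveness: -10% of the deductions per complexity point,
    # so a complexity-5 project keeps only half of its deductions.
    deduction *= 1 - 0.10 * complexity
    return round(100 - deduction, 1)
```

With this shape, a clean simple project scores 100, and the same mistakes on a complexity-5 project cost half as much as on a complexity-0 one, which may be closer to the fairness you're after than subtracting the percentage from the score directly.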
Any help would be hugely appreciated. Does anyone else on here have to review projects/developers?