This was asked in the MoT Slack, but I think it’s worth bringing over here to The Club.
At my company we are constantly being asked to provide data to back up our testers’ testing quality (test cases run, bugs found, etc.), but there has to be a better way…right?
My favourite answer that came in:
I feel like because testers’ jobs are so varied and flexible, it’s hard to measure
What are your thoughts around this?
Is this something your company does?
In our company, tester performance (as well as dev performance, BA performance, PM performance etc.) is measured by peer feedback collected by our line managers. The only quantitative metric is a 0-3 scale, where 0 is ‘doing the bare minimum’ and 3 is ‘goes above and beyond’. The rest of our feedback is qualitative - what we’re doing well, what we could improve on, what we should/could be focusing on to develop our careers further. This feedback is used during annual pay reviews as well.
When going for a promotion we go through a similar process, but peers are asked to give feedback in terms of how we measure up to the job description of the role we are applying for.
The only other way that testers are measured is in terms of client satisfaction - if a client is happy with the work delivered, then they are happy with the performance of the team, including the tester(s).
This is something that is particularly close to my heart, as I’ve not long become head of the QA department here. I think it’s something that’s very hard to do. If one tester finds a load of defects, it doesn’t necessarily mean that they are better at finding them - it could just mean that their squad is better at introducing them. I know I’m preaching to the choir, but not all defects are equal. Also, as we work as a team, it’s everybody’s responsibility to find them. If you’re doing this for wage discussions, then I think the best approach is just speaking to the squads they’re working with: see how their relationship with the devs and POs is, and whether they believe the tester is doing a good job. Being squad-based, the QA team are dotted around the building, so it’s difficult to see them in action.
One thing I’ve thought of whilst writing this: I’ve asked my team to let me know what areas of testing they’re interested in - for example, learning more about automated scripts, or improving coding ability. I may suggest that their performance is partly judged on whether they’ve worked on learning those skills. The problem with this is that external factors might mean that time outside of work is limited - people’s circumstances vary, and this should be taken into account.
I’ve spent the last 5.5 years as the only tester in my organization, which makes it… interesting. Over time, my appraisals have evolved towards a combination of analyzing bugs introduced to production with a goal of reducing the number of preventable ones (and that’s one seriously squishy metric), and how well I work with the development team.
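For what it’s worth, here’s a minimal sketch of how that squishy metric might be tallied - assuming a hypothetical bug log where each production bug has already been judged preventable or not (that judgement being the squishy part); all names and data here are made up for illustration:

```python
# Hypothetical production-bug log; the "preventable" flag reflects a
# human judgement call made during bug review, not anything automatic.
production_bugs = [
    {"id": "BUG-101", "preventable": True},
    {"id": "BUG-102", "preventable": False},  # e.g. hardware-only issue
    {"id": "BUG-103", "preventable": True},
]

# Tally the share of production escapes judged preventable.
preventable = sum(1 for bug in production_bugs if bug["preventable"])
rate = preventable / len(production_bugs)
print(f"Preventable escapes: {preventable}/{len(production_bugs)} ({rate:.0%})")
```

The arithmetic is trivial; the metric only means anything if the ‘preventable’ classification is applied consistently, which is exactly why it stays squishy.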
I’m not sure if it helps or not that I do a lot of “non-tester” work: I maintain a modest set of testbed virtual machines, manage our TFS server (mainly because I was the one who set it up as a test bed and the team migrated to it), and take on pretty much any other odd job I see a need for.
It works, sort of.
Personally I think the best way to determine if a software tester is good at the job is old fashioned observation mixed with knowing what’s going on. That will be different for every testing position, and quite possibly for every tester.
I think every business is different, with different needs that change over time. The best we can do in our roles is to communicate the value that we are offering and continuing to bring, in whatever way we can. I have found value in at least sharing what we as a group value and how that is demonstrated. I wrote up more details on my blog about using rubrics. This is not tied to a performance evaluation. I also like this approach because I can show new hires exactly what we expect and look for.
For me this comes down to being an involved participant in the development life cycle. At the end of the day, all testers hear about is the bugs they missed (whether or not it was actually possible for them to find the issue in the first place, given the environment or lack of certain pieces of hardware). We need to be integrated into the development process so that the entire dev team sees all of the value we are bringing. While my boss continually asks about bug counts, I think a better indicator of tester success is the culture of quality embedded in the software team the tester is a part of, and the overall approach the team takes to testing. If your tester isn’t willing to become an involved participant in the entire dev process, they are probably in the field for the wrong reasons.
Measuring individual performance, not just that of testers but of any role, is extraordinarily hard and fraught with the peril of badly distorting things. Bad actors will always game the system: “Look at the 1,253 bugs I filed on improper punctuation!” This is part of why over the years I’ve come to really dislike (OK, outright hate) using metrics like number of test cases written/executed, number of bugs filed, etc.
I’ve long tried to use “up one level” metrics as part of any review or performance process. By that I mean looking at how the individual helps the team succeed, or, better yet, how the organization as a whole improves.
Some examples I’ve used include number of support tickets filed 5/10/30 days after a release, number of renewed licenses, etc.
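To make that first example concrete, here’s a rough sketch of how such a number might be pulled - the release date and ticket timestamps are hypothetical, invented purely for illustration:

```python
from datetime import date, timedelta

def tickets_within(release_date, ticket_dates, window_days):
    """Count support tickets opened within window_days of a release."""
    cutoff = release_date + timedelta(days=window_days)
    return sum(1 for opened in ticket_dates if release_date <= opened <= cutoff)

# Hypothetical data for illustration only.
release = date(2019, 3, 1)
tickets = [date(2019, 3, 2), date(2019, 3, 8), date(2019, 3, 25), date(2019, 4, 15)]

for window in (5, 10, 30):
    print(f"Tickets within {window} days: {tickets_within(release, tickets, window)}")
```

The counting itself is easy; the hard part, as noted below, is attributing those tickets to anything an individual tester did or didn’t do.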
Those things are HARD to measure. Which is why a lot of organizations simply cop out and take the really awful approach of using bad measurements instead.
As testers we need to be part of changing this! We should raise the bar on how we demonstrate the value we truly provide to our organizations.
I definitely agree with these points. I’d also highlight that there’s a fine balancing act between individual vs. the “up one level” metrics. If you don’t have enough insight into individual performance, it’s really easy for someone to skate along as the “number of support tickets filed 5/10/30 days after a release” decreases, so everyone gets rewarded.
A few other points/thoughts:
- objective metrics are very easy to game and often incentivize the wrong things
- subjective metrics require managers who are truly engaged, and can encourage office politics
- the flip side of performance measurement is goal setting
As an individual contributor, I’ve pretty much given up on objective measures of what I do, and cross my fingers and hope that I’ve got strong management that recognizes my value. Not a great answer for OP, nor does it highlight how to encourage lower performers to improve.
“As an individual contributor, I’ve pretty much given up on objective measures of what I do, and cross my fingers and hope that I’ve got strong management that recognizes my value.”
I resonate so hard with this. I’ve really stopped caring about bug counts and have started caring more about the perceived quality of my products. I hope that the numbers are indicative of the hard work the dev team and I are doing, but even if they aren’t, I’ve become so integrated in the team that it’s clear I’m providing value. I’ve found that “making friends” with your dev team is imperative to success.
I couldn’t agree more @mrecord21! A collaborative approach to quality - making quality a team sport - helps everyone and makes for better products. While I, or any other person on my team regardless of their role, can provide work products to their manager demonstrating they’re “good at their job”, an ability to work with others is a primary contributor to product success.
In my opinion, there is a constant opportunity for testers, test leads, and those providing test engineering services (automation, testability assessments, etc.) to lead a project team towards quality product construction well before the first line of code is written, and certainly before the first test plan is created.
It may have already been said, but two good measures of tester performance are development feedback and productivity.
Productivity is the obvious point of interest: is the tester doing their job and getting it done on time, and what types of issues are they finding? Any tester with some training will be able to write a test case/user story, test, and find bugs. A good tester will be able to create test cases/user stories that cover what needs to be covered and more. Their stories will be clear and understandable to both testers and developers. A good tester will also pay attention to the little stuff that needs to be addressed as well as the big issues. A lot of little things add up to big things.
The second measure is development feedback. Is the tester finding the less obvious bugs and asking questions? Developers are busy people too. However, they are generally more than willing to spend 5 minutes explaining why a bug happened if they know that the tester they are talking to will take the initiative to see if the same thing is happening in other areas of the software that work the same way. They are happier to know that one bug fix needs to be applied to three other areas of the software now, rather than getting three additional bug reports over the course of the next few days or weeks. It’s cheaper and easier to make one universal fix now than three individual fixes later.
There are lots of things that can help software testing companies evaluate their software testers…
But I think there are a few skills that make someone a good - or even unique - tester:
1. Creativity
2. Scepticism
3. Perspective
4. Stubbornness
5. Communication skills
Software testing is one of the most important aspects of an IT company. Testing means verifying the product or application from an end user’s perspective.
The tester plays a very important role in this, as they test the product and confirm that it is deliverable to the client.
During the testing process they carry out various types of testing to make sure that the functionality is working properly, find bugs, and retest to confirm those bugs are fixed properly. They also make sure that what we deliver to the client matches his or her requirements.
Software testing helps to improve product quality, fulfil requirements, and cover other important aspects too. There are many test automation services used by IT companies to test products or automate scripts in a project using automation tools.
One way is a quarterly rating system, which the company uses to assess the performance of its software testers.
The other way is in terms of client satisfaction - if the client is happy with the work delivered to them, it means they are happy with the team’s performance.