How does your company determine if a software tester is good at his or her job?


(Rosie) #1

This was asked in the MoT Slack, but I think it’s worth bringing it out here to The Club.

At my company we are constantly being asked to provide data to back up our testers' testing quality (test cases run, bugs found, etc.), but there has to be a better way…right?

My favourite answer that came in:

I feel like because testers’ jobs are so varied and flexible, it’s hard to measure

What are your thoughts around this?
Is this something your company does?


(James Sheasby Thomas) #2

In our company, tester performance (as well as dev performance, BA performance, PM performance etc.) is measured by peer feedback collected by our line managers. The only quantitative metric is a 0-3 scale, where 0 is ‘doing the bare minimum’ and 3 is ‘goes above and beyond’. The rest of our feedback is qualitative - what we’re doing well, what we could improve on, what we should/could be focusing on to develop our careers further. This feedback is used during annual pay reviews as well.

When going for a promotion we go through a similar process, but peers are asked to give feedback in terms of how we measure up to the job description of the role we are applying for.

The only other way that testers are measured is in terms of client satisfaction - if a client is happy with work delivered, then that means that they are happy with the performance of the team, including the tester(s).


(christian dabnor) #3

This is something that is particularly close to my heart, as I've not long become head of the QA department here. I think it's something that's very hard to do. If one tester finds a load of defects, it doesn't necessarily mean that they are better at finding them - it could just mean that their squad is better at introducing them. I know I'm preaching to the choir, but not all defects are equal. Also, as we work as a team, it's everybody's responsibility to find them.

If you're doing this for wage discussions, then I think the best approach is simply to speak to the squads they're working with: see what their relationship with the devs and POs is like, and whether they believe the tester is doing a good job. Being squad-based, the QA team are dotted around the building, so it's difficult to see them in action.

One thing I've thought of whilst writing this: I've asked my team to let me know what areas of testing they're interested in - for example, learning more about automated scripts or improving coding ability - so I may suggest that their performance is partly judged on whether they've worked on developing those skills. The problem with this is that external factors might mean that time outside of work is limited - people's circumstances vary, and this should be taken into account.


(Kate) #4

I’ve spent the last 5.5 years as the only tester in my organization, which makes it… interesting. Over time, my appraisals have evolved towards a combination of analyzing bugs introduced to production with a goal of reducing the number of preventable ones (and that’s one seriously squishy metric), and how well I work with the development team.

I'm not sure if it helps or not that I do a lot of "non-tester" work: I maintain a modest set of testbed virtual machines, manage our TFS server (mainly because I was the one who set it up as a test bed and the team migrated to it), and handle pretty much any other odd job I saw a need for and so started doing.

It works, sort of.

Personally, I think the best way to determine if a software tester is good at the job is old-fashioned observation mixed with knowing what's going on. That will be different for every testing position, and quite possibly for every tester.


(Alan) #5

I think every business is different, with different and changing needs. I think the best we can do in our roles is to communicate the value that we are offering and continue to bring, in whatever way we can. One thing I've found valuable is sharing what we value as a group and how that is demonstrated. I wrote up more details on my blog about using rubrics. This is not tied to a performance evaluation. I also like this approach because I can point out to new hires exactly what we expect and seek.


(christian dabnor) #6

A good read, cheers. I've just read it and passed it on to our HR department to have a look at.


(Matthew Record) #7

For me this comes down to being an involved participant in the development life cycle. At the end of the day, all testers hear about are the bugs they missed (whether or not it was actually possible for them to find the issue in the first place, given the environment or the lack of certain pieces of hardware). We need to be integrated into the development process so that the entire dev team sees all of the value we are bringing. While my boss continually asks about bug counts, I think a better indicator of tester success is the culture of quality embedded in the software team the tester is part of, and the overall approach that team takes to testing. If your tester isn't willing to become an involved participant in the entire dev process, they're probably in the field for the wrong reasons.


(Rosie) #8

(Alan - I added your blog to our Testing Feeds)


(Jim) #9

Measuring individual performance, not just that of testers but of any role, is extraordinarily hard and fraught with the peril of badly distorting things. Bad actors will always game the system ("Look at the 1,253 bugs I filed on improper punctuation!"). This is part of why, over the years, I've come to really dislike (OK, outright hate) metrics like number of test cases written/executed, number of bugs filed, etc.

I've long tried to use "up one level" metrics as part of any review or performance process. By that I mean looking at how the individual helps the team succeed, or, better yet, how the organization as a whole improves.

Some examples I’ve used include number of support tickets filed 5/10/30 days after a release, number of renewed licenses, etc.
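To make that first metric concrete, here's a minimal sketch of how it could be computed, assuming you can export release dates and ticket timestamps from your tracker (the data shapes and values here are hypothetical, not from any particular tool):

```python
from datetime import datetime, timedelta

def tickets_within_window(release_date, ticket_dates, days):
    """Count support tickets filed within `days` days after a release."""
    window_end = release_date + timedelta(days=days)
    return sum(1 for t in ticket_dates if release_date <= t <= window_end)

# Hypothetical data exported from a ticketing system
release = datetime(2019, 3, 1)
tickets = [datetime(2019, 3, 2), datetime(2019, 3, 8), datetime(2019, 3, 27)]

for days in (5, 10, 30):
    print(f"Tickets within {days} days of release: "
          f"{tickets_within_window(release, tickets, days)}")
```

Tracked release over release, a shrinking count is one (imperfect) signal that the whole team's quality work is paying off.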

Those things are HARD to measure. Which is why a lot of organizations simply cop out and take the really awful approach of using bad measurements instead. :frowning:

As testers we need to be part of changing this! We should raise the bar on demonstrating the value we truly provide to our organizations.

Rant over.


(christian dabnor) #10

Yes! Absolutely agree with this! More and more I would like my team’s role to be one of coaching and questioning rather than manually testing.


(ernie) #11

I definitely agree with these points. I'd also highlight that there's a fine balancing act between individual and "up one level" metrics. If you don't have enough insight into individual performance, it's really easy for someone to skate along as the "number of support tickets filed 5/10/30 days after a release" decreases and everyone gets rewarded.

A few other points/thoughts:

  • objective metrics are very easy to game and often incentivize the wrong things
  • subjective metrics require managers who are truly engaged, and they can encourage office politics
  • the flip side of performance measurement is goal setting

As an individual contributor, I’ve pretty much given up on objective measures of what I do, and cross my fingers and hope that I’ve got strong management that recognizes my value. Not a great answer for OP, nor does it highlight how to encourage lower performers to improve.


(Matthew Record) #12

“As an individual contributor, I’ve pretty much given up on objective measures of what I do, and cross my fingers and hope that I’ve got strong management that recognizes my value.”

I resonate so hard with this. I've really stopped caring about bug counts and have started caring more about the perceived quality of my products. I hope that the numbers are indicative of the hard work the dev team and I are doing, but even if they aren't, I've become so integrated in the team that it's clear I'm providing value. I've found that "making friends" with your dev team is imperative to success.


(Joe) #13

I couldn't agree more @mrecord21! A collaborative approach to quality - making quality a team sport - helps everyone and makes for better products. While I, or any other person on my team regardless of their role, can provide work products to their manager demonstrating that they are "good at their job", the ability to work with others is a primary contributor to product success.

In my opinion, there is a constant opportunity for testers, test leads, and those providing test engineering services (automation, testability assessments, etc.) to lead a project team towards quality product construction well before the first line of code is written, and certainly before the first test plan is created.

Joe