Assessing The Performance of Your Testing and QA Team

Good Morning MOT,

First time poster and looking to interact more within the Testing Community. As an employee I have yearly performance reviews and have been tasked with assessing the performance of my team in both a quantitative and a qualitative way. I believe I have a handle on the qualitative side, as I will assess my team against the recommended test practices that have been agreed for the company.

However, the quantitative side feels very subjective and difficult to categorise. My line manager suggested number of test cases written per hour as a starter, but as we know it's not as simple as that! So my question to the community is: have you ever had to measure yourself or your team in a quantitative way, and if so, how?

Many Thanks
Ricky

7 Likes

Welcome Ricky!

Red flags on the test-cases-per-hour measurement; that would encourage an increase in poor-quality tests, IMO. I've managed teams and done performance reviews for years and I've never had specific quantitative metrics, but I may look over some of their input into projects they have worked on: some of the tests they'd written, the bugs they'd raised, and the overall performance of the project once live. We do track things like "defect leakage into production", but not down to specific people. I know that doesn't help from a quantitative side, but I don't like boxing my team in around a metric.
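If it helps to make that one concrete: defect leakage is just the share of defects that were first found in production rather than before release. A minimal sketch, assuming you can export defect records from your own tracker; the field names and figures below are made up.

```python
# Minimal sketch: team-level defect leakage rate.
# Hypothetical defect records exported from your tracker, each tagged
# with the phase where the defect was first found.
defects = [
    {"id": "BUG-101", "found_in": "testing"},
    {"id": "BUG-102", "found_in": "production"},
    {"id": "BUG-103", "found_in": "testing"},
    {"id": "BUG-104", "found_in": "testing"},
]

leaked = sum(1 for d in defects if d["found_in"] == "production")
total = len(defects)

# Leakage rate = defects found in production / all defects found.
leakage_rate = 100 * leaked / total if total else 0.0
print(f"Defect leakage into production: {leakage_rate:.1f}% ({leaked} of {total})")
```

Tracked per release at team level, the trend of that number is far more useful than trying to pin it on individuals.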

2 Likes

Hi @ricky, welcome to the community! Thanks for sharing your context and for posting your question.

There’s a good thread here about release metrics & indicators which might trigger some ideas.

Also, is the following talk on your radar? @jennydoesthings shares so much incredible insight and reflects on why we track things – and some of the things to look out for in doing so: The Only Good Quality Metric is Morale.

Good luck exploring this topic and thanks again for joining the Club to post here today. :grinning:

1 Like

Howdy @ricky!

I would lean into measuring what matters. For my org, we have project deadlines, and tracking those from a high level is one way we measure this (though it's more of a team metric). A few other things along the lines of quantitative metrics that could be useful:

  • Topics presented at team meetings (give testers a platform to share what they are learning to make the team better). This could be sharing a tool or a process with the team. Something like this could sit under a goal of 'Continuous Improvement'.
  • Are there any tools that your teams build or contribute to? A tool can speed up testing by generating data or putting the system in a specific state.
  • Adding automation checks and performance checks can be helpful to show, but not useful for comparing team members against. For my teams, I expect to see progress quarter to quarter and year to year rather than stack ranking based on the amount contributed.
  • If team members are involved in releasing code or post-deployment testing, keeping track of deployments can be useful to paint a picture of the value a team member brings (a rough sketch of such a tally follows this list).
  • If there are specific roles that are shared or rotated between team members (scrum master, release coordinator, etc.), noting how many weeks/sprints someone served in that role can be useful.
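For the tracking-style items above (deployments, rotating roles), the mechanics can be as simple as a tally over whatever records you already keep. A rough sketch, with entirely hypothetical names and records:

```python
# Rough sketch: tallying deployments and rotation-role weeks per team member.
# The lists below are hypothetical; in practice they'd come from your
# deployment log or sprint notes.
from collections import Counter

deployments = ["alice", "bob", "alice", "carol", "alice", "bob"]
release_coordinator_weeks = ["bob", "bob", "alice", "carol"]

deploys_per_person = Counter(deployments)
coordinator_weeks_per_person = Counter(release_coordinator_weeks)

for person in sorted(set(deployments) | set(release_coordinator_weeks)):
    print(f"{person}: {deploys_per_person[person]} deployments, "
          f"{coordinator_weeks_per_person[person]} weeks as release coordinator")
```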

Hope that helps!

1 Like

Quantitative measures of business and performance impact are dangerous, IMHO. They can lead to gamification, which can be fun for less serious things but damaging to the morale and wellbeing of individuals and teams. It basically says that your only worth is as someone who churns out code/tests, regardless of the quality of that work.

MS reviews/Connects look at the core business deliverables and priorities; Diversity and Inclusion is one of the core priorities, for example, and your impact on delivering against those priorities is measured. So, when I came to do mine, they had to be rewritten not just around my personal goals and ambitions, but also around team, division and company priorities, negotiated and agreed with my line manager.

I've never sought to measure myself in the way you have described, and I would always push back on it now, for myself and for all of my peers and colleagues. It's unhelpful and unhealthy.

1 Like

I asked a similar question a while back about metrics for testers, as I was trying to come up with SMART goals.

While my company has fully embraced goal setting, I've found it really hard to do as an individual; as you allude to, it's difficult to come up with something measurable, and especially something quantitative.

The more I thought about my goals, the more obvious it became that all the things I'm being measured on as an employee are really reflections of my team. Especially if you're doing agile/embedded work, you're not necessarily specialized; you're ideally doing whatever the team needs, which may or may not align with a goal you came up with months ago.

My manager agreed, so we mainly do pro forma usage of the goal tracking system. If the team is struggling, we try to figure it out as a team, and try to improve in that regard mainly via retros. That’s not to say everything is rainbows and unicorns (we had a prod incident last week that’s definitely got our morale down, and trying to sort out some of the aftermath is going to be a slog), but really, we’re learning from it and trying to make our product and processes better.

1 Like