Metrics for Testers?

tl;dr: Does anyone have any ideas for metrics for testers?

So, I’ve been racking my brain and doing all kinds of searching online to come up with some goals for the next quarter. My company is making a push to have scorecards for employees and is promoting the idea of SMART goals, so I’m trying to think of measurable goals.

I like Whittaker’s take on this: “measure how much better a tester has made the developers on your team”, but the real question then is how do you measure that?

For context, I’m an embedded SDET, in a reasonably agile organization - I participate in most steps in the process outside actual implementation - requirements, design, code review, functional testing, etc.

I’m not looking for actual goals, but wanting to start with the Measurable part of SMART. I started a mindmap on this, beginning with the low-hanging fruit: things like bugs written, stories verified, etc. I don’t like these because they’re too game-able and don’t really tie to the idea of making the developers better. And that’s where I ran into a wall and couldn’t come up with other ideas... does anyone have any ideas for metrics for testers?

p.s. maybe there should be a “Professional Development” category?

In the context of SMART goals, Measurable and Metrics are two different beasts.

For example, “Create a test strategy document” is measurable. The existence of a document is the measurement. It can also be clarified by “Use the document to plan the tests” or “Create test sessions based on the strategy.” It can be supported by “Have the team review the document” or even “Plan a meeting to discuss the test strategy with our team.” So far as I know, there are no reasonable metrics for making a strategy. There are things you can measure (completeness, follows a template, page count, word count, etc.), but nothing I can think of off the top of my head that would say “This is a good test strategy document.”

On the other hand, your “low hanging fruit”, such as bugs written, stories verified, etc., are, in fact, measurable, but are they Reasonable or Realistic? (The “R” is often defined as one of those.)

Anyhow, in my current organization, we have “Realistic” goals such as “Cover 70% of the test cases with automated tests before the release is ready for test” (that’s the T) and “Run the acceptance test suite in less than 8 hours” (8 hours is a 16-hour improvement over a year ago, though it’s not very deep testing).
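Goals like those are easy to track precisely because they reduce to simple arithmetic. A minimal sketch of how you might compute them (the function names and numbers here are purely illustrative, not from any real tool; in practice the inputs would come from a test management system or CI timings):

```python
def automation_coverage(automated: int, total: int) -> float:
    """Percentage of test cases covered by automated tests."""
    return 100.0 * automated / total

def runtime_improvement(old_hours: float, new_hours: float) -> float:
    """Hours shaved off the acceptance test suite run."""
    return old_hours - new_hours

# Hypothetical numbers matching the goals above:
print(automation_coverage(140, 200))   # 70.0 -> "cover 70%" goal met
print(runtime_improvement(24.0, 8.0))  # 16.0 -> hours faster than a year ago
```

The point is less the math than that both metrics have an unambiguous definition and a clear deadline, which is what makes them workable SMART goals.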


Hello Ernie!

I wonder the same as @brian_seg about the low hanging fruit. “Stories verified” is a measure of something but I’m not sure of the value. The measurement of bugs or defects has long been discounted and even discouraged since it is easy to game, does not always result in expected behaviors, and is, in my opinion, a poor measure of testing or development.

I recently started reading How to Measure Anything (Hubbard) and believe (after just a few chapters) that measuring testers is possible. However, it depends on how the measurement is used. Perhaps you might start by asking about the decisions or behaviors that could be impacted by a measurement of testers or developers or testing or developing. That is, what is the value of the information you collect through the measurement, and what decisions might be easier when you have the information you need?



So far as I know, there are no reasonable metrics for making a strategy.

@brian_seg totally agree with you that you can’t measure the strategy itself directly, but I think you could easily have metrics around its use/adoption, e.g. % of new features with test strategy documents, or, as a goal, “70% of epics/new features/whatever will have meetings to define test strategy.” Neither of these seems great to me in my current org (the goals would be goals to have goals, not work toward improving a pain point or making me a better tester in general), but I like the discussion.

@devtotest Sure, like I said, I don’t like the low hanging fruit because “they’re too game-able and don’t really tie to the idea of making the developers better”... like you both note, they’re easy to measure, but they’re not useful measurements, which is why I’m crowdsourcing my brainstorming. I do think we (including Hubbard) are on the same page: we’re thinking about our end goals so that we can ask the right questions and measure the right things.

For reference, I also cross-posted this question to Reddit, where it’s gotten a bit of traction.