Metrics worth capturing?

Hey all, this is a question I know has been asked before, and I know metrics are a contentious issue that some disagree with. But…

What metrics do you think are useful to measure the features of your testing?

I’m not looking at this from the view of senior stakeholders, but from the view of, say, a principal engineer/test lead (or a similar role that is accountable for the testing strategy of a company/division/function). This is something I’ve been thinking about for a while. I think it helps to tie it into a mission statement for testing. For example, my mission statement would be something like:

Continually deliver working software that meets customers’ needs, by focusing on preventing defects and providing fast feedback.

On the back of this, a few metrics that come to mind for me are:

  • 10-day rolling bug/live issue count (plus which way this is trending) - For me this is the ultimate metric that defines whether testing is meeting its purpose or not (a rough sketch of this follows the list)
  • System uptime
  • Build times (measuring how quickly we can give that feedback as per the mission statement)
  • Story run time (How long does a story take to get from inception to deployment? The goal of testing is to catch bugs, but a feature of that is how quickly we can do it. If it takes 2 years, there’s perhaps a problem here; if you could see this across teams, it might help to spot a bottleneck)
  • % of broken builds - (hoping to visualise how efficient our build process is: are there flaky tests? Are there common failures, and if so, could they be prevented?)
  • Mean time to resolution - (perhaps a measure of the average time a build stays broken, aiming to improve how efficiently we diagnose issues; if it takes 40 minutes to decipher a test report and figure out why the tests failed, that’s perhaps an issue)
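
To make the first of those concrete, here’s a rough sketch of how a 10-day rolling open-issue count could be computed. The data shape is entirely made up; a real version would pull opened/closed dates from whatever bug tracker you use:

```python
from datetime import date, timedelta

# Made-up bug tracker export: one (opened, closed) pair per bug/live issue;
# closed is None while the issue is still open.
bugs = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 3), None),
    (date(2024, 3, 8), date(2024, 3, 9)),
]

def open_count(day: date) -> int:
    """Issues open at any point on the given day."""
    return sum(
        1 for opened, closed in bugs
        if opened <= day and (closed is None or closed >= day)
    )

def rolling_counts(end: date, window: int = 10) -> list[int]:
    """Daily open-issue counts over the trailing window, oldest first."""
    return [open_count(end - timedelta(days=d)) for d in reversed(range(window))]

counts = rolling_counts(date(2024, 3, 10))
print(counts, "trending up" if counts[-1] > counts[0] else "flat or trending down")
```

Comparing the start and end of the window gives the trend direction mentioned in the bullet; a proper version might fit a line rather than compare two points.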

I’d be really keen to know what other people are capturing and how that helps them aid their testing. I don’t want to just capture metrics for the sake of it, or to report to management; I want to drive improvements going forward. I think that’s perhaps why it’s important to distinguish between the goal of testing (defect prevention) and the features of it (how quickly we can do it, how much feedback we can provide to developers).

I’d love to hear your thoughts and opinions.


Hi Tom, that’s good to hear.

I did the same and put some initial KPIs together. First and foremost, be prepared for them to evolve, and continue to question them. Keep asking questions of your metrics: “What is this telling me?”, “What is the outcome if I improve the performance?”, “Can I set a target for this metric?”. I’ve created metrics that I think are interesting, but when I challenge them with those questions, they fall flat. I’ve dropped some of my stats for that reason.

I started simply with bugs, as that’s the easiest data to get at. The questions that drive the metrics are “Are we reducing bugs found in UAT/Production?” and “Are we managing our bug backlog?”. I put together a number of metrics covering aspects of those questions.

For testing, the next simplest metrics are activity-based. They’re quantitative rather than qualitative, but they give a guide to the question “Are we managing the workload?”: “Are we increasing/maintaining the % of automated tests in our regression packs?”, “How much roll-over is occurring in sprints, and how many of those tickets are rolling over assigned to test?”. From this you can identify any resourcing or process issues that require scrutiny.
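
To show the kind of tally I mean, here’s a rough sketch. The data shapes are invented; in practice they’d come from your test management tool and sprint board:

```python
# Invented exports: a regression pack where each case notes whether it is
# automated, and sprint tickets that note roll-over and assignee role.
regression_pack = [
    {"id": "RT-1", "automated": True},
    {"id": "RT-2", "automated": True},
    {"id": "RT-3", "automated": False},
]
sprint_tickets = [
    {"id": "SP-1", "rolled_over": True, "assigned_to": "test"},
    {"id": "SP-2", "rolled_over": True, "assigned_to": "dev"},
    {"id": "SP-3", "rolled_over": False, "assigned_to": "test"},
]

automated_pct = 100 * sum(c["automated"] for c in regression_pack) / len(regression_pack)
rollovers = [t for t in sprint_tickets if t["rolled_over"]]
test_rollovers = sum(t["assigned_to"] == "test" for t in rollovers)

print(f"Regression pack automated: {automated_pct:.0f}%")
print(f"Rolled-over tickets: {len(rollovers)} ({test_rollovers} assigned to test)")
```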

That’s just an insight into how I determine which metrics to choose, but I’m continuously reviewing their value. I know I’m currently weak on a deeper dive into our code/build management, but my first step is to be clear about what questions I want to ask. Hope that helps.


Hello @tommcc89, and welcome!

I like the mission statement. I felt it could apply to the entire product development team rather than just the Testing team. I believe “focusing on preventing defects and providing fast feedback” is a proactive approach to product development and helps establish a collaborative mindset. In that manner, the Testing team becomes Quality Advocates. In practice, this might mean the Testing team recommending unit tests and testability practices or improvements.

On the question of measurement, I was surprised to see bug tracking associated with “defines whether testing is meeting its purpose or not.” It seemed orthogonal to the mission statement which, I thought, spoke to the larger role of testing in product development.
While finding bugs is part of testing, the Testing team can assist in all phases of development: as Question Askers when reviewing requirements, Quality Advocates during product design, and Quality Accomplices during product development.

Counting bugs has long been discouraged because it is easily gamed and sometimes misused. Bugs are a part of learning and should be welcomed. If they must be measured, perhaps they could be presented as a rate (e.g., bugs/week, bugs/sprint). Even that could be problematic.
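
If a rate is the least-bad presentation, a small sketch of what I mean (the dates are invented; a real version would read opened dates from your tracker):

```python
from collections import Counter
from datetime import date

# Invented list of dates on which bugs were opened; grouping by ISO week
# turns a raw count into a bugs/week rate.
opened = [date(2024, 3, 1), date(2024, 3, 4), date(2024, 3, 4), date(2024, 3, 12)]
per_week = Counter(d.isocalendar()[:2] for d in opened)  # (year, week) -> count
for (year, week), n in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {n} bugs")
```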

I like the remaining metrics and believe they could provide indications of how the product team (not just Testers) behaves.

Joe


Kudos, you did the right thing by deciding upfront what you want to accomplish! Every word of that statement drives which metrics you would choose and how you would operationalize them. Here are some metrics you can think of:

Continually deliver: How many meaningful features, of value to the customers, you deliver in a specified period of time (two weeks, a month, a quarter)

Working software that meets customers’ needs: How do you know if the software is ‘working’? You ask the customer. Set up feedback loops. You find out how many deliveries out of the lot were perceived as ‘working’ by the customer
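
A rough sketch of both of those counts; the release records are invented, and a real version would read from your release log and whatever feedback tool you use:

```python
# Invented release records pairing each delivery with the customer's
# verdict gathered through a feedback loop.
releases = [
    {"shipped": "2024-01-10", "customer_verdict": "working"},
    {"shipped": "2024-02-02", "customer_verdict": "working"},
    {"shipped": "2024-02-20", "customer_verdict": "not working"},
]

working = sum(r["customer_verdict"] == "working" for r in releases)
print(f"Deliveries this quarter: {len(releases)}")
print(f"Perceived as working: {100 * working / len(releases):.0f}%")
```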

Preventing defects: How do you ‘prevent’ defects? By proactively engaging in all stages of product development, and by ‘testing’ at all stages. Metrics can reflect the stage in which a defect could have been detected, and help you find a problem as early as possible.
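
A rough sketch of that stage-level view (sometimes called phase containment); the defect records and stage names here are invented:

```python
from collections import Counter

# Invented defect records: where each defect was found versus the earliest
# stage at which it could have been detected.
STAGES = ["requirements", "design", "development", "test", "production"]
defects = [
    {"found_in": "test", "catchable_in": "requirements"},
    {"found_in": "production", "catchable_in": "test"},
    {"found_in": "development", "catchable_in": "development"},
]

# Tally defects that escaped past the stage where they were catchable.
escapes = Counter(
    d["catchable_in"]
    for d in defects
    if STAGES.index(d["found_in"]) > STAGES.index(d["catchable_in"])
)
for stage in STAGES:
    if escapes[stage]:
        print(f"{escapes[stage]} defect(s) could have been caught in {stage}")
```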
