How Do You Measure Quality and Quantify It?

I was recently tasked with answering: “How do we measure quality?” and “What metrics would we use?”

I am very new to being a tester and to IT in general, so I am actually struggling to come up with ideas on this. I figured I’d ask the collective brains of the Ministry!

How do you measure quality? How do you quantify it for the business, to help make that shift towards a test-first mentality?


Ah, the eternal question. It can very often only be answered one way: with a question.

Who does it matter to?

That’s the tricky part. I feel as though they are looking for an overarching “this is how we measure quality” so that stakeholders begin to understand the time we’re wasting by shipping as quickly as possible and worrying about the bugs later. The current thinking is that testing up front takes way longer than fixing things after they come crashing down, which doesn’t make sense.

So it’s for us to explain to the business, and to stakeholders, in business terms, that by being quality-focused first we can avoid a lot of the tech debt and issues that always happen.

The company has recently transitioned to Agile and cross-functional teams. So it’s a struggle to show and prove that building with a testing mentality can save a lot of time and money and produce higher-quality products.


The short answer is: You can’t. Quality is an abstract concept and therefore subjective and relative. It has no objective, independent measurement or meaningful unit of measurement on a scale. Quality can only be assessed or evaluated.

The slightly longer answer is: Quality is best defined as value to some person, making quality a relationship. Value can be measured in terms of cost (what someone is willing to pay for that quality), time (how long someone is willing to use or subscribe to a software product) or star/review ratings (like 4/5 stars or an 80% score), but these (and potentially others) are “surrogate” measures. By that I mean they don’t directly measure quality, but they can work as “good enough” substitutes in certain circumstances.

Surrogate measures depend on what problem you’re trying to solve. When you say “shift towards a test first mentality”, what do you mean exactly? :slightly_smiling_face:


How “we” measure quality is entirely dependent on our “version” of success. Let’s not get lost in metrics: if the product is failing to sell but is bug-free™, our problem is either the business idea, the business climate or the salespeople, perhaps even the product’s image.

Yesterday we had a meeting about some 2022 plans, with salespeople showing graphs. The graphs show that people who want to buy the product show interest but, depending on many factors, don’t end up buying. My job as a tester is to ensure things like: does the software run correctly on their machine? If I don’t know what machine they are using, my 93% unit test coverage stats are pretty much pointless, no? If they don’t speak English, what was the point of my testing that functionality and translations are correct, when the UI is only intuitive to English speakers? All that is very extreme, but if the way you measure quality does not tie back to an actual reason customers keep buying the product, it’s probably measuring the wrong thing.

But there are things you can measure, and failing to measure them and to re-evaluate which ones most closely match your context is itself a failure. Every product exists in an ecosystem; base your quality metrics on things that are well measurable and that matter to that ecosystem’s health.


For me, “capability” or “maturity” is a definite measure, and as such the ability to release more frequently is not a bad metric to have in your arsenal, especially when you are making team, methodology or process changes.


A tricky question! You could use Key Performance Indicators (KPIs) or Objectives and Key Results (OKRs) to help indicate how you’re doing on the quality front. We have a KPI around bugs being triaged within 14 days of being raised, and an OKR around ‘more proactive testing’, which has indicators such as ‘dev work is looked at by testers when in code review’, ‘test plans are created and attached to a task before it goes into test’ and ‘other team members are empowered to test’. Quantifying these is hard, but at least you get some indication of how you’re doing. Hope this helps :slight_smile:
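To make a KPI like that concrete, here’s a minimal sketch in Python of how you might compute the percentage of bugs triaged within 14 days. The bug records and field layout are entirely made up for illustration; in practice you’d pull these dates from your tracker’s API.

```python
from datetime import date

# Hypothetical bug records: (date raised, date triaged, or None if still waiting).
bugs = [
    (date(2022, 1, 3), date(2022, 1, 10)),
    (date(2022, 1, 5), date(2022, 1, 25)),
    (date(2022, 1, 7), None),
]

def triage_kpi(bugs, threshold_days=14, today=date(2022, 2, 1)):
    """Return the fraction of bugs triaged within `threshold_days` of being raised."""
    within = 0
    for raised, triaged in bugs:
        # A bug still awaiting triage is judged by how long it has waited so far.
        effective = triaged or today
        if (effective - raised).days <= threshold_days:
            within += 1
    return within / len(bugs)

print(f"{triage_kpi(bugs):.0%} of bugs triaged within 14 days")
```

Note the trend over time matters more than any single number: a KPI like this tells you whether triage is keeping up, not whether the product is good.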


It’s hard to explain, but our systems were designed to work, not to be tested, meaning it’s actually pretty difficult to test certain aspects of the company’s software. So much so that we had a major release where, because test set-up took so long, the business said “push it and forget testing it”. Since then there hasn’t been a single week where this business-changing product hasn’t gone down and broken.

So we’re trying to influence building products around testability and quality rather than speed. I hope the example above helps you understand the source of the question.


I’d say you’re certainly doing the right thing by advocating for testability early in the SDLC.

“push it and forget testing it”
Ouch. Another way to read this is “push it and forget any risks to customers that we could’ve known about and possibly do something about before it was too late”.
Release times are important, but as long as the business knows the risk of pushing without any testing at all (as in, we won’t know anything about the product or what problems may be lurking), then you’ve done your job as a tester.

In terms of metrics, speed is easy to measure as the time taken to release, which could be measured in hours, days or weeks.

For quality of product, I’d just stick to testing it; uncover risk and threats to value and communicate these to stakeholders as information (not data). Actually say what’s wrong with the product rather than trying to consolidate this into some sort of numbers on a graph.

Same for testability too: assess, evaluate and describe how testability is improving over time, how much easier it’s making your testing job and how much happier you’re becoming with the outcome.

Managers love cheap, easy metrics, unfortunately, but those don’t paint a good picture of quality, or of related things like testability. No wonder you’re struggling: I don’t think anyone in the world has come up with a reliable measurement for quality, and I’m not sure anyone ever will. It’s much more effective to supply information in the form of a story: who may come to harm, and at what cost?

Surrogate metrics can be used to support the story of quality in a way that might matter. The classic example is performance, where data on response times and load times can be used to support an assessment. Things get a bit trickier when trying to identify data to support functional or usability quality, however. You might need to be more specific about which aspects of quality you want to measure.

Testability is a bit more specific so I can come up with a few possible examples, but you still need to identify what it is you want to measure exactly.
Do you want to measure how much testability is being advocated early in the SDLC? How much it’s being taken seriously by developers? How much it’s being implemented? How testable the product ends up being?
You could consider things such as: For every developer meeting to discuss requirements or designs, how many times was testability discussed? How many times wasn’t it discussed? How many times was a testability feature mentioned? How many times was it implemented vs ignored?
For testing in general: How many projects were shipped without testing at all? For how long was testing performed as a percentage of the entire project? How many times did development overrun their deadline into testing? Etc.
Again though, none of these metrics will give you a measurement of quality, but they may be enough to give you some sort of insight into how things are going that can be monitored over time.
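For counting-style metrics like the ones above, a tiny sketch can make the idea concrete. This Python example (the project names and numbers are invented) counts how many projects shipped without any testing and what share of total project time went to testing:

```python
# Hypothetical project records: total duration and days spent testing (0 = shipped untested).
projects = [
    {"name": "billing-revamp", "total_days": 60, "testing_days": 10},
    {"name": "mobile-login",   "total_days": 30, "testing_days": 0},
    {"name": "reports-v2",     "total_days": 45, "testing_days": 9},
]

# Count projects that went out the door with no testing at all.
shipped_untested = sum(1 for p in projects if p["testing_days"] == 0)

# Testing time as a fraction of all project time across the portfolio.
testing_share = (sum(p["testing_days"] for p in projects)
                 / sum(p["total_days"] for p in projects))

print(f"Projects shipped without any testing: {shipped_untested}")
print(f"Testing as a share of total project time: {testing_share:.0%}")
```

As the post says, neither number measures quality itself; they just give you something comparable to watch release over release.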

Sorry for the long post, but there’s another (even longer) blog post by James Bach that you may be interested in, and it’s well worth the read: Assess Quality, Don't Measure It - Satisfice, Inc.


That is a TON of information, and you’ve given me many things to really think through. Luckily my team is all about advocating for testing, so I’m in a good support structure; getting the business side to buy in is the tough part! I seriously appreciate the detailed response, and I will for sure read the post you linked. Thank you so much!


Backing up what @b.fellows said: perhaps the best way to sell testing to the business is to present them with a worst case scenario. Ask “What does an absence of quality look like?”

I once worked on a project which was hailed as “our best tested product ever!”, only to find that the testing had been based on an app with poor specifications, and no account had been taken of the need to change the existing API in the business flow to accommodate the new data our app collected. We ended up with about £3 million of transactions stuck between invoicing and final payments.

As you might imagine, the business took a poor view of that. Their solution was the nuclear option of scrapping the entire in-house IT development team and buying in proprietary products instead.


I can also recommend the BBST Foundations lecture video on measurements as a very good place to learn about metrics:


Yes, quality is something that cannot be measured.
I would say that, for me, quality is:

  1. The immense sense of responsibility and trust I feel when my product manager waits for me to give the green signal that we’re ready for release after testing.
  2. A smooth release with no issues found by users.
  3. When the team is happy with the final product and recognizes my testing effort too.
  4. The satisfaction I get after a day of rigorous testing in which no issues were found.
  5. When my smoke test shows a red cross or a green tick in the pipeline, that’s quality.

I will say, Sean, not everyone is brave enough to come right out and say “this product is untestable”, or be honest and say “we were forced to just release and see what happens”. Normally this situation arises when a product has grown over time and is no longer doing the same thing it did perhaps 10 years ago. Every time I’ve been in a similar situation there has been more than one way out of the hell. Have your constraints or your program’s environment changed over time while the product has not adapted to the pressures placed on it? I think a lot of focus on using metrics to do things like reduce bug counts is not going to save your project. And we all know that tracking bug severity, and even the kinds of defects, does not directly drive product quality. Even measuring average time to fix a bug is a metric designed to tell you how good your programmers are, not how easy it is to fix bugs, and that’s a much more important goal. Measuring quality is all about asking questions that tell you what you should do next, questions that cannot be answered if you don’t have good visibility. We often think that visibility is proportional to bug counts, but often, by nature, the pool of “unknowns” never shrinks if you are looking for bugs in the wrong places.
When you test, you need to be able to isolate side effects, and that requires being able to carve up the product, which is often very hard to do if the product is monolithic and does not give you “eyes” into its insides. Exposing the inner workings of a monolith is one way to let not just the testers, but also the programmers, control the state of the system. I’m not talking about a proper “API” here; even if an “internal API” breaks often, it creates a way to talk not only about how data moves about inside the application, but also about good architecture. I have found, for example, that contentious things like an “internal API” that’s too unstable to use for test automation can be a groundswell that often leads to devs cleaning up the architecture sharpish.

But you may be in a situation where metrics that track incremental changes in the product are a distraction that stops people asking the right questions. Depending on your codebase and architecture, you might be looking at time for a rewrite. I’d be asking a few questions in a retrospective (you do hold retrospectives with recorded actions, at least 12 times a year, don’t you?):

  1. Do the same actions start to come up in every retro meeting? Time to fire the manager.
  2. How long does it take to build “all” of the product code and deploy it to an environment? More than an hour? Time to rewrite your CI system, no seriously.
  3. Is your product monolithic, and thus takes ages to prepare and run any single test? Time to add “test hooks” and “configuration apis”.
  4. Is your codebase so old, that devs take ages to fix bugs in it? Time for a rewrite.

These are just 4 of the quality measurements that might get you asking the right kinds of quality questions; I’m pretty sure there are many more out there. It’s OK to feel a bit like a rebel, even to feel like the boy pointing out “hey, the king is walking down the road with no clothes on!” at times. Managers want solutions, not problems; if a metric is not giving any usable answers, it’s not a useful one, so toss it quickly.

I measure quality by looking at how often we are asking and answering the really ugly questions.


This is a recurring question (you’ll probably find many blog posts about it).
In simple words, quality is value to someone who matters, in a given context, at a given time.
@conrad.braam mentioned “success” - I like that idea… how does your team define “success”? Maybe having a discussion/brainstorming about that can give you implicit answers about the meaning of quality in your case, and what matters to whom.
Note that everything is connected, and metrics can be tricky. If you focus on just one, then you can probably game it easily and forget about all the other things. There are also the DORA metrics (they may be hard to implement, but I think they’re interesting, useful and balanced).
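As a sketch of what tracking two of the four DORA metrics could look like, here’s a small Python example that computes deployment frequency and average lead time for changes. The deployment records, timestamps and field names are made up for illustration; real numbers would come from your CI/CD tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: when each change was committed and when it went live.
deployments = [
    {"committed": datetime(2022, 3, 1, 9),  "deployed": datetime(2022, 3, 2, 17)},
    {"committed": datetime(2022, 3, 4, 10), "deployed": datetime(2022, 3, 7, 12)},
    {"committed": datetime(2022, 3, 9, 8),  "deployed": datetime(2022, 3, 9, 16)},
]

period_days = 14  # length of the observation window

# Deployment frequency: deployments per week over the window.
frequency_per_week = len(deployments) / (period_days / 7)

# Lead time for changes: average time from commit to deploy.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {frequency_per_week:.1f} per week")
print(f"Average lead time: {avg_lead}")
```

The other two DORA metrics (change failure rate and time to restore service) follow the same pattern: simple ratios and durations over a window, which is what makes them balanced, as one metric checks another.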
What I’ve seen and experienced is that many times we can focus on almost meaningless metrics.
Imagine: “number of bugs”. Well, by itself it doesn’t tell how relevant and impactful those bugs may be. Also, if we just focus on that, then we could be harming/forgetting things such as code maintainability and other code quality metrics.
Of course we can also focus on surrogate metrics, like customer reviews or the number of customer tickets. But that is like a bird’s-eye perspective, and quality has so many different facets, like a prism. Besides, we can look at quality at different levels.
My overall suggestion would be not to define those yourself but, as a team, to have a conversation about what quality means, for each person, for the whole team and for the customer. Understand where you are and where you want to be, and then define some measurement points (I don’t want to call them metrics) that you can use as a baseline, so you then know whether or not you’re really improving.
Some time ago I worked with a bunch of teams and asked about their definitions of quality, and I came away with slightly different answers (Some perspectives on quality | Sergio Freire).
I’ve been evolving my view on quality, and nowadays I also connect it with one key aspect in my perspective: confidence.
Quality is affected by what is value, and many things contribute to that (The Quality Ice Cream Truck | Sergio Freire)
I have made a more recent drawing around quality and teams; I will probably write a blog post about it and share it later on.
Hope some of these thoughts help? :slight_smile:



I don’t think there is any requirement to quantify measured quality. When it comes to digital solutions, the end goal is to develop solutions that meet the predefined requirements, work flawlessly and yield satisfied customers.

Since you are a newbie to the industry, I would recommend keeping things simple and avoiding unnecessary stress on your learning curve. After all, QA is all about keeping things simple and free of complications.

It is really not that extreme; it translates exactly to some of the products I’ve been working on.