How do you set measurable goals in QA when quality is hard to quantify?

When it comes to quantification techniques, a lot is said about code quality but far less about overall customer quality, beyond feedback surveys (and who likes doing those?).

I learnt about Evo (Evolutionary Value Delivery) years ago as a way to quantify quality. Some say it is one of the original Agile methodologies. I’ve seen it used as a way to translate business language and requirements into measurable outcomes.

Example:
The sign-in process should be ‘easy’ and ‘intuitive’.
How do you measure ‘easy’ or ‘intuitive’?
Put measurable values on them. For example: it is 'very easy and intuitive' if someone who has never seen the software can sign in in under 15 seconds, 'easy' if it takes under 30 seconds, and 'too hard' if it takes more than 30 seconds.
Now it is measurable.
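
If you automate that measurement, the thresholds drop straight into a check. Here's a minimal sketch in Python; the `measure_sign_in()` helper and the constant names are my own illustration, not part of Evo itself:

```python
import time

# Evo-style thresholds for the sign-in example above, in seconds.
VERY_EASY_LIMIT = 15
EASY_LIMIT = 30


def classify_sign_in(duration: float) -> str:
    """Map a measured sign-in time onto the Evo scale above."""
    if duration < VERY_EASY_LIMIT:
        return "very easy and intuitive"
    if duration < EASY_LIMIT:
        return "easy"
    return "too hard"


def measure_sign_in() -> float:
    """Hypothetical helper: drive the sign-in flow with your
    UI-automation tool of choice and return the elapsed seconds."""
    start = time.monotonic()
    # ... perform the sign-in steps here ...
    return time.monotonic() - start


if __name__ == "__main__":
    elapsed = measure_sign_in()
    print(f"Sign-in took {elapsed:.1f}s: {classify_sign_in(elapsed)}")
```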

This can be applied in many ways and places.

Thank you, this is really helpful.

Yes, I agree: ticket counts or bugs found don't work as meaningful measures of performance. I like the idea of improving the team's ability to test with confidence; it feels like a much more valuable direction, and your examples are giving me some good ideas for setting goals.

I’m aiming to move further into automation, so I’m sure I’ll be referring back to these suggestions as I progress. At my company we still follow a fairly traditional waterfall process, which works for the size of our projects, though I’ve wondered whether adopting some agile practices might help. Either way, improving the speed and quality of feedback to developers seems like a useful measurable goal. Better documentation is a great idea too; with the opportunity to use LLMs in the future, having solid documentation seems like a must.

I’ll look into Tuskr and Qase then. I’m not using a test management tool at the moment, so it’s definitely something I need to explore. I imagine I’ll be posting more questions about that at some point.

I really like the idea of framing goals as user stories. It’s so easy to lose sight of why a goal exists, so focusing on the story first and then deciding how I’ll know I’ve achieved it feels like a great way to stay aligned with what actually matters.

Thank you as well for the reminder that quality is much broader than automation. I agree. Some of the most valuable contributions I make come from noticing an angle we haven’t explored yet, or drawing on past experience of what’s gone wrong before. That instinctive side of testing is still important, even as I’m learning the value of automation.

I really appreciate you taking the time to share this. It’s given me a much clearer sense of direction for shaping my goals.

Have you got any company or org-level goals, like 99% uptime, customer satisfaction, or reduced escaped bugs? I would zoom out and look at what the company is trying to achieve.
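
Those kinds of metrics reduce to simple arithmetic once you pin down the definitions. A quick sketch of two common ones (all figures below are made up for illustration):

```python
# Escaped-bug rate: bugs found in production as a share of all bugs
# found in the period.
bugs_in_testing = 42
bugs_in_production = 3  # the "escaped" bugs

escape_rate = bugs_in_production / (bugs_in_testing + bugs_in_production)
print(f"Escaped-bug rate: {escape_rate:.1%}")  # -> 6.7%

# Uptime: share of the period the service was available.
period_minutes = 30 * 24 * 60   # a 30-day month
downtime_minutes = 45           # total outage time
uptime = 1 - downtime_minutes / period_minutes
print(f"Uptime: {uptime:.3%}")  # -> 99.896%, which meets a 99% target
```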

Thank you for pointing me toward Capers Jones. It’s useful for me to broaden my awareness of what’s out there, and I’ll keep your perspective on the limitations of his approach in mind as I read.

I also appreciate the insight into context‑driven testing. One of the goals I’d drafted was to learn about “best practices”, but you’ve made me realise it’s more valuable to focus on which practices actually work best for our context.

The links and references you shared are genuinely interesting; there’s a lot to explore. I’ll definitely be digging deeper into session-based testing and the RST methodology.

To avoid any misunderstanding, I mentioned Capers Jones as the perfect case study of what you should NOT do. Anyone with a reasonable understanding of mathematics and statistics will be able to confirm the absurdity of his work for themselves.

One of his favourites is to produce pages of calculations that don’t match his measurements. He then “fixes” this by arbitrarily applying a fudge factor on the basis that this is what you need to do in this particular context. However, the fudge factor is magicked out of nowhere and is different for every context. It’s obvious that the calculations are simply wrong and have no predictive capability, but Jones refuses to acknowledge this.

He has also stated that his methodology assumes the organisation is at CMMI (Capability Maturity Model Integration) level 5, which almost no organisations are. This is just deflection, presumably to discredit anyone who doesn’t work for such an organisation. In any case it’s irrelevant: his maths is wrong because it’s based on assumptions that are not true in any context.