Is Product Quality Going Up Or Down?

Does anyone else attempt to answer this question? What do you measure and how?

I’m a new QA Manager of an ecommerce product. My boss has challenged me with determining if the quality of the product-under-test is going up or down (in production) per quarter. I’m a metric hater and really struggling with how to answer this reasonable question.

  • My DevOps intuitions take me to the four DORA metrics, but those are more about throughput and stability than product quality.

  • My other fallback is bugs found in production (escapes). But I kind of hate that one too, because as the product grows and offers more value (quality), that growing value to users may outpace any increase in escapes (see the sketch after this list for one way to normalize for that).

  • This is a global, customer-facing ecommerce site, so user feedback seems like a good quality measure. I’m not sure if we collect it. Surely we must; I’ll ask. Active users per time period I can get. Will these two user-facing measures suffice?
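
On the escapes point above: one way to keep a growing product from automatically looking worse is to normalize escapes by usage. This is a minimal sketch with invented figures and hypothetical field names (escapes, active users, completed orders), not a recommendation of specific thresholds.

```python
# Hypothetical sketch: normalize production escapes by usage so that growth
# alone doesn't make the quality trend look worse. All figures are invented.
from dataclasses import dataclass

@dataclass
class Quarter:
    name: str
    escapes: int            # defects found in production this quarter
    active_users: int       # average monthly active users over the quarter
    completed_orders: int   # successful checkouts this quarter

    def escapes_per_10k_users(self) -> float:
        return 10_000 * self.escapes / self.active_users

    def escapes_per_100k_orders(self) -> float:
        return 100_000 * self.escapes / self.completed_orders

quarters = [
    Quarter("2024-Q1", escapes=42, active_users=180_000, completed_orders=950_000),
    Quarter("2024-Q2", escapes=55, active_users=260_000, completed_orders=1_400_000),
]

for q in quarters:
    print(q.name,
          f"escapes/10k users: {q.escapes_per_10k_users():.2f}",
          f"escapes/100k orders: {q.escapes_per_100k_orders():.2f}")
# Raw escapes went up (42 -> 55), but both normalized rates went down,
# which is exactly the growth effect described above.
```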

What am I missing?

3 Likes

How about asking the testers for their assessment? That’s their job.
Especially the known unknowns and unknown unknowns.
Take some inspiration from the Low Tech Testing Dashboard.

If you cannot trust your testers you have another problem.

Quality is value to a person, which is basically an emotional thing.
Metrics might help here, but they are only a tool.
Ask your boss what quality (values) he would be interested in.

4 Likes

Mandatory “it depends on the context”.

But I like your thoughts on quantitative (DORA) and qualitative (feedback).

Figuring out what good quality means might need lots of people’s input, if it hasn’t already been defined or thought about.

I imagine something like completed transactions would be an interesting measure.

1 Like

“Quality is value to some person (who matters)” - Jerry Weinberg’s quote adapted by Michael Bolton & James Bach.

Who are your stakeholders? Who is asking about quality? What is it to them? This is one of the most important starting questions when you get asked to talk about quality. I’ve added some suggested questions for defining what quality means to you in my article on what quality is.

I love metrics, but I hate misleading, confusing, demotivating metrics. There are a lot of those, unfortunately, like the bug count: it can be gamed, and how motivating is it anyway? If I don’t report anything, does that make the product better quality? Not really.

I am a big fan of team productivity metrics like Accelerate’s four key metrics, as well as monitoring the success KPIs/North Star metrics the company has agreed upon. Working closely with product on defining those metrics can be super motivating for the whole company.

For example, you could have a new feature that is supposed to increase revenue (a simplified example; it likely won’t increase revenue directly, but should increase engagement, which in turn increases revenue). So, let’s measure that once we release it. If the increase shows up, it’s a success.
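
As a minimal sketch of that “measure once we release it” idea: compare a North Star metric before and after the release. The checkout-conversion figures and the two-proportion z-score below are purely illustrative assumptions, not the poster’s method.

```python
# Hypothetical sketch: did checkout conversion move after the feature shipped?
# All figures are invented; a real analysis would also control for seasonality.
from math import sqrt

def conversion(successes: int, sessions: int) -> float:
    return successes / sessions

def two_proportion_z(s1: int, n1: int, s2: int, n2: int) -> float:
    """Rough z-score for the change in conversion (normal approximation)."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Two weeks before the release vs two weeks after (hypothetical numbers).
before = {"successes": 9_800, "sessions": 400_000}
after = {"successes": 11_900, "sessions": 430_000}

print(f"before: {conversion(**before):.2%}  after: {conversion(**after):.2%}")
z = two_proportion_z(before["successes"], before["sessions"],
                     after["successes"], after["sessions"])
print(f"z-score: {z:.1f}")  # a large value suggests the lift is not just noise
```

Even a crude comparison like this makes the “was the feature a success?” conversation concrete.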

4 Likes

Going on pure sales figures is still a decent measure, but it has its problems and isn’t an entirely honest metric.
My “anti rose-tinted-spectacles” take, however, is that any attempt to measure merely changes the thing being measured. At the end of the day you are designing a product so that it gets consumed like a SaaS. I also worry that you could do all of the right things and a competitor could still come along with a better advertising campaign and kill the company.

So many things change in our industry, and so often, that nothing more technical than “is it still making money?” makes sense. Obviously security risks are the one exception, and no amount of KPIs or North Star metrics will cover you there. At the end of the day the business should know whether it is happy with quality based on its specific risk profile and needs in the current climate.

2 Likes

@flynnbops , @sebastian_solidwork , @conrad.connected , @linazu ,

Thank you so much for your thoughtful answers. I’m sorry I didn’t read them until now. I need to do a better job getting my head out of the grind and engaging with all of you more, here.

I used Bach’s Low Tech Dashboard back in 2007! Maybe I will try that again.

And your other responses made me realize that, yeah, quality is probably not the right thing to measure directly; it’s the more tangible things, like transactions. Still, it is fun to try and play the game… to compile measures that can reflect a pedestrian definition of quality.

3 Likes

No problem at all! Have you tried many ideas out since we last heard from you?

1 Like

Trying things, or “doing experiments”, is probably one of the best things we can do, a bit like the Hawthorne effect (aka the lightbulb effect). By shining a light on various areas without fatiguing everyone, you can start to gain insights through carefully designed experiments. I, for one, find the obsession with finding the one metric a waste.

For example:

"Work done" by engineering is often counted as tickets, or planned in agile sprints as points, Teams then use the points value to talk about how much work they did. That we know is completely untrue, because points estimations are really a narrative used to help organize work in ways that a team can efficiently complete it quickly. Our world is full of false metrics, that are still useful.

A shift towards quality conversations is much more useful, and while metrics might give you a handle on preventing quality slip-ups, it’s more important to be holistic. Your quality metrics mean little if someone discovers a security issue, or any other avoidable defect, and you don’t have the team lined up to quickly roll out a patch. Try measuring quality in terms of time-to-patch?
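
For what it’s worth, a time-to-patch measure could be as simple as the median time from a production defect being reported to the fix being deployed. The ticket data and field names below are invented; a real version would pull timestamps from your issue tracker and deployment logs.

```python
# Hypothetical sketch of a time-to-patch measure: hours from a production
# defect being reported to the fix being deployed. All ticket data is invented.
from datetime import datetime
from statistics import median

defects = [
    {"id": "BUG-101", "reported": "2024-03-02T09:15", "patched": "2024-03-02T17:40"},
    {"id": "BUG-117", "reported": "2024-03-10T22:05", "patched": "2024-03-12T08:30"},
    {"id": "BUG-130", "reported": "2024-03-21T11:00", "patched": "2024-03-21T15:20"},
]

def hours_to_patch(defect: dict) -> float:
    reported = datetime.fromisoformat(defect["reported"])
    patched = datetime.fromisoformat(defect["patched"])
    return (patched - reported).total_seconds() / 3600

durations = [hours_to_patch(d) for d in defects]
print(f"median time-to-patch: {median(durations):.1f}h, worst: {max(durations):.1f}h")
# Tracking the median (and the worst case) per quarter gives a trend that is
# about responsiveness rather than raw bug counts.
```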