The Monetary Cost of Poor Quality?

Measuring and reporting on quality can be difficult. It can be especially difficult to advocate to stakeholders for a change that you know will improve the quality of a product.

I remember advocating for security testing of a product several years ago. I was turned down time and time again, even though I knew it was important. I reported on the data that could be taken, what that would mean for users' privacy, and the ethics of it all, but it didn't seem to work :disappointed:

Then one day a HUGE security breach was reported in our industry, in a similar realm to where our product would be based. The news articles reported on the millions, even billions, it would cost the company. Suddenly everyone was interested in security testing :sweat_smile:

Have you ever been faced with a situation where you had to quantify the monetary cost of poor quality?

Well, not always in a testing context, but there have been times when the question has been asked (and sometimes by me): “Just stop for a moment and consider how you think either personal bankruptcy or a five-year custodial sentence will improve your lifestyle.”

It certainly concentrates minds. Sometimes, the cost is more than just an entry in the notes to the annual accounts. Testers working in public service or regulated industry sectors may well have to consider that very question.


Not quantifying poor quality as such, but I've worked in two environments that added tangible monetary costs to the testing process.

The first was a service with a very clear connection between number of users and revenue. When we discussed bugs and risks there, we could translate them into "how many users are prevented from using the product because of this => how much $/hour". So instead of an abstract high/low system, we could say specifically: this is a $1,000,000 problem, or this is a $100 problem. That helped the testers decide what to focus on over time, and any conversations about "should we fix it?" became a breeze. It also forced the testers to think about both probability and impact, e.g. "if this occurs it blocks the user 100%, but it only happens for 1 in 100, so 1% of the total revenue is affected by it" (see the sketch below).
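To make that probability-times-impact arithmetic concrete, here's a minimal sketch of the conversion; the revenue figure and function name are invented for illustration:

```python
# Hypothetical sketch of the "bug severity in dollars" idea described above.
# All numbers and names are made up for illustration.

def expected_loss_per_hour(revenue_per_hour, blocked_fraction, probability):
    """Expected revenue lost per hour while the bug is live.

    revenue_per_hour  -- total revenue the service earns per hour
    blocked_fraction  -- share of the affected user's usage the bug blocks
    probability       -- chance that a given user actually hits the bug
    """
    return revenue_per_hour * blocked_fraction * probability

# A bug that fully blocks affected users, but only 1 in 100 users hit it,
# on a service earning $100,000/hour:
print(expected_loss_per_hour(100_000, 1.0, 0.01))  # -> 1000.0, i.e. $1,000/hour
```

The same formula turns an "urgent vs. not urgent" debate into comparing two dollar figures, which is exactly what made the "should we fix it?" conversations so quick.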
The other environment was during a first-to-market period, and the contract carried heavy fines for delays. That also helped focus conversations about what to do and what needed to be fixed, since there were clauses along the lines of: if this isn't done by this date, to a level the customer accepts, we will be fined something like 50% of the price of the deal.

In both cases, converting abstract labels like "important, high, low, often, urgent" into actual money or real dates transforms all these kinds of conversations.


From what I see, it’s always easier / more effective to show concrete examples.

James Bach has a nice talk on risk-based testing where he talks about Open Coding and Reverse Coding to uncover risk scenarios. Once you have those scenarios, you can estimate the likelihood and cost of incidents, showing how they could happen and what can be done to prevent or mitigate them.

And sometimes things that are hard to estimate are even easier to sell to people, e.g. an incident that got a similar product banned in another country, put C-level executives in jail, or caused mass resignations at the company.
