When did "we need 100% coverage" become the default answer to every quality concern?

At some point in almost every team I’ve worked with, test coverage becomes a number people chase rather than a measure of the problem it was meant to solve. A release goes badly, someone in leadership asks about coverage, and suddenly the team is under pressure to hit 80%, 90%, whatever feels safe to say in a meeting. The actual quality risk that caused the incident rarely gets the same attention.

I’ve been thinking about this more lately because I’ve seen teams spend weeks writing tests to hit a coverage target while their most critical user journeys had maybe two or three tests covering them, none of which ran regularly enough to catch anything useful. The number looked good. The product didn’t behave better.

The harder conversation is about what coverage is actually supposed to tell you. In my experience, teams that track run history and failure patterns over time end up with a much more honest picture of where their gaps are than teams optimizing for a static percentage. A test that runs every build and fails meaningfully is worth more than ten that pad the number.

Have you found ways to reframe the coverage conversation with stakeholders that actually stick? And at what point do you push back versus just writing the tests people are asking for?

2 Likes

To me, it’s part of the organizational culture when people care more about covering the code (and, moreover, covering themselves) than about having the painful but meaningful discussions. As long as the dashboard is green… @linda1

1 Like

I think that 100% coverage should really just be a conversation for developers and automation engineers, not for testers at all. They can then decide what it’s actually 100% of: usually clearly documented, well-understood things like pre-written and agreed acceptance criteria.

Testers should talk more about risk coverage instead: which risks, and at what depth. It would be absurd for that to be 100%, so it’s a very different conversation.

4 Likes

“100%, of which problem-space?” is my stock response to this. I wish I could turn the answer into something catchier, because today, more than ever, the problem-space is changing not only in size but also in shape, and very rapidly. Unless you carve out security and a few other things, you are chasing a moving target that moves faster than your requirements do.

100% code coverage is only useful if the “thing” customers consume is in fact code, and 97% of us here do not ship code, we ship experiences. @andrewkelly2555 is right: we mitigate risk by growing our knowledge. But if we limit our knowledge to our code and exclude the customer experience, we actually know very little. Besides, nobody is asking questions about the code’s behaviour or correctness on a daily basis anyway, but they are asking questions about the experiences customers have: with polls, reviews and marketing mail-shots every month.

Right now I’d be happy with 50%.

1 Like

I have theoretical knowledge of ‘100% coverage’: you know, landing modules on the moon, or performing literal life-and-death functionality, etc. If management truly wants bug-free software, obviously they need to pay for it, and it is up to the devs and QA (assuming there is a split) to make stakeholders aware of the costs. Bug-free software!!!

We’re mostly talking code coverage here. Which is fine, but even 100% coverage of the code isn’t going to give you real coverage. Are all possible outcomes of a function covered, including the failures? Or is there just a quick happy-path test to confirm that it is working, but not how it is working?
And we’re not even talking about coverage of the requirements here.
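To make that concrete, here’s a minimal sketch (the `apply_discount` function and its inputs are hypothetical, my own illustration): a single happy-path test executes every line of a one-line function, so a line-coverage tool would report 100%, yet the function’s behaviour on awkward inputs is never examined.

```python
def apply_discount(price, percent):
    # Hypothetical one-line function: any call executes its only statement.
    return price - price * percent / 100

# Happy-path test: line coverage is now 100%...
assert apply_discount(100, 10) == 90

# ...but these outcomes were never checked, and each is arguably a bug:
assert apply_discount(100, 150) == -50   # discount over 100% yields a negative price
assert apply_discount(100, -10) == 110   # negative discount silently surcharges
```

The coverage report and the actual correctness of the function are answering two different questions.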

There have always been people who ask for 100% test coverage. I’ve found it useful to ask what they mean by 100% coverage, so that you can have a discussion about what is actually useful.

@Matt_Calder In my experience it really depends on the people in the org, but I’ve found it helps to explain things in terms of what I call branch space vs state space (I’m not referring to a branch in the git sense, but to the actual paths the code can take).

For example,

def divide(a, b):
    return a / b

This somewhat useless Python function just divides a by b, and reaching 100% coverage of its branch space is as simple as:

assert divide(1000, 10) == 100

That module would now have reached 100% test coverage in terms of its branch space. But the problem is that its state space is almost entirely uncovered; for example, this would make it crash:

    divide(1, 0)
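To extend that example, a few extra probes (the specific values are my own illustration) show how much of the state space the single branch-space test leaves untouched, even though the coverage tool is already reporting 100%:

```python
def divide(a, b):
    return a / b

# The one call that yields 100% branch-space coverage:
assert divide(1000, 10) == 100

# State-space probes the single test never reaches:
try:
    divide(1, 0)                  # the crash: ZeroDivisionError
except ZeroDivisionError:
    pass

assert divide(1, 3) == 1 / 3      # non-terminating decimal: float behaviour
assert divide(-6, 2) == -3        # sign handling
assert divide(0, 5) == 0          # zero numerator is fine
```

Each of these probes exercises exactly the same branch, which is why branch coverage alone says nothing about them.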