What are your experiences of a 'Zero Bug' policy?

We’ve just released a new article from @prescottlewis707 who has shared his experience of working with a Zero Bug Policy:

Zero Bug Policy: The Myths And The Reality

I was curious to know what other people’s experiences of Zero Bug policies are. Have you tried one? How did it go? I know in the past I’ve worked with teams that have gone down that route, but it has created some confusion about what ‘Zero Bug’ actually means.


Unfortunately, I’ve never worked in a place where this approach was applied, but I really like the idea of tackling bugs as soon as possible and removing a lot of the waste, such as bug reports, as strange as that might sound at first. I don’t think this approach is possible for all companies. I remember seeing a few tweets by Allen Holub on this topic, and I get the impression that it can be pulled off in a very, very “Agile” culture.


I appreciate the approach, but I find its name misleading and would like it to be renamed.
Some people take its title way too seriously.

Disclaimer: We have a B2B product for which we are in frequent contact with our customers and have a trusting relationship. We also know most of the users, at least the super-users, personally. It’s not thousands or millions of anonymous people across the whole planet.

Depending on how strictly you interpret the approach, my current company maybe does it already (to a certain degree and maybe not in all details): bug fixes first, with as little overhead as possible (e.g. treat bugs found in open stories as part of those stories, not as new bugs).

On the other hand, I know that we frequently go into UAT or even production with (known/accepted) bugs.
It happens frequently that we tell our users not to use a specific function for a while and that we will deliver a hotfix soon.

I think we value quality.
Sometimes delivering a baseline of a new feature is more important than delivering it with all variants working right from the start.
Having at least the basic function by a certain date is often a quality in itself for our customers.
That way they can at least start something on their own side; otherwise they would have to postpone their own work.


I’ve seen some people literally advocating for zero-bug deliveries, and I find they have a basic misunderstanding.

At the very least: your product always has bugs; the difference is whether you know about them.
So what might be possible is: “delivery with 0 known bugs”.

As I pointed out before, I’m in an environment where delivery with known bugs is accepted, as quality doesn’t relate only to (the absence of) bugs.

Good old Jerry Weinberg: Quality is value to some person.
This is not only about (no) bugs.


It’s funny, I’ve never worked with this sort of policy before, but we already adopt some of the ways of working. In particular:

having conversations instead of communicating through tickets

I’m a big fan of talking through problems with developers and coming up with a better way or solution on the fly that works for everyone.

Interesting read! I agree the name of the policy is misleading though. The word ‘policy’ implies something enforced. It should just be called a ‘common sense’ approach, where issues found are either accepted, logged as bugs, or fixed as the development process progresses through the iteration.


Interesting! We currently have a ‘Zero Bug’ policy but it depends on the project.
We have a very good group of mature developers; they write unit tests and also test their own work.

All bugs found in the sprint are solved, even cosmetics, most of the time within a day or 2.

If right before the release a new bug appears, and it’s a high/critical/exceptional bug, it will be solved that same sprint; otherwise it will move to another sprint. (It will always be assessed: risk/impact.)

We do still find other bugs that have been in the platform for “a little longer”… and they are moved to the backlog; if a developer has ‘time left’ in their sprint, they’ll pick up these bugs. (We really don’t have that many in our backlog.)

If a bug appears in production (we are a security platform) and it has somewhat of an impact on something, it will immediately get a hotfix; otherwise it will come with the next sprint. (Doesn’t happen too often.)

It requires a very dedicated and mature team to do this to be honest.

What the “Zero Bug Policy” means is a bit… well… there isn’t really a clear definition for us, but our way of working (as written above) is what we treat it as :slight_smile:


I have never seen a zero-bug policy at my workplace.

However, I know that absolutes are dangerous.

I’ll share more thoughts after I read the article.

I LOVE Lewis’ team’s approach to zero bug policy! I’ve worked on a few teams with a similar approach. We focused first on bug prevention, by using practices like TDD and ATDD. We did our best to find bugs before they went to production, with exploratory testing at the story and feature level, and with testing in production (safely, using release strategies like blue/green deploy and release feature toggles). None of the teams I was on managed to eliminate bugs happening in production, but we fixed the ones that escaped right away (or reverted the changes).

A few over the years.

The first one was about breaking the build: if you go a month without a good build, you need to do something. It was a sort of name-and-shame policy; if you break the build, you fix it before you go home.

The second one was a dysfunctional blame game that gave out rewards for no bugs found. Yep, let’s not test and share the reward with developers. It was a long cycle to prod, and generally a dysfunctional result.

The third and fourth were more Scrum-ish agile variations.

If you find a bug that is important enough to fix, stop and fix it straight away. If it’s not important enough, then it’s likely never going to be important enough. This one worked well.

The second variation of this was a very lean definition of a bug. If the acceptance criteria are met on a story, there are no bugs. If you did happen across something else you felt was harmful, it’s still not a bug but an opportunity to add an improvement idea to the backlog. As a tester I’m not keen on this one, as it really does not promote deep testing at all.

In all cases there is the idea of no “known” bugs, or no known bugs that fit our current flexible definition of what a bug is; apart from the dysfunctional cases, where management actually believed in the concept of there being no bugs, including currently unknown ones.


Good article and concept :slight_smile:
Anything that improves quality from the outset fits with the concept of ‘early testing’, and the earlier you find something, the cheaper it is to fix.
In my experience, though, I would struggle with this type of implementation. I do believe there is a place for having ‘user stories’ and ‘defects/bugs’ as separate entities (from a traceability angle). There is maybe a danger that, with too much information in the user story (if that is where the issue is going to be reported), some parts of it can and will be missed.


I summarised my approach to bug backlogs, connected to zero-bug policies, here:

Maybe it makes sense to you, maybe not.

Best regards,



I don’t think it’s ever possible for a project to have zero bugs; it can be reduced to 5% at most due to human error. Mostly dev and QA don’t work together, and most organisations neither follow the process nor have proper documentation from the beginning of the project.

You should never buy into that :joy: . That person should first take an RST course from Michael Bolton/James Bach. All software has bugs or annoying behavior. The important thing is that the users are aware of issues and that the business has accepted the risks. Even Microsoft / Google / browsers / apps release updates every month.

I don’t have any experience with this, however, this conversation led me to talk to my team and we have decided to adopt this method in terms of not having bugs in the backlog; instead we have user stories for areas of change in our production features.
The devs were most excited for the change as now all “production issues” would be a user story and thus have the same workflow and rules as their feature user stories.
I’m curious to see how this turns out in our team.


Why do I have the impression you both did not read the article?
Is it a matter of my subjective perception?

If not:
Despite the name, it does not state anything like that.
The second sentence even is: A sound zero bug policy doesn’t mean the application has no bugs.
It also states “bugs found in production”.

I appreciate the approach, but I dislike its name.
How about (simplified) “Fix Bugs First”?
I know that it has already spread widely and changing the name is not a simple action.


I read the post, but sadly most companies count bugs in production as bug leakage. Changing the title would be good, though.


OK, then I got you wrong here.
You wanted to express that you agree with Lewis’ article, which gives the same message.

The approach that Lewis describes is a little different to what we’ve looked at (and are looking at again). I completely agree with the idea of cutting out steps when you find a bug whilst testing a story. We started having a sub-task type to “track” defects found in a story (my primary team loves its sub-tasks) but with no expectation of writing out a full bug report. It was just to show something is in progress on the board.

When it comes to bugs outside of what you’re actively working on, what we’ve looked to do is a little different. We do enter bugs to go on the backlog if it is something that we want to fix, but perhaps not in this sprint, as it would jeopardise the sprint goal. If I find a bug that we have no expectation of fixing, I still like to enter a bug report and then get it closed as won’t fix. I gather that Lewis and his team wouldn’t enter anything here - or would they update the original user story?

I think there’s value in making a conscious effort to say “we know there’s a quality issue but we are accepting it in the product” as opposed to “nothing to see here”. A good example: I found a bug in a product that hadn’t seen functional updates in about 18 months. I knew it wouldn’t be fixed, so I logged it, told my team and we closed it straight away. A few months later, a customer hit the same issue, raising a support ticket. Because I’d raised the bug and someone in my team had looked over it, we knew exactly what the problem was. Side note: I was off work (or barely working, it’s been a stop-start year) and it was my colleague who remembered “oh yeah, I think Rich raised that, let me search”. That 20 minutes to enter, discuss and close saved a heap of time trying to understand the customer’s explanation and reproduce the issue.
We also add “known issue” text for a lot of the bugs that we close as won’t fix to help support if a customer hits them. Unfortunately some idiot (i.e. me) forgot to publish the text this time, otherwise it would have saved more time!

I’m actually keen to advocate that every time we communicate our policy to other teams etc., we say that we are targeting a zero open bug policy because, as others have said, the name “zero bug” can be misleading.

Edit: Apologies - I never meant to ramble on so much >_<

Since I belong to BugRaptors, one of the leading QA companies providing a dynamic range of software testing and QA solutions, we only work with zero defect leakage assurance.

To make it clearer, defect leakage can be understood as a metric that highlights the efficiency of QA testing by counting the defects that slipped through the QA process. As a formula:

Defect Leakage = Number of defects found during UAT / number of defects found during QA testing

Practically, it is quite difficult to attain a zero-bug outcome, since it requires testers to dive into various scenarios, circumstances, and use cases, which vary between users. However, achieving zero defect leakage is possible, as all the bugs found and worked out can be prevented from reappearing.
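To make the metric concrete, here is a minimal sketch in Python of the defect leakage ratio exactly as stated above (UAT defects divided by QA defects); the counts are made-up illustrative numbers, not real project data:

```python
# Hedged sketch of the defect leakage metric as quoted in this post.
# The example counts are hypothetical, for illustration only.

def defect_leakage(defects_in_uat: int, defects_in_qa: int) -> float:
    """Ratio of defects that slipped past QA (found in UAT)
    to defects found during QA testing."""
    if defects_in_qa == 0:
        raise ValueError("No QA defects recorded; ratio is undefined")
    return defects_in_uat / defects_in_qa

# Example: 4 defects surfaced in UAT, 80 were caught during QA.
print(f"Defect leakage: {defect_leakage(4, 80):.1%}")  # prints "Defect leakage: 5.0%"
```

A zero defect leakage outcome would simply mean the numerator stays at zero: everything found in UAT was already caught in QA.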

Thanks for sharing, @kanikavatsyayan.

That reminds me of @edfischer’s article: Three Ways To Measure Unit Testing Effectiveness

Eduardo highlights the following metric:

Test Case Effectiveness (TCE)
TCE = (BF)/(BF+BNF) * 100% where:
BF = number of Bugs Found as a result of execution of test cases
BNF = Bugs Not Found, which are bugs found in production that slipped through the test cases
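The TCE formula above can be sketched the same way; again, the bug counts here are hypothetical examples, not figures from Eduardo's article:

```python
# Hedged sketch of Test Case Effectiveness (TCE) as quoted above:
# TCE = BF / (BF + BNF) * 100%. Example counts are illustrative.

def tce(bugs_found: int, bugs_not_found: int) -> float:
    """BF = bugs found by executing test cases;
    BNF = bugs found in production that slipped through."""
    total = bugs_found + bugs_not_found
    if total == 0:
        raise ValueError("No bugs recorded; TCE is undefined")
    return bugs_found / total * 100

# Example: 90 bugs caught by test cases, 10 escaped to production.
print(f"TCE: {tce(90, 10):.0f}%")  # prints "TCE: 90%"
```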