Why do testers miss bugs?

We’re human, and ultimately we will miss bugs from time to time. If you had to give reasons for why testers miss bugs, what would you suggest?

@tybar suggested:

  • Had to make trade-offs to ship by a deadline, so we chose to focus testing on the areas most likely to be buggy, or the areas with the most business value.

@andreas7117 offered:

  • unclear/unknown requirements
  • unknown code changes
  • no understanding of the business
  • no understanding of the technical surroundings/environment

@proffy76 added:

  • phenomenon known as inattentional blindness (I’ve experienced that myself too many times :rofl:)
  • multitasking and/or context switching, especially when priorities keep changing all the time

What would you add to the list?

  • unaware of a test technique.
  • only basic knowledge of a test technique. I once misinterpreted the numbers.
  • missing personas. If a user can only use websites and does not know about system administrator stuff, what could happen?
  • no good grasp of the functionality. E.g. the audit log contains the users’ interactions with the system. According to GDPR, EU citizens can ask who read their personal data. Are these actions recorded?
  • take it for granted. The test passed, but maybe something else went wrong.

Sometimes I think of the test I should run, but talk myself out of actually doing it. Usually because the setup is complex and I decide the chances of there being a bug aren’t worth the time. Always run the test.

Other times the memory cheats and I would have sworn I ran a test that should have found a bug, but obviously didn’t.


This question sounds a little like the “there are only 2 types of programming error” statement, one which I personally identify with. So why no similar maxim for tester experiences?

I’m tempted to boil down or re-hash the 4 responses by @andreas7117 above though. They most closely match my best excuses.


A few thoughts…

  1. Focusing on test automation and forgetting about exploratory testing. It’s a fine balance between automation and exploratory at times; if you focus on one, the other may suffer.
  2. Focusing on functional testing and forgetting the non-functional.
  3. Time pressures causing you to cut corners.


Adding on to the discussion here with a few from my experience:

  • So much to test, so little time - I usually work with software that has hundreds of possible configurations, many of them with significant impact on the flow of data through the software. It’s not possible to test all of them, so typically testers cover the most common configuration sets and maybe the most error-prone if there’s time. Unfortunately, that means that sometimes bugs that only show up with a specific configuration don’t get caught.
  • It actually doesn’t work on my machine - It can be challenging to reproduce some intermittent problems, particularly the ones that happen to customers a lot but are never seen in test despite the best efforts of the tester/test team. Sometimes this is because of incomplete information, sometimes it’s because it’s physically impossible for the test team to reproduce the customer’s environment. For example, one customer of one of my previous employers ran the software suite over a wireless WAN that covered several thousand square kilometers. We had no way to reproduce the issues caused by connections dropping intermittently and relied on logging to trace problems.
  • The chaos effect - No matter how well tested or well engineered the software is, customers and users will always find ways to do things nobody ever thought they’d do. I’ve never seen a web application that requires a login and stores user information handle multiple browser tabs well, but people will do that and then some will wonder why things don’t work right. Anything that involves running things in the background will have issues if the process is killed mid-task. Then there’s what happens when a user overloads their computer’s ability to handle requests (this is something I do regularly, so I’m familiar with the results). These… interesting results often get reported as bugs.
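The configuration explosion in the first point is easy to make concrete with a quick count. A minimal sketch, with invented option names and values (any real product of this kind would have far more settings):

```python
from itertools import product

# Hypothetical configuration options, for illustration only.
options = {
    "database": ["postgres", "mysql", "oracle"],
    "auth": ["local", "ldap", "saml"],
    "locale": ["en", "de", "fr", "ja"],
    "deployment": ["cloud", "on-prem"],
}

# Cartesian product of all option values = every full configuration.
combinations = list(product(*options.values()))
print(len(combinations))  # 3 * 3 * 4 * 2 = 72 full configurations
```

Just four settings already yield 72 configurations, and the count grows multiplicatively with every setting added, which is why testers fall back to covering the most common configuration sets rather than all of them.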

As ‘missing’ is a placeholder for many scenarios, I’ll play the game and give a few examples, in random order:

  • Not everyone has the same definition of bug
  • Bug is not reported;
  • Bug is not observed;
  • Bug can’t be explained;
  • Quality is subjective and you don’t know who values what - “Quality is value to some person” (Jerry Weinberg);
  • Not looking for bugs;
  • Not looking hard, wide, deep enough for bugs;
  • Not looking for bugs because they don’t matter;
  • Some companies or managers are ok with releasing features with bugs;
  • The testers are not skilled enough in all things testing related;
  • The business domain, the technological domain, the product, the project, etc. are unknown or not known well enough;
  • The bug is not in the part of the product that’s the responsibility of the tester;
  • The tester is not required to find bugs, but tasked with other things;
  • Bugs are environment- or timing-related, or very specific;
  • The tester is avoided - people release things without asking the tester for an evaluation;
  • There’s not enough evidence to submit a bug;
  • The bug could not be reproduced;
  • The bug is lost in communication; not every manager has time to go through dozens or hundreds of bug reports in documents/backlogs;
  • The bug was found and intentionally released, but when the stakeholder/manager found it they blamed the tester - as they expect all or most bugs to be fixed;
  • Weak description of the bug led to misinterpretation of the problem;
  • The bug is a feature that has not been made available yet;
  • The bug has been reported, but rejected and lost in history until it resurfaced in production and suddenly became important;
  • The bug was linked to a feature that’s being researched by the Business & Product Management so it was not reported;
  • The bug was covered by another bug; when testing the bug-fix there was no deep investigation into further problems;
  • The bug was found, reported, and fixed, but the fix had not been released to production;
  • There was no tester to test the product;
  • There was no responsible tester or test manager for the testing of the product;
  • The product or environment wasn’t testable, observable, controllable, configurable;
  • The bug was related to external factors that were not considered in the tester’s responsibility - example unavailability of Internet Provider, Data Center, External API providers, Database or Server crash or drop;
  • The tester was on vacation and the replacement/temp testers weren’t able to keep up the work;
  • The tester was annoyed about something, so decided to let the bugs slip through;
  • The tester introduced the bug by mistake or on purpose;
  • There was no one to report to;
  • The tester was limited - they didn’t have access to the product, environment, tools, or data to test, or was distracted, or was moved to another higher-priority product/project;
  • The bug was in another system managed by another team or company - a bug they released caused a mishandling in the product that was in the tester’s scope;

These ones sting me the most. To miss something is human. But to realise something was staring you in the face, and that you’d have seen it if you’d been in a better frame of mind… that feels bad.


I bump against the question, as it continues to promote the idea that testers are the ones responsible for finding bugs. Answering more generally for a culture where the team owns quality, i.e. a general why do bugs make it to production:

  • incomplete or vague requirements
  • too much reliance on unit testing/not enough system level testing
  • not considering usability, security, or other unstated but implied requirements
  • team lacks experience to consider things contextually, and instead coded strictly to the requirements they got
  • multitasking and/or context switching, especially when priorities keep changing all the time (echoing @proffy76 above - I agree with this)

“It’s not a bug, it’s just a feature that’s failed far earlier than anticipated.” Yes, that was an actual conversation with a developer.

And you could add “The bug is in an application which it is not the tester’s job to test.” I’ve had that conversation before, as well.


Oh, yes. If I had a dollar for every time I’ve seen a browser bug reported as a bug in my company’s software, I’d have a lot more dollars than I do now.

That’s not including bugs in third party applications we interface with, changes to said third party applications that we weren’t notified about breaking our interfaces, and customers reporting their malware infections as bugs in our software…

In my experience, if something bad happens while a customer is using our application, it’s our fault. About the only version of that I haven’t seen is being blamed for a power outage.


Kate, I was actually referring to either a third-party application the company had paid good money to use but never bothered performing user acceptance testing on, or an application/website that another team in the company was working on but where their testing (if any) had missed a bug (such as a dead link). I suspect there are as many examples of this sort of thing as there are testers!


I think the tester’s role is always to consider scope vs depth.
Given a limited set of time and resources, I could test every function lightly, or I could test the most commonly used functions to their fullest extent.

I think it’s a fine art learning what area to test first, and how deeply.
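One way to make that scope-versus-depth call less ad hoc is a simple risk score per area. A minimal sketch, where the feature areas and the 1–5 scores are invented for illustration:

```python
# Hypothetical feature areas scored 1-5 for likelihood of bugs and for
# business impact; rank by likelihood * impact to decide test order/depth.
areas = [
    ("checkout", 5, 5),
    ("search", 3, 4),
    ("admin settings", 4, 2),
    ("help pages", 1, 1),
]

# Highest risk score first: test these areas earliest and deepest.
ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: risk score {likelihood * impact}")
```

The numbers themselves matter less than the conversation they force: agreeing with the team on which areas are likely to break and which ones hurt most when they do.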


So many things have been named already. But I think one point is still missing: the self-centred perspective.

I was helping out with a US-developed and US-tested product once, and it was full of bugs that stared me in the face but had never registered with the others in years.

  • UTF-8 support and right-to-left languages
  • crappy layout in other, longer languages
  • different browser shortcuts in other languages
  • different date formats
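The date-format point is easy to demonstrate: the same string is a valid date under both the US and the European convention, so a tester who only ever uses one format will never see the ambiguity. A quick sketch:

```python
from datetime import datetime

ambiguous = "03/04/2025"

# The same string parses successfully under both conventions.
us_reading = datetime.strptime(ambiguous, "%m/%d/%Y")  # US: month first
eu_reading = datetime.strptime(ambiguous, "%d/%m/%Y")  # EU: day first

print(us_reading.date())  # 2025-03-04 (March 4th)
print(eu_reading.date())  # 2025-04-03 (April 3rd)
```

Note that neither parse raises an error - the bug surfaces only as silently wrong data, which is exactly the kind of thing a self-centred test perspective misses.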

The most common reason I’ve seen for missing bugs is, to put it simply, that there were too many bugs waiting to be found. If bugs are easy to find, you end up listing bug after bug and at some point shut down instead of looking for the really sneaky bugs.

I encourage my testers to reject a piece of work if it’s so untidy that they’re finding bugs in the basic application of standards (such as the way labels are written, basic validation message text being incorrect, or fields being out of alignment). If the software is that scruffy, they’re going to be distracted by the easy-to-find problems and will most likely miss something crucial.

I’ve recently been looking at a few third-party applications that the company I’m in is using - so much scruffiness that I’m distracted from learning the products by the number of faults and inconsistencies I’m seeing (always a problem for a tester when they get hold of someone else’s product). There should always be bugs, but they should need some effort to find - too much too easy and you miss the important stuff.


To be honest, I’m really surprised to see this kind of particular question on a place like MoT. I thought we were already way past the whole testing == finding bugs == the responsibility of the tester stage…

I mean sure, bugs are also part of my job, but they are definitely a by-product of my other activities and never a goal. Very much in the same way developers encounter bugs during their day-to-day activities.

I’m part of a team and if we ship something that’s crap it’s on the whole team. It doesn’t matter where we might have missed something. It matters how we’re going to make sure that it doesn’t happen again.

Again, bit surprised and maybe even disappointed in MoT for questions like this. What’s next, asking how you make sure your software is bug free?


Hi @hylke, sorry you feel that way. It’s a question that a lot of our Slack members have been asked by their team/management, so giving it a place on The Club that these people can point to when they’re asked is a positive thing to help them.

As you have pointed out, quality is everyone’s responsibility, but unfortunately there are still a lot of people on teams who aren’t quite at that belief yet, so we still have to do all we can to help them :slight_smile:


Fair enough, I might have been a bit short-sighted on this one. Didn’t realise the huge amount of work we still have to do in order to get organisations into the right mindset (team responsibility, etc).

If anybody needs some pointers/ammo to help with that, you know where to find me now :grimacing:


First of all - because of the seven testing principles, and especially #3.

And bugs can also be so many things (defects, misunderstandings, bad requirements, bad usability, etc.). It’s more interesting to ask why the TEAM misses discovering DEFECTS (i.e. where the code is actually failing)…!