Why do testers miss bugs?

A few times lately I’ve nearly fallen foul of the simplicity of the change. If it’s only a line or two of code changing, I start by thinking a code review will be sufficient. I can see what the code will do, so it’s fine…

I then exercise the code and realise all is not as it seems: there are the bugs.


Any organization providing QA testing services strives for a quality product. However, testers sometimes miss bugs, and below are some possible reasons:

  1. Time constraints: testing must be done in a limited period of time, and numerous configurations need to be validated. Therefore, testers verify only the most commonly used configurations.

  2. Limited QA personnel within the team: the application grows over time, and hence the QA team needs to grow as well.

  3. Unawareness of the application: new QA personnel testing the application lack complete knowledge of the product.


I get what you’re saying, so it’s great that you shared your opinion. I often still hear “so why wasn’t this tested?” rather than “what can we learn from this?”, even when there’s meant to be team responsibility for P1 customer bugs.


Setting aside for a moment who is missing the bugs, let’s focus on what we, collectively, can do to miss fewer bugs.

Themes for where tricky bugs can hide:

  • Configuration and environment management
  • System integration, with other internal systems
  • Interaction and integration with external systems
  • Changes in behaviour introduced by updating versions of third-party dependencies
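To make the first theme concrete, here is a minimal sketch (hypothetical names, in Python) of the kind of configuration bug that hides from testing: a silent environment-variable fallback that can change without any story or code review:

```python
import os

# Hypothetical config lookup: the silent fallback means environments can
# quietly diverge without any code change or planned work.
def payment_endpoint() -> str:
    # If PAYMENT_URL is unset (say, after an ops-side environment change
    # made outside source control), we fall back to a sandbox host.
    return os.environ.get("PAYMENT_URL", "https://sandbox.example.com/pay")

# In a test environment the variable is usually set, so the fallback
# path, the one customers might actually hit, goes unexercised.
os.environ.pop("PAYMENT_URL", None)
print(payment_endpoint())  # the sandbox default, silently
```

Nothing in the repository changed, no story was written, yet behaviour in one environment differs from another, which is exactly why these bugs feel out of scope for testers.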

Why are these bugs easy to miss? They are often:

  • Considered out of scope for testers
  • Missing from specifications/stories and design or architecture docs
  • Not under control of the development team as a wider group
  • May change without a story or planned work
  • Might change as part of a process that isn’t peer reviewed, is outside of source control and change control*

*Or these processes don’t get involvement/engagement from the development team.


I’m still sticking with my original narrow set of causes pointed out by @andreas7117 , viz.

  • unclear/unknown requirements
  • unknown code changes
  • no understanding of the business
  • no understanding of the technical surroundings/environment

I’m loath to drop the whole dictionary on this one by listing every cause of a P1 going undetected, and I think that is what we are talking about here: P1 = a customer-reported defect that has no trivial workaround. We are all relying on automation to catch most bugs, but take macOS “dark mode”, for example: it isn’t automatically tested in your system, is it? Or third-party interactions: when an overseas cellular network operator takes IPv4-to-IPv6 shortcuts that break your app, it’s fatal, but it’s something you could never have tested for without in-depth knowledge!

Often I just want to stop messing about with long lists, mind maps and threat diagrams. Check out this clip by a fellow called Simon Wardley about building real maps to map a problem space: SEACON:UK 2019 An Introduction to Wardley Maps - YouTube


Why Didn’t We Catch That in QA

Good blog post above, from Michael Bolton.