We had to make trade-offs to ship by a deadline, so we made choices about which areas were more likely to be buggy, or had more business value, and decided to test those more.
Only basic knowledge of a test technique. I once misinterpreted the numbers.
Missing personas. If a user can only use websites and doesn't know about system administrator stuff, what could happen?
No good grasp of the functionality. E.g. the audit log contains the users' interactions with the system. According to GDPR, EU citizens can ask who read their personal data. Are these actions recorded?
Taking it for granted. The test passed, but maybe something else went wrong.
Sometimes I think of the test I should run, but talk myself out of actually doing it, usually because the setup is complex and I decide the chance of there being a bug isn't worth the time. Always run the test.
Other times memory cheats, and I would have sworn I ran a test that should have found a bug, but obviously didn't.
This question sounds a little like the "there are only 2 types of programming error" statement, one which I personally identify with. So why no similar maxim for tester experiences?
I'm tempted to boil down or rehash the four responses by @andreas7117 Andreas Pook above, though. They most closely match my best excuses.
A few thoughts…
1. Focusing on test automation and forgetting about exploratory testing. It's a fine balance between automation and exploratory at times; if you focus on one, the other may suffer.
2. Focusing on the functional and forgetting the non-functional.
3. Time pressures causing you to cut corners.
Adding on to the discussion here with a few from my experience:
So much to test, so little time - I usually work with software that has hundreds of possible configurations, many of them with significant impact on the flow of data through the software. It's not possible to test all of them, so typically testers cover the most common configuration sets and maybe the most error-prone if there's time. Unfortunately, that means that sometimes bugs that only show up with a specific configuration don't get caught.
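One common way to keep that configuration explosion manageable is pairwise (all-pairs) testing: instead of every full combination, cover every pair of option values in at least one test. Here's a minimal sketch using a naive greedy selection - the option names are entirely hypothetical, not from any real product:

```python
from itertools import combinations, product

# Hypothetical configuration options -- illustrative names only.
options = {
    "db":     ["postgres", "mysql", "oracle"],
    "os":     ["linux", "windows"],
    "locale": ["en", "de", "ja"],
    "tls":    ["on", "off"],
}

keys = list(options)
all_configs = list(product(*options.values()))
print(f"full space: {len(all_configs)} configurations")  # 3*2*3*2 = 36

def pairs_of(config):
    """All (position, value) pairs a single configuration covers."""
    return {((i, config[i]), (j, config[j]))
            for i, j in combinations(range(len(config)), 2)}

# Every pair of option values that must appear in at least one test.
needed = set().union(*(pairs_of(c) for c in all_configs))

# Greedy: repeatedly pick the config covering the most uncovered pairs.
chosen = []
while needed:
    best = max(all_configs, key=lambda c: len(pairs_of(c) & needed))
    chosen.append(best)
    needed -= pairs_of(best)

print(f"pairwise subset: {len(chosen)} configurations")
for c in chosen:
    print(dict(zip(keys, c)))
```

For this toy example the greedy pass cuts 36 configurations down to around a dozen while still exercising every pair of values at least once - which is exactly the trade-off described above: you accept missing some bugs that need three or more specific options to line up.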
It actually doesn't work on my machine - It can be challenging to reproduce some intermittent problems, particularly the ones that happen to customers a lot but are never seen in test despite the best efforts of the tester/test team. Sometimes this is because of incomplete information, sometimes it's because it's physically impossible for the test team to reproduce the customer's environment. For example, one customer of one of my previous employers ran the software suite over a wireless WAN that covered several thousand square kilometers. We had no way to reproduce the issues caused by connections dropping intermittently and relied on logging to trace problems.
The chaos effect - No matter how well tested or well engineered the software is, customers and users will always find ways to do things nobody ever thought they'd do. I've never seen a web application that requires a login and stores user information handle multiple browser tabs well, but people will do that and then some will wonder why things don't work right. Anything that involves running things in the background will have issues if the process is killed mid-task. Then there's what happens when a user overloads their computer's ability to handle requests (this is something I do regularly, so I'm familiar with the results). These… interesting results often get reported as bugs.
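To make the multi-tab case concrete, here's a minimal sketch of the usual culprit (all names hypothetical): server-side state kept per *user* while the UI assumes it is per *tab*, so the last tab to open a record silently wins:

```python
# One shared session per logged-in user, keyed by user id -- the classic
# design that breaks as soon as a user opens a second tab.
session = {}

def open_record(user, record_id):
    # Every tab that opens a record overwrites the user's "current record".
    session[user] = {"current_record": record_id}

def save_changes(user, changes):
    # The save handler trusts the session, not the tab that sent the request.
    record_id = session[user]["current_record"]
    print(f"applying {changes} to record {record_id}")

open_record("alice", record_id=1)          # tab A opens record 1
open_record("alice", record_id=2)          # tab B opens record 2
save_changes("alice", {"status": "done"})  # tab A saves -> hits record 2!
```

A test plan that only ever drives one tab per user will never trip over this, which is why it so often surfaces first in production.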
As "missing" is quite a placeholder for many scenarios, I'd play the game and give a few examples, in random order:
Not everyone has the same definition of a bug;
Bug is not reported;
Bug is not observed;
Bug can't be explained;
Quality is subjective and you don't know who values what - "Quality is value to some person" (Jerry Weinberg);
Not looking for bugs;
Not looking hard, wide, deep enough for bugs;
Not looking for bugs because they donāt matter;
Some companies or managers are ok with releasing features with bugs;
The testers are not skilled enough in all things testing related;
The business domain, the technological domain, the product, the project, etc. are unknown or not known well enough;
The bug is not in the part of the product that's the responsibility of the tester;
The tester is not required to find bugs, but tasked with other things;
Bugs are environment- or timing-related, or very specific;
The tester is avoided - people release things without asking the tester for an evaluation;
There's not enough evidence to submit a bug;
The bug could not be reproduced;
The bug is lost in communication; not every manager has time to go through dozens or hundreds of bug reports in documents/backlogs;
The bug was found and intentionally released, but when the stakeholder/manager found it they blamed the tester - as they expect all/most bugs to be fixed;
Weak description of the bug led to misinterpretation of the problem;
The bug is a feature that has not been made available yet;
The bug has been reported, but rejected and lost in history until it resurfaced in production and suddenly became important;
The bug was linked to a feature thatās being researched by the Business & Product Management so it was not reported;
The bug was covered by another bug; when testing the bug-fix there was no deep investigation into further problems;
The bug was found - reported, fixed; but the fix has not been released to production;
There was no tester to test the product;
There was no responsible tester or test manager for the testing of the product;
The product or environment wasn't testable, observable, controllable, configurable;
The bug was related to external factors that were not considered the tester's responsibility - for example, unavailability of the Internet provider, the data center, or external API providers, or a database or server crash;
The tester was on vacation and the replacement/temp testers weren't able to keep up the work;
The tester was annoyed about something, so decided to let the bugs slip through;
The tester introduced the bug by mistake or on purpose;
There was no one to report to;
The tester was limited - they didn't have access to the product, environment, tools, or data to test, or they were distracted, or moved to another higher-priority product/project;
The bug was in another system managed by another team or company - a bug they released caused mishandling in the product that was in the tester's scope;
These ones sting me the most. To miss something is human. But to realise something was staring you in the face, and if you had been in a good frame of mind you'd have seen it… feels bad.
I bump against the question, as it continues to promote the idea that testers are the ones responsible for finding bugs. Answering more generally for a culture where the team owns quality, i.e. the general question of why bugs make it to production:
incomplete or vague requirements
too much reliance on unit testing/not enough system level testing
not considering usability, security, or other unstated but implied requirements
team lacks experience to consider things contextually, and instead coded strictly to the requirements they got
Oh, yes. If I had a dollar for every time I've seen a browser bug reported as a bug in my company's software, I'd have a lot more dollars than I do now.
That's not including bugs in third-party applications we interface with, changes to said third-party applications that we weren't notified about breaking our interfaces, and customers reporting their malware infections as bugs in our software…
In my experience, if something bad happens while a customer is using our application, it's our fault. About the only version of that I haven't seen is being blamed for a power outage.
Kate, I was actually referring to either a third-party application the company had paid good money to use but never bothered performing user acceptance testing on, or an application/website that another team in the company was working on but where their testing (if any) had missed a bug (such as a dead link). I suspect there are as many examples of this sort of thing as there are testers!
I think the tester's role is always to consider scope vs depth.
Given a limited set of time and resources, I could test every function lightly, or I could test the most commonly used functions to their fullest extent.
I think it's a fine art learning what area to test first, and how deeply.
So many things have been named already. But I think one point is still missing: the self-centered perspective.
I was helping out with a US-developed and US-tested product once, and it was full of bugs that stared me in the face but never registered with the others in years.
The most common reason I've seen for missing bugs is, to put it simply, that there were too many bugs waiting to be found. If bugs are easy to find, you end up listing bug after bug and at some point go into shutdown instead of looking for the really sneaky bugs. I encourage my testers to reject a piece of work if it's so untidy that they're finding bugs in the basic application of standards (such as the way labels are written, basic validation message text being incorrect, fields being out of alignment) - if the software is that scruffy, they're going to be distracted by the easy-to-find problems and will most likely miss something crucial. I've recently been looking at a few third-party applications that the company I'm in is using - so much scruffiness that I'm distracted from learning the products by the number of faults and inconsistencies I'm seeing (always a problem for a tester when they get hold of someone else's product). There should always be bugs, but they should need some effort to find - too much too easy and you miss the important stuff.
To be honest, I'm really surprised to see this kind of question in a place like MoT. I thought we were already way past the whole testing == finding bugs == the responsibility of the tester stage…
I mean, sure, bugs are also part of my job, but they are definitely a by-product of my other activities and never a goal. Very much in the same way developers encounter bugs during their day-to-day activities.
I'm part of a team, and if we ship something that's crap, it's on the whole team. It doesn't matter where we might have missed something. It matters how we're going to make sure that it doesn't happen again.
Again, I'm a bit surprised and maybe even disappointed in MoT for questions like this. What's next, asking how you make sure your software is bug-free?
Hi @hylke, sorry you feel that way. It's a question that a lot of our Slack members have been asked by their team/management, so giving it a place on The Club that people can point to when asked is a positive thing to help them.
As you have pointed out, quality is everyone's responsibility, but unfortunately there are still a lot of people on teams who haven't quite reached that belief yet, so we still have to do all we can to help them.
Fair enough, I might have been a bit short-sighted on this one. I didn't realise the huge amount of work we still have to do in order to get organisations into the right mindset (team responsibility, etc.).
If anybody needs some pointers/ammo to help with that, you know where to find me now.
First of all - because of the 7 test principles, and especially #3…
And bugs can also be so many things (defects, misunderstandings, bad requirements, bad usability, etc.). It's more interesting to ask why the TEAM misses discovering DEFECTS (i.e. where the code is actually failing)…!