How do you find valuable bugs?

#❓ Quick Question

How quickly do you know if a bug you've found is high value?

Have you ever dismissed a bug as small or insignificant, only for it to come back bigger later on?

Have you ever found a whopper that turned out to be insignificant?

A comment from @deament about bug bash sessions encouraging engagement and fresh eyes has me thinking. I probably avoid them because I assume they will find lower value bugs that introduce noise. But I could be very wrong.

Maybe we don't need the focus of a charter-based testing mission, or the structure of an ensemble session, to find good bugs? What am I missing out on by not using every available option?


Off the top of my head, I haven’t often known quickly that a bug is high value.

I do remember, however, dismissing a few bugs that seemed insignificant when we investigated them, but later realising that we had only scratched the surface and hadn’t dug deep enough - they turned out to be symptoms of a much larger problem.

It helps when the person who writes the bug has followed some sort of template - it makes investigation a lot easier. Bugs from bug bashes (or even just user-found bugs) can be a lot harder to investigate, since people can word things strangely and don’t tend to clearly show what the issue is. Often there’s just an indication that something is wrong.


There have been so many security bugs where people said ‘Who would ever do that???’ :stuck_out_tongue:
Some bugs have such a small probability of occurring, but such a huge impact, that people often dismiss them.

The story often goes like this:

“Oops we got hacked” or “why did production crash?”

Me: Hey, I logged and escalated that in a ticket 6 months ago, but somebody said it was no priority and asked “Who would ever do that???”

PM: Link me that ticket

Me: +links+

PM: *** sees his own name ***

Me: ***Leaves the room to hide my huge smile ***

I wish I could say this only happened once or twice… :confused:
Doesn’t even need to be a security bug, other bugs are valid also.


Perfect example that quality is a team/organisation effort, and escaped defects are everyone’s responsibility.

Also, the small-probability bugs always annoy me a bit. Say a bug can happen 1% of the time and you have 1 million customers - that’s 10,000 potential customers who could be impacted (based on napkin math).
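That napkin math is quick to sketch out (the 1% rate and 1 million customers are just the figures from the example above, not real data):

```python
# Napkin math: even a "rare" bug touches a lot of people at scale.
failure_rate = 0.01        # bug affects ~1% of customers
customers = 1_000_000      # total customer base

impacted = int(failure_rate * customers)
print(f"~{impacted:,} customers potentially impacted")  # ~10,000
```

The point being that "only 1%" stops sounding small once you multiply it by the size of the user base.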

Back to the original question: exploratory testing is probably where I’ve found the most high-value bugs. What you said about “bugs that seem small but are actually significant” is true, and it’s why I think it’s best to log any bug first.


How quickly do you know if a bug you’ve found is high value?

Depends entirely on context and knowledge.

Firstly, value to whom? A good tester should be looking into the values of the company and people who work within it, and the values of the people who use the product. i.e. test clients. That will help to inform whether there is any threat to the perceived quality of the product.

Have you ever dismissed a bug as small or insignificant, only for it to come back bigger later on?
Have you ever found a whopper that turned out to be insignificant?

During bug investigation I will change my mind on a bug multiple times as I find new information. Knowing how a bug can be replicated, what triggers it, what doesn’t trigger it, where the vulnerability might be, all may change how often I believe it’ll happen, what functionality is affected or blocked, who will suffer, and so on.

Also depends on the nature and complexity of the problem I find. It could be that the cause of the problem is not tied to any particular functionality, like typos.

Bugs are also places for other bugs to hide. If they are fixed, they introduce change risk. If they block functionality, they create workarounds we haven’t thought of, change the workflow in ways we might not predict, or push users onto different input methods with different input validation or data limitations. Because a bug is a difference between what we think the product should do and what it actually does, it also creates work when other testers find it again and have to search the records for its existence (or investigate and report it again). And it may be subject to change risk in unusual ways: depending on its connection to data or functionality, the nature of the risk involved may change as functionality and data are changed or added later.

Bugs found by users also erode confidence in the product’s working functionality. If a company can’t spell “password” correctly on the login page, then how safe is my personal data? If a product behaves as if it’s faulty, I will question its ability to reliably solve my problem. The user’s perspective is not the functional-capability, logical-map one of a developer. I cannot think of anything a frustrated or disappointed user cares about less than the opinion of the development team, and they are the ones who pay for the product, recommend it, review it, discuss it with people and log expensive support tickets.

All I can do as a tester is to report my findings and my evaluation with the information I have available and permit people to make a design decision about it. If I do my best and get it wrong then all I can do is learn from that and try to fold that new information into future testing as I learn what people value and the level of their expectations.

A comment from @deament about bug bash sessions encouraging engagement and fresh eyes has me thinking. I probably avoid them because I assume they will find lower value bugs that introduce noise. But I could be very wrong.

Again, lower value to whom? The advantage of a variation in perspective means that you will learn what people value. Perhaps the functionality wasn’t even seen as buggy until users or support staff point out that it is. You can also find maps of concentration of where bugs are found during this process and identify places for further investigation. The value of the activity will depend on context.

Maybe we don’t need the focus of a charter-based testing mission

I don’t think I understand the question from this point on, because I cannot imagine a more flexibly defocusable structure than testing against charters. There’s nothing intrinsically focused about charters. The first charters I do are usually intake and recon sessions to map out the possibilities and high-level test project issues, with titles like “take a look at this and see what’s up”, that start the adventure towards more informed, focused missions as I begin to form an understanding of the layout, interfaces, data and functionality. Lots of people testing can be a charter, too. A charter is just there to inform the mission of that session and control the sprawl of the investigation.

The costs of using a lot of people are the summation of their time and the work involved in interpreting the thoughts and artefact output of amateurs into a usable format to best learn from the results. Also herding that sort of project is not easy and takes preparation so as not to waste their valuable time and effort. If it’s available, possible and considered inexpensive for the return then I’m all for it.


This fits right in with a now rather elderly post (it was on MSDN, and is now part of their archived content) about perceived quality: Perceived vs. Objective Quality | Microsoft Learn

The gist of it is that if the core parts of the software are rock solid, you can afford to leave bugs and other problems (like poor usability, bad performance, and the like) unfixed for a while, because most of your users will never encounter them. I can personally attest to this, having stumbled across (and reported, with low priority) rather a lot of bugs with potentially horrible side effects, but in such obscure and rarely used parts of the system that I considered it unlikely any actual users would run into them. Then, several years later, during a quiet time once all the high-priority work was done, the lower-priority things got looked at and I tested the fixes.

I’ve certainly had bugs I’ve raised dismissed with “the user would never do that” and within 2 days had to talk the user through fixing their data because they did “that” and messed up the referential integrity of the data sync in the process (the hardest part was biting my tongue on my very real desire to say “I told you so”).

I’ve raised bugs I thought were minor, only to find out they would be utterly catastrophic should they ever occur, so they needed to be fixed to make sure they never did happen. And bugs I thought were major that turned out to be rather less serious than I’d believed.

What I’ve found is that the better I know the software I’m testing and how it’s used, the more likely my perspective of how valuable the bug is will match what the rest of the business thinks.


I like to determine how critical that functionality is to the business, which customers are impacted, what the impact to those customers is, and what the ROI on addressing the bug is. Risk can also be worked into this (high-risk bugs are treated as having a higher impact).
I try to avoid cases where whoever shouts loudest gets their bug addressed before bugs that are more impactful.
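That kind of triage can be sketched as a simple weighted score - purely illustrative; the factor names and weights here are made up for the example, not a standard formula:

```python
# Illustrative bug-triage score: weight business criticality, customer
# impact, and risk rather than who shouts loudest. Weights are arbitrary.
def triage_score(criticality, customers_affected, impact, risk):
    """Each input is a 1-5 rating; a higher score means address it sooner."""
    return 3 * criticality + 2 * impact + 2 * risk + customers_affected

quiet_but_risky = triage_score(criticality=4, customers_affected=3, impact=4, risk=5)
loud_but_minor = triage_score(criticality=2, customers_affected=1, impact=2, risk=1)
print(quiet_but_risky, loud_but_minor)  # 33 13
```

The numbers matter less than the habit: scoring against agreed factors makes it harder for the loudest voice to jump the queue.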
