I keep coming back to the idea that a lot of the real value of testing lives in things that never quite materialise. The risks that feel uncomfortable early, the odd behaviour that does not yet break anything, the assumptions that sound fine until you say them out loud. That ability to sense where things might go wrong before there is evidence feels like a core testing skill, but it is rarely named or rewarded.
In practice, this shows up as noticing weak signals. A flow that technically works but feels fragile. Logs that are noisy in a way that hints at future pain. Conversations where everyone agrees but no one is really aligned. This kind of work is not about execution; it is about reading the system and the people around it. When it goes well, nothing dramatic happens, which makes it easy to overlook.
One challenge is that this thinking often stays in people’s heads. We might log test cases and runs in tools like TestRail, Tuskr or Zephyr, but the why behind our exploration, the risks we were probing, and the assumptions we questioned can get lost. AI is getting very good at execution and pattern detection, but judgement under ambiguity still feels very human. Sensing risk early, testing to learn rather than to confirm, and helping teams feel uncertainty sooner rather than later is where testing often shines.
I am curious how others think about this. How do you develop this ability? How do you make it visible to teams and stakeholders when there is no obvious bug to point at? And how do you talk about this value without it sounding vague or abstract?
Is our real testing value the stuff that never becomes a bug?
Yes, this is exactly what shifting left is hoping to achieve. The cost of fixing something down the line is exponentially more expensive, so the true value of assuring quality is trying to do so as early as possible.
If you can find a problem before something even becomes a ticket, then you have saved everybody’s time in refinement; if you find it in refinement, you have saved the developer and tester time during the following stage as well. Even finding it last minute during regression saves a whole lot of headache and work.
How do you make it visible to teams and stakeholders when there is no obvious bug to point at?
A bug is something that bugs you. There should always be something to point at, even if you’re just pointing at the vibes you’re getting from something. If you’re saying that you think a requirement or acceptance criterion will become a bug when it is implemented, then congratulations, because you’re seeing the bigger picture!
How do you develop this ability?
I think it’s purely experience-based, and it comes from approaching your job with different perspectives instead of keeping your thinking rigid.
A big part of testing is the story. You’re building and telling a story.
That story includes not just what you found, but what you did, how you did it, and why that is valuable. What you could do, but won’t. What you want to do, but can’t. Risks, costs, testability issues. The product status, how you know about it, and why anyone should think you matter.
I think the reason this value sometimes isn’t seen is that this story isn’t being told. Either because a tester doesn’t know how to, doesn’t feel like they should, or because people simply are not interested in hearing it. It may be that testers are dropped into systems that have concepts like reporting already handled - tools that “report” the “testing”, designed by people who neither understand testing nor have any interest in the subject. It could be that people think they already understand testing based on miseducation and vendor sales material.
Solving that is tricky, for sure, but I try to make what I do accessible and advertise that fact, and to have some form of reporting available that goes beyond a claim no person could possibly make (“yes, the software is fine”), so that at least everyone is aware they could learn if they wanted to.
People who are close to the structure of the software, like developers, do see the changes they have to make, so they have to understand the value of what we do to some degree. So there’s also something to be said for asking: value to whom?
Regarding externalising thinking, there are a few ways. Pair testing can be helpful for learning to frame your testing under scrutiny - why you are doing what you are doing, what you’re thinking, what you’re planning to do, why you’re applying your resources here. Note-taking, or recording voice audio with a screen recorder, are others. As you build the skill you can take it into your reporting and make what you do more accessible… but making your audience receptive can be trickier. It may be that taking what you’re not doing to them shows them that you are sampling. Taking them testability questions shows that you’re trying to make the product cheaper and easier to test.
As Cameron mentioned in the comment above, this is what shifting left aims to achieve. But Shift Left comes in two species (maybe more, but these are the two I have met personally). One is when the QA team gets involved in the early stages of the project, hopefully around requirements definition. The other is when there is no QA team any more, there is no distinction between QA and developers, and the development team lead has complete responsibility for quality. Surprisingly (or not), in the second species QA gets many more opportunities to influence and to be “inside the process.”
About bugs: a nice outcome of the DORA research is that teams should not be measured by the number of bugs (this applies to both programmers and QA, so it works in both directions). Measuring quality becomes something other than counting opened bugs.
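To make that a bit more concrete, here is a minimal sketch (in Python, with made-up deployment records) of what reporting quality through DORA-style delivery signals instead of bug counts might look like. The field names, dates and numbers are illustrative only, not a real pipeline.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Deployment:
    """One production deployment, as it might appear in a delivery pipeline log."""
    deployed_at: datetime
    caused_failure: bool                # did it trigger an incident or rollback?
    restored_after: timedelta | None    # time to restore service, if it failed


def dora_snapshot(deployments: list[Deployment], window_days: int = 30) -> dict:
    """Summarise quality via DORA-style metrics rather than a bug count."""
    failures = [d for d in deployments if d.caused_failure]
    restore_times = [d.restored_after for d in failures if d.restored_after]
    return {
        "deployment_frequency_per_week": len(deployments) / (window_days / 7),
        "change_failure_rate": len(failures) / len(deployments) if deployments else 0.0,
        "mean_time_to_restore_hours": (
            sum(restore_times, timedelta()).total_seconds() / 3600 / len(restore_times)
            if restore_times else None
        ),
    }


# Hypothetical month of history: three deployments, one of which failed.
history = [
    Deployment(datetime(2024, 5, 2), False, None),
    Deployment(datetime(2024, 5, 14), True, timedelta(hours=3)),
    Deployment(datetime(2024, 5, 28), False, None),
]
print(dora_snapshot(history))
```

The point is only that these numbers describe delivery outcomes (how often we ship, how often a change hurts, how quickly we recover) rather than how many bug tickets got opened.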
Just to mention, properly trained and with good context, a chatbot could provide sufficient risk analysis.
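As a rough sketch of what that could look like: give the chatbot the story plus the surrounding context and ask it for risks rather than test cases. This assumes the OpenAI Python client; the model name, prompt wording and story text below are placeholders, and a human still has to judge which of the suggested risks actually matter.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical story and context - in practice these would come from your tracker.
story = (
    "As a customer, I can change my delivery address after placing an order, "
    "as long as the order has not yet been dispatched."
)
context = "Orders are processed by a nightly batch job; dispatch status is cached for 15 minutes."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a software tester. Given a user story and context, "
                "list the riskiest assumptions, ambiguities and failure modes. "
                "Do not write test cases; name risks and explain why they matter."
            ),
        },
        {"role": "user", "content": f"Story:\n{story}\n\nContext:\n{context}"},
    ],
)
print(response.choices[0].message.content)
```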
For me, it comes down to empathising with the user experience. My career has bounced from being a user of software products, to a supporter of software products, a developer of software products, a tester of software products, and eventually a manager of teams that develop and test software products. (It’s more complicated than that, but I’ll keep it simple.)
So with all that background I can read stories and look at software while empathising with each of those roles. I can take a step back from the story and the code, and focus on “Would this solve my business problem? How would I achieve the same without it?”
So that background helps me see risk beyond the lifecycle. To communicate that to teams and stakeholders I would:
(a) Be open about the risks I perceive at every level.
(b) Communicate it as risk, not as criticism.
(c) Talk about the risks with a genuine passion that this product could be amazing.
(d) Be comfortable that I could be wrong and haven’t seen all the pieces.
(e) Be thrilled if my conversations prove my theories about the risks wrong.
It takes work to communicate like that, every day. Some days, for example, we may slip into criticism quite naturally, for any number of reasons. But if you keep working at it, it will encourage more open communication around risk from everyone involved.