Do testers need to know how a feature is implemented?

To test a feature better, do testers need to know how it is implemented? Do you see any advantages or disadvantages in this approach? Do you have any examples that show whether it helps or not?

PS -
I can think of one disadvantage already. If the underlying implementation changes a little and a tester does not know about it, then they might miss bugs.


Once you know about the implementation, you can test (check) for known failure domains common to that tech stack and to the implementation pattern. For example, problems such as stress-test failures that a particular stack or algorithm suffers from badly (you do stress test, don't you?). You can also test for environment changes and impacts, and for redundancy and resilience, using tried and tested tooling; you can even test how the cost to deliver or maintain the system could spiral if it is abused or hacked. Testing for known security vulnerabilities in the stack is another example. These are all good things, but they are all things the coder should really be testing too. We sometimes term this white-box testing, a term I hate, since the box often looks more like an entire grocery trolley.

None of these, however, are testing the thing the customer sees in front of them. So sometimes white-box testing is a distraction from testing the entire "actual thing" that is being sold.

/edit: So I guess I’m saying it is helpful, but it’s even more helpful to use this knowledge as a good way to separate your testing activities.


I only see advantages. It’s like reading the code/changes in the PR.
If you can see they use a specific library such as ImageMagick, then you can start looking for ways to exploit that system.

So knowing the frameworks and libraries in use will bring advantages to your testing.
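As a hedged illustration of that point (the flagged version set below is invented, not a real advisory list), knowing the stack bundles a specific library makes cheap checks possible, such as comparing the deployed version against releases with published vulnerabilities:

```python
# Hypothetical sketch: once you know the stack bundles a specific library,
# one cheap white-box check is flagging deployed versions that match a list
# of known-vulnerable releases. The set below is a placeholder, NOT real data.
KNOWN_VULNERABLE = {"6.9.3"}  # illustrative only

def is_flagged(version: str) -> bool:
    # Strip any build suffix like "-10" before comparing the base version.
    base = version.split("-")[0]
    return base in KNOWN_VULNERABLE

print(is_flagged("6.9.3-10"))  # True: base version matches the flagged set
print(is_flagged("7.1.1-15"))  # False
```

In practice you would feed this from a real vulnerability database rather than a hard-coded set.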

I think we're kind of describing the difference between black-box and white-box testing here?


Both @conrad.braam and @kristof seem to be pointing towards ways they can use the extra knowledge to break the system, but in my mind, the big advantage is that knowing implementation details, and even better, the design, pushes testing way left. If I'm reading the code and can see that in all the code paths except one, they're doing a .lower(), that's a bug that can be reported and fixed.
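That `.lower()` observation can be sketched as a minimal example (all names hypothetical) of the kind of bug that only reading the code reveals: every path normalizes case except one.

```python
# Hypothetical sketch: three code paths handle an email address, but one
# forgot the .lower() normalization the others apply.
def register(email, db):
    db[email.lower()] = True

def lookup(email, db):
    return db.get(email.lower(), False)

def unsubscribe(email, db):
    # Bug: no .lower() here, so mixed-case input never matches stored keys.
    db.pop(email, None)

db = {}
register("Bob@Example.com", db)
unsubscribe("Bob@Example.com", db)    # silently does nothing
print(lookup("Bob@Example.com", db))  # True: the user was never removed
```

From the outside this only shows up with mixed-case input on one specific flow; in the code it is visible at a glance.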

Even for testers who can't read code, knowing the design and implementation is super useful, since they can talk about it with the devs and raise what-ifs, edge cases, etc. early on. A trust-but-verify attitude is definitely needed when you can't read code and the devs are saying "yep, already covered that", but there are still plenty of advantages.


@raghu I think what you are really asking is, “Is it necessary to know implementation details for good testing?” also, “Can you test without knowing implementation details?”


I’m just wary of slipping into the “testers need to learn to read code” camp. We already have so many camps in the online testing world pressuring us to gain new skills. If you are a good tester, you already use context to guide you and to decide which skills need “upping”, and you will learn what you must to provide value. I have seen damn good testers who don’t know how to “code”; sure, I encourage them to learn to understand code. But defects that occur at the system and environment level often have less to do with code inspection and component inspection than we might think. Having fewer incentives to get distracted by a world you don’t really understand is sometimes an advantage. There are no absolutes. “Need to know” helps, but it’s not the only way in every context.

So, metaphorically, having a person in “that field” looking for lost sheep is good, because if testers stop looking in a specific field for bugs, guess what! Further, if people go bug “hunting” only in the code, they are rarely going to find all of the system-level defects; even though all the bugs are ultimately in the “code”, that’s not the way to discover all of them. Bugs always seem to group up and congregate, almost like wild animals. So if a specific watering hole presents a chance to find defects, keep coming back to that hole until the process gap that caused it closes up.


What a brilliant question, @raghu.

I don’t think a tester needs to know how something is implemented to test it.

There’s an excellent opportunity here for a tester to run a time-boxed exploratory testing session with the goal of discovering useful information about what they could explore next.

It’s kinda like a high-level reconnaissance mission to document observations about the thing to be explored. The document could include things like potential risks, further questions, problems, and good stuff to call out.

It might be at that point where the documented questions lead to a desire to learn about implementation. For example, you might observe and note:

I’ve seen the same generic error message in several places; I think it would be more helpful if the messages were more specific. How are errors handled? How does the application choose the right error message? Would you mind sharing a list of all error messages?
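A hedged sketch (all names invented) of why the same generic message might keep appearing: many distinct failures can collapse into a single catch-all instead of each mapping to specific text.

```python
# Hypothetical sketch: only a few error codes have specific messages;
# everything else falls through to one generic catch-all string.
MESSAGES = {
    "invalid_email": "Please enter a valid email address.",
    "expired_card": "Your card has expired.",
}

def message_for(error_code: str) -> str:
    # Unmapped codes all collapse to the same generic message.
    return MESSAGES.get(error_code, "Something went wrong.")

print(message_for("expired_card"))  # specific message
print(message_for("network_down"))  # the generic one the tester keeps seeing
```

Asking the developer for the real mapping, if one exists, answers all three questions at once.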

Explore > Capture notes, including questions about implementation > Debrief with (assuming here) a Developer and ask questions > Feed answers into next exploration session > Repeat


I think it’s important to know how a feature or fix was done but not necessarily down to the code lines.
So, for example, we had a code change fixing a date calculation issue. I tested the fix, the issue was gone, done. The problem was that it wasn’t some code specific to this scenario that was changed, but our main date routine, and the dev had messed that up. Had I known the fix was done by touching the date routine, I would have planned a regression test on everything concerning dates. The debacle led to the rule that testers must be given such information to test properly. It had to come to this for us to get the info, but at least now it’s fine, although the team is still working on fixing many fundamental problems from the “too technical for you” phase. What we usually do is grey-box testing: just enough knowledge of how things work to make the best test cases that we can.
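The hazard described above can be sketched (with invented numbers, not the actual routine) as a shared date helper whose “fix” quietly changes the answer for every other caller:

```python
from datetime import date

# Hypothetical shared routine used by invoicing, reminders, reporting, etc.
def days_between(a: date, b: date) -> int:
    return (b - a).days

# A "fix" for one scenario: someone makes the count inclusive, which also
# shifts the result for every other feature that calls this routine.
def days_between_fixed(a: date, b: date) -> int:
    return (b - a).days + 1

a, b = date(2024, 1, 1), date(2024, 1, 10)
print(days_between(a, b))        # 9
print(days_between_fixed(a, b))  # 10: every other caller now drifts by a day
```

Knowing the fix landed in the shared routine, rather than in scenario-specific code, is exactly what tells you to regression-test all date features.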

I can’t imagine a scenario where having this knowledge would have been detrimental to our tests. We never use it to rule tests out “as nothing was touched”.


I’d run a self-assessment of which testers are involved and how your company works. For me, implementation details have been vital to my success as a tester. I say that because my various Quality Engineer roles have required that I have domain knowledge about the tool and can discuss and explain that knowledge to others.

Imagine your company manages an inventory app for grocery stores. One feature you’ve worked on is the ability to scan a UPC and get all of the locations in the store. Sales is doing discovery with a whale of a grocery chain. You, knowing the implementation details, can surface something like “well, end caps aren’t implemented the same way as shelves; if we want to do this for Client X, we have to address that or do a big lift”. You’ve spotted a risk well ahead of the actual code development just by having that information to share.
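The end-cap risk can be sketched (data shapes invented for illustration) as two differently modelled location types, where the lookup feature only knows about one of them:

```python
# Hypothetical sketch: shelf locations and end caps are stored differently,
# so a "find all locations for a UPC" query that only reads the shelf
# structure silently misses end-cap stock.
shelves = {"012345678905": ["Aisle 4, Shelf B"]}
end_caps = [{"upc": "012345678905", "label": "End cap 7"}]  # different shape

def locations_for(upc: str) -> list:
    # Current implementation only queries the shelf structure.
    return shelves.get(upc, [])

print(locations_for("012345678905"))  # ['Aisle 4, Shelf B'], end cap missed
```

Surfacing that mismatch during discovery is the “big lift” conversation described above.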

As always, there’s a risk/reward trade-off. Ideally, the code and tests will give you those details without you having to hold them in your head, but your company might not have a human-readable test suite that makes it easy for product owners to get those answers themselves. Furthermore, knowing more about one implementation might pigeonhole your thinking and cause you not to test other areas, because you know so much about that one area that you forget the small other places the code touches.

So, to reiterate: do a self-assessment. How easy is it for business teams in your org to get answers to those risks? How much are you allowed to learn about the implementation details? How senior are you relative to the people you talk to? Are there bugs or risks you might have flagged had you known a detail, or had you been able to share something you knew ahead of time? Lastly, do you just want to know? It’s OK to be curious. Your growth is also of value.


Are we also talking about how something is implemented at a high level (component architecture), or do you mean the low-level nitty-gritty details in code (whether for the whole system or a particular component in a complex system)? There’s a difference between those, and both can be considered understanding how something is implemented; it’s just a matter of depth.


Somewhat related but also a bit off topic: these may also serve as examples of where some understanding of the system in a more developer-centric way can be helpful for test automation:
