Do you think developers should be responsible for testing their own code?

Full disclosure: I asked ChatGPT "list 10 controversial conversation starters for software testers", and this was number two on the list:

Do you think developers should be responsible for testing their own code? What are the potential risks and benefits?

I think this is a fascinating question, because there are a number of myths it alludes to. One is that "devs shouldn't test their own code, it's like marking your own homework". This is wrong on so many levels, not least because, before handing in work to be marked, it's very good exam or assignment technique to proofread and critically assess your own work.

I think the risk of developers not testing their own code is that they will very easily build the wrong thing, or miss many classes of bugs they could have designed out early, leading to expensive, repetitive cycles of bug fixing.

But hang on, do any developers really not test their own code, AT ALL? I seriously doubt it, although they may have different names for it, and they likely don’t formally record it.

So, I've got some better questions, and some opinions of my own:

  1. Should the primary developer for some code be the only one to test it?

I think this comes down to experience and risk. Is the developer experienced in testing their own code, and in overcoming the obvious blindness that comes with work you're close to? What is the risk if the code is released unchecked and makes its way to production?

It may be that the developer is experienced; or, even if they are not, the code may be in a low-risk environment where rollbacks are trivial or downtime is not important. Maybe the automated checks are also very strong.

Realistically, it is a very good idea to get the code tested by some other humans, although this might also be done through pairing or ensemble practices, rather than after the fact.

  2. Must all code be tested by someone else before it is deployed?

Again, this comes down to risk; it is a jolly good idea most of the time.

  3. Can developers learn to test their own code better, such that we reduce the need for external testing?

100%. I know this to be true: I've worked with developers who have built up testing skills and test their own code effectively. This becomes even more powerful when combined with pairing or working in an ensemble, either with other developers or with skilled testers.

So, to come back to the question, do I think developers should be responsible for testing their own code?

I think developers should be testing their own code, and I think responsibility for the code, and its quality, should be shared across the team; ideally a team that includes at least one person who is highly skilled in testing.


What if I assert that product owners are ultimately responsible? Does that change the thorny question the LLM bot found?

Context is very powerful. When we say "test their own code", do we mean 'unit' AND 'system' test, or just the former? For me, problem ownership (not responsibility) in any system needs to be delegated downwards as far as possible to reduce micro-management. Doing so builds trust; we know this implicitly, and trust is often a key driver of a team's productivity. The fact that we cannot measure trust the way we measure code coverage, as merely a function of branch complexity, is perhaps a powerful sign or tell.

Do developers test their own code? Sure they do; if they did not, they would come into the office every morning to lots of tiny bugs, so of course they try to test. Sometimes, though, testing well is harder than it needs to be: often the product is architected or configured in a way that prevents testing with the tools the developers are capable of wielding.

Just today, for example, I tried to fill in a web form. The form is terrible, and I cannot believe it was even touched by a tester; yet I'm pretty sure it was tested, even though it looks terrible and is impossible to submit unless you know exactly how to fill it in according to their defined rules. For me, that's the big reason not to let coders be the only people who test the code, and it probably always will be my view on coder pain. Trusted, but not solely responsible. In my mind, ultimate responsibility boils upwards, not downwards, to the named product owner.


I think that yes, developers should test their own code. And by that I mean test, not check (aka unit tests).

Context: my team builds applications that are installed on customer PCs; there are no quick rollbacks, and pushing a fix out to customers takes days.

My definition of a minimal “developer test” is:

Executing the changed features in a product artifact that was built by a CI system.

This reduces the risk of overlooking required changes to the build process after a feature change (running the application from the IDE always works); it forces the developer to interact with the feature the way a customer would (no shortcuts via clever debugging tricks allowed); and having to start the application outside the IDE is a clear signal that it's time to switch from the "developer" mindset to a "tester" mindset.
The minimal goal I give the developers for these tests is "it takes me more than three clicks to find a problem after you tested it". That may be setting the bar low, but it gives them a clear indicator of when to expect complaints from me, and makes it clear that doing the absolute minimum of testing is not sufficient.
And I will definitely complain if it takes me three clicks or fewer to find an issue; then we will analyze why they didn't come up with this idea themselves, and try to improve their knowledge.

Finally, the developer tests are needed because we are two testers for forty developers. Having us do all the testing doesn't work, and skipping tests completely is too risky in our context.


Simply yes. Shift left and all that.
Their unit tests should be checked with mutation tests, and when their change is deployed to DEV from their local environment, or even to QA, they should IMHO still go through the flow to see that everything works.

I'm not expecting developers to do a full regression, but they should at least check that all the acceptance criteria are okay.
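For anyone unfamiliar with mutation testing, here's a minimal hand-rolled sketch of the idea. The function and test suites are invented for illustration; real tools (e.g. PIT for Java, mutmut for Python) generate and run the mutants automatically:

```python
# Mutation testing in miniature: deliberately "mutate" a function
# (here, flip >= to >) and see whether the test suite notices.
# A mutant that passes the tests reveals a gap in the tests.

def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18  # mutation: '>=' flipped to '>'

def weak_suite(fn):
    # Only checks values far from the boundary.
    return fn(30) and not fn(5)

def strong_suite(fn):
    # Also checks the boundary value, where the mutant misbehaves.
    return fn(30) and not fn(5) and fn(18)

print(weak_suite(is_adult_mutant))    # True  -> mutant survives: tests miss the boundary
print(strong_suite(is_adult_mutant))  # False -> mutant killed
print(strong_suite(is_adult))         # True  -> real code still passes
```

The same idea scales up: a mutation tool applies hundreds of such operator flips and reports which mutants the unit tests fail to kill.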


I might be on my own here in thinking this isn't that controversial?
I'd suggest that in a lot of situations developers are responsible for testing their own code. If they are in a situation where that isn't the case, I'd be concerned about overall quality. My reason for saying that is that I can see it becoming a throw-it-over-the-wall situation. Never good.
There are many ways devs test their code as they create it: unit tests, debugging, etc.


I wonder whether it would be more controversial if I'd asked the same question 10 years ago, or asked a group of developers. :slight_smile:

Good point, although I'd say there would be a split between those working in agile teams and those in siloed teams. Thinking about it, that might be the case now :thinking:


I often fall into the trap of thinking the way I'm currently working is the norm for everyone. I expect plenty of companies still have external test teams, not to mention consultancies and agencies.


Just a few points to add

I'm with Andy: I don't think there's any way to check code works without testing it, so a certain level of testing is a given. I'd be very worried about a developer who just writes code and doesn't execute or debug anything. Devs can also get into the habit of copying and pasting code, and sometimes they don't actually understand that code well, which can make the dev a poor candidate for testing their own work. All red flags.

I think the issue is often more about the level of test coverage, what level of testing they stop at, their experience, and the amount of code: maybe they've built or written too much code to be able to test it single-handedly, or they are new to the framework being used.

Also, sometimes devs don't have the domain knowledge to test their change thoroughly enough. With APIs, for example, devs are happy to check that their endpoint can be hit and transforms some data, but they don't have enough knowledge of the systems, databases, and data integrating with that API to do the more thorough test.

I do see some devs tend to focus purely on their changes, rather than the bigger picture, so their testing is limited.

Good devs do test a lot though and push to get feedback on their work / decisions.


Our practice is that once a developer has completed development, we do the development test as peer testing, which reduces issues. Asking the person who wrote the code to test it is not the most effective way.


Something else that helps is demos.

Whenever a developer has created a piece of the user story that is available for a demo, they show it and people can give feedback straight away. Eventually, when it's finished, the devs demo their work on the dev environment before it goes to QA.

Often these demos uphold Murphy's Law: stuff will go wrong, or people will ask "what if you do X or Y?". Which is amazing, because you'll knock out some bugs before they hit the QA environment.

So in some sort of way this is also a developer testing their code, but with a few extra eyes on it :wink:


Yes, developers should test their own code. However, I would also add that in some situations it is better if code is tested collaboratively.


I think everyone should check their work first before giving it to someone else. Otherwise the other person’s time is wasted with obvious mistakes.
And, yes, there are devs who perform zero checks/tests on their code before handing their stuff to testers. They all seem to be working on my project :sweat_smile: It's not like I expect a full test suite, but the main use case should work.


I've seen this loads of times, and while it initially confused me, pairing with devs I've realised how easy it is, when building anything but the smallest feature, to fix one bit that breaks everything just before the last commit, and miss some important tests.

Largely, with the understanding that there are exceptions to every rule:
Yes, devs should test their own code.
No, they shouldn't be the only ones doing so.
Yes, I will lose my s**t if code that doesn’t build, or pass the main test case, gets through development and a code review. :slight_smile:


I think this shows how dangerously weak ChatGPT can be. I'm a big advocate for using AI to benefit testing, speed things up, learn, get signposted, etc., but this is a ridiculous notion, even in the context of "give me a controversial conversation starter".

Of course developers should test their own code, just as a plumber would test their own taps, or an electrician would test their own light switches. You're telling me there are developers out there cutting code, never running it, and leaving it to someone else? Absolutely no way.


An AI walks up to a bar counter.
Barman: What can I get you?
AI: What is everyone else having?

I'm with @geoffd on this topic. Testing the code is not nearly as valuable to the business if the business does not know what it wants to ship in order to make even more money; fixation on blame and on code is perhaps our own rabbit hole.

My personal view is that developers shouldn’t be the only people testing their own code. It’s very easy to test that your code does what you built it to do - a trap I’ve fallen into with my automation code more than once - and a lot harder to detach yourself from something you’ve made in order to find as many weak points as a user could plausibly find (and particularly difficult to make the jump from “user” to “malicious user out to wreck your stuff”).

When you add that to the fact that many developers tend to know what they work on deeply, but may not be familiar with the many different ways to access and interact with what they worked on, you can get… interesting results.

So, yeah. Devs should test their code. So should as many other people as is economically and logistically feasible.
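To make that trap concrete, here's a small invented example: the author's test exercises exactly the happy path they designed for, while a user's inputs slip straight past it (the function and its cases are hypothetical, purely for illustration):

```python
def make_slug(title):
    # Intended behaviour: lowercase the title, join words with hyphens.
    return "-".join(title.lower().split())

# The author's test: mirrors the case they had in mind while coding.
assert make_slug("Hello World") == "hello-world"

# An outsider's tests: inputs the author never imagined.
assert make_slug("") == ""        # empty title produces an empty slug
assert make_slug("   ") == ""     # whitespace-only collapses to nothing
assert make_slug("a/b") == "a/b"  # '/' survives, awkward in a URL path
```

All four assertions pass, which is exactly the problem: the code "works" by its author's definition while still producing empty or unsafe slugs a fresh pair of eyes would question.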


Hard to add anything here, but I'd say devs do test their own code: as they write it, checking that it compiles, correcting errors the IDE picked up. So what we're really asking is in what way, and to what degree, devs should test their own code, and that ties into where their blind spots and weaknesses are, and what they can leverage from skilled testers.

I'm in bed, so the rest of this is just rambling, but hopefully it inspires thought in its weaknesses.

Bugs exist because of assumptions. We are powered by heuristic systems and incomplete models, then made confident enough by ego to live by our faulty decisions, so that we can get through the day without being paralyzed by micro-decisions. If we can identify the nature of a dev's assumptions, perhaps we can find out how best to help them. It can be hard to examine the possibility of failure when you've written something to succeed, with limited resources of attention and a focus biased towards algorithmic thinking at the expense of the holistic. Devs should test in their own focused world, then kick the tires on the defocus, so I don't have to immediately give it back when it falls over. The rest is probably dependent on context.