Developers doing more (all) testing

It’s quite a trend, as far as I can tell: many companies are thinking about a “new”, “better” way to improve their software development processes and save money/cut budgets. The idea is to have fewer QA/testers and let devs do more testing. QA testers would then focus on making sure everything is done right and on planning the testing strategy.

How do you think this change will affect the quality of software and the efficiency of the software dev process? Is it really effective in terms of keeping high-quality standards?
What are the main challenges and benefits of having devs do more testing?
From a budgeting perspective, do you think reducing the number of QA professionals and having developers take on more testing tasks is a cost-effective approach?
How do you think this shift will affect developers’ workload and productivity, release frequency, and the number of features delivered?

In my experience, devs are more expensive and might be less focused on testing, which could affect quality. This could also reduce the number of releases and features, which is bad for business.

So what is your opinion? Have you noticed this tendency or have you been affected by this approach?

3 Likes

A sunk-effort fallacy: asking coders to be their own policemen is going to require a lot of process rigour that many teams cannot afford. If your product is not mission-critical and does not have to meet any standards-body requirements, then sure.

That is just my opinion, and I hope it clears up where the costs of software in any compliance environment have to lie. Not with the coders: customers do not receive code, they receive packaged and integrated functionality in the environment of their choice. Every small company is trying out variations of this without incorporating their business constraints, which is probably a good experiment for them.

4 Likes

I am a developer on a team without testers. We do have a central testing team, who support our testing efforts if needed, but they also support about 6-10 other teams. I’d judge our quality as pretty good overall. We do have incidents in production, but they are not frequent and we are good at dealing with them.

I personally think that having no testers on the dev teams is not the smartest way to work, but sometimes I wonder whether we would have become as good at CI/CD if we had had more testers at our disposal.

Overall, I think making testers a scarce resource can actually help to improve quality and make development far more efficient, as devs develop other means to ensure quality… but that’s an effect you only get in the loooooong run!

3 Likes

This movement is similar to “Let’s turn every tester into an automation engineer.”
While the general idea is not bad, this approach requires developers who are mature enough to understand the importance of testing, plus special training.
In the end, we would live in a world where software engineers do business analysis, coding, testing, deployment, monitoring, operations, etc. However, it is hard to find an engineer who can do all of those things to a decent standard.
And their salaries will definitely increase.

2 Likes

It doesn’t matter whether you’re talking about a bridge (Tacoma Narrows), an aircraft (the Comet) or software (banks, supermarkets, healthcare, self-driving cars, etc.): things made by one human need to be tested by another.

Developers marking their own work, with no tester at the release gate, is a recipe for disaster and is deeply linked to senior leadership who have bought into test automation as a replacement for humans and the shift left movement.

3 Likes

If you look at things purely from an efficiency standpoint, having testing as close to the product code as possible makes sense; for me it’s by far the most efficient way of working, particularly with automation at all layers.
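To make “automation at all layers” a bit more concrete, here is a rough sketch of my own (not the poster’s actual setup), assuming a Python codebase using pytest; the marker names and function are invented for illustration:

```python
# Hypothetical sketch: the same repo holds product code and tests at several layers,
# split by (invented) pytest markers so CI can run the cheap layers first.
import pytest


# --- product code in the same repository --------------------------------
def normalise_email(raw: str) -> str:
    """Tiny piece of product logic used by the example tests below."""
    return raw.strip().lower()


# --- layer 1: fast unit checks, run on every commit ---------------------
@pytest.mark.unit
def test_normalise_email_strips_and_lowercases():
    assert normalise_email("  Alice@Example.COM ") == "alice@example.com"


# --- layer 2: integration checks, run before merge ----------------------
@pytest.mark.integration
def test_signup_stores_normalised_email():
    # A real suite would wire the service to a test database;
    # a dict stands in here so the sketch stays self-contained.
    users = {}
    users["alice"] = normalise_email("  Alice@Example.COM ")
    assert users["alice"] == "alice@example.com"


# --- layer 3: a thin end-to-end slice, run on a schedule ----------------
@pytest.mark.e2e
def test_signup_end_to_end_placeholder():
    # Placeholder: a real e2e test would drive the deployed system via its UI or API.
    assert True
```

A pipeline could then run, say, `pytest -m unit` on every commit and the slower markers in later stages, which is one way devs keep fast feedback while still covering the higher layers.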

I’ve worked with teams where there were no testers and they did a really good job, so I know it can be done.

If your testers are just doing scripted testing, whether automated or fairly rote hands-on work, it’s likely you will get more benefit from devs picking up a lot of this, with a few exceptions.

The main exception is where regression is your product’s main risk. If developers are already struggling to firefight that risk, you might sacrifice some efficiency and pass the regression risk over to a separate group. That creates more distance, which is less efficient, but it does allow developers to focus more on product code.

The other important element is to make it purely a question of risk. We often use what we would call hands-on, highly technical, risk-based testers. Rather than following scripts, these testers focus on doing deeper testing around product risk, so they cover a lot more unknowns and generally have more specialisation in risk investigation than the developers.

When it’s framed as a risk discussion: if the risk is low and you feel the team already knows everything about those risks, or you are only interested in known risks, you can often pass the testing to developers. That’s rare, though, exceptionally rare, because software development builds something new every time.

New things carry risk and unknowns. Developers may be great, but they are still human, and that is a fallibility risk in itself. So, almost always, unless something else is dysfunctional or you are only using testers in a limited way, a tester will add value to the team and the product.

2 Likes

In my last role, the whole company was built on a foundation of quality first, which made the tech stack great and production issues minimal. We had four core values: we are customer-obsessed, data-driven, fast drivers, and team players. These four values lent themselves quite well to quality, as they were tenets held in high regard in the software development community there; things like being driven by data and enabling fast feedback in our work were written and spoken about a lot.

This meant that:

  • Our Quality Engineers championed and coached the team on quality matters.
  • People over process
  • Exploratory testing, threat modelling, testing toolbox bootcamps (QE’s were always the testing specialists)
  • Everyone on the team tests, checks, and validates
  • Everyone contributes to quality
  • The Quality Engineer, whilst hands-on, also coaches and empowers everyone on testing methodologies, good practices, good experiences, working based on risk and value, etc.
  • Quality Engineers helped the team(s) in whatever way they thought would bring the most value to the team and the team’s work.
  • Strongly promoted pair or ensemble/mob programming, including Quality Engineers, so they could help/coach the team on quality and test scenarios during development.

There was literally so much: QEs were never excluded from meetings, testing started at ideation/requirements and design, and it was continuous, shifting both left and right.

There was so much more than this that we looked at for overall product health. There was a great sense of autonomy and support.

5 Likes

Where I work, we have teams without test/QA and teams that follow the traditional approach. We also have separate automation teams and test/QA people within teams who automate, with one automation team owning the framework. We have products that are 20 years old and products that are one year old.
Here is the interesting thing: it all works.
Or maybe I should say it works in a fashion. I don’t believe that there is an entirely correct answer. I think you have to decide what ‘good’ looks like and then aim for that. When you miss, you course correct. But you have to be open to hearing what is wrong and what the solution might be.
Remember, what works today may not work tomorrow.

2 Likes

I work as the sole tester in a team of eight devs. The general understanding we have is that the dev is responsible for making sure they cover all the use cases written in the document.
It is then up to the tester to further test the feature and report bugs. There is still room for some shift left by including the tester in the design and refinement phase.

3 Likes

I’ve seen this a lot in the industry: teams cutting QA resources to cut corners and also asking QAs to code.

In my opinion, it’s not a good practice. The team really needs an unbiased and practical tester who can think from the Business / Customer POV and test accordingly. Especially when the timelines are tight.

It’s asking too much of a developer to test his/her own code with an unbiased testing mindset. It could be possible when the time to market is relaxed and the timelines are not tight.

3 Likes

Continuing on, there is a discussion on LinkedIn with some additional ideas and thoughts :sweat_smile: (check the comments too). Join the discussion there or share your thoughts in this thread, because this post may also spark new ideas or encourage someone to share their experience :wink: :blush:

Pros and cons in a nutshell:

:heavy_check_mark: Pros:
1 - Makes processes more efficient and speeds up CI/CD cycles; fewer delays between development and testing.
2 - Developers gain testing skills, leading to a better understanding of potential issues and overall code quality improvement.
3 - Fewer dedicated QA testers lower costs (technically yes, but actually no).
4 - Prevents costly fixes later on - smoother project timelines and fewer disruptions.
5 - Fewer dedicated QA testers means smaller teams, which are easier to manage (technically yes, but actually no).
6 - Devs usually have better tech skills and can perform complicated tests more efficiently.
7 - Easier to support the testing framework.

:rotating_light: Cons:
1 - Developers may be overwhelmed, affecting their productivity and focus on development tasks.
2 - Lack of specialized skills and mindset of QA testers, leading to potential oversights and a decrease in software quality.
3 - Balancing coding and testing can lead to burnout.
4 - Developers might overlook broader system issues that QA testers would typically catch - tunnel vision.
5 - Higher salaries for developers because of more responsibilities and more skills.
6 - More difficult to find devs willing to do not only unit testing and some e2e, but all types of testing.
7 - Devs can spend up to 70% of their time on testing, leading to fewer features and less frequent releases; the “simple” solution is to hire more devs.

2 Likes

I don’t generalize and prefer to speak based on the context.
Here are some variables:

  • when does the dev do the testing: before coding, during coding, or after coding?
  • in what sort of timeline do they do the testing: dev time or overall project time? (e.g. does it reduce the time available for tasks from other roles?)
  • which level is the testing done at? mockups, dev. architecture/design, planning, unit, integration, external with scripts/tools, UI/visual?
  • with what scope do they test? to confirm known knowns or to find trouble?
  • what resources are they using to test? And what might they not be doing because they’re focusing on testing?
  • what’s the testability of the product like? is it so limited that only they have access/knowledge to test pieces of the product?
  • why do they do it? did someone demand it, for fun, to follow a coding guide, or to be a team player…
  • what’s their responsibility towards the state of the quality of the product? Do they own the testing or is that someone else’s responsibility?
1 Like

Thank you for your questions. They raise valid points, even if they seem a bit unnecessary in this context :blush:
The topic at hand is a general idea that is quite popular. The narrative of having zero testers/QAs in a company, with devs doing all the testing, is actively pushed in many companies even without a proper understanding of QA :sweat_smile:

So, to generalize (because I’m not considering specific situations here, more like a mindset): everything is the same as if there were a dedicated team of QAs and testers, except in reality there are none and the devs alone are responsible for all the testing. If not all of it, then to what extent would it be reasonable for devs to do testing? That is the idea behind this discussion (a general one, yes), but it would be nice to surface some general ideas and answers :wink: :blush:

2 Likes

I once worked at a company where I actively encouraged the developers in my team to write more unit and integration tests. We’d pair on coming up with test ideas for the integration tests. It was cool. They had the coding skills to implement them whereas I didn’t. So it leveraged our strengths.
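For readers who want a concrete picture, here is a purely hypothetical sketch of what such a pairing session could produce (my own illustration, not the actual tests from that team): the tester supplies the scenarios, the developer writes the code. The names (OrderService, InMemoryStockRepo) are invented.

```python
# Hypothetical sketch of tester-proposed scenarios turned into an integration-style
# test by a developer. Everything here is invented for illustration.
import unittest


class InMemoryStockRepo:
    """Fake stock repository so the test exercises the wiring without a real DB."""

    def __init__(self, stock):
        self._stock = dict(stock)

    def available(self, sku):
        return self._stock.get(sku, 0)

    def reserve(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError(f"not enough stock for {sku}")
        self._stock[sku] -= qty


class OrderService:
    """Component under test: depends on a stock repository."""

    def __init__(self, repo):
        self._repo = repo

    def place_order(self, sku, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._repo.reserve(sku, qty)
        return {"sku": sku, "qty": qty, "status": "confirmed"}


class OrderServiceIntegrationTest(unittest.TestCase):
    # Test ideas from the pairing session: happy path, failure mode, boundary.

    def setUp(self):
        self.repo = InMemoryStockRepo({"ABC-1": 5})
        self.service = OrderService(self.repo)

    def test_order_within_stock_is_confirmed(self):
        order = self.service.place_order("ABC-1", 3)
        self.assertEqual(order["status"], "confirmed")
        self.assertEqual(self.repo.available("ABC-1"), 2)

    def test_order_exceeding_stock_is_rejected(self):
        with self.assertRaises(ValueError):
            self.service.place_order("ABC-1", 6)

    def test_zero_quantity_is_rejected(self):
        with self.assertRaises(ValueError):
            self.service.place_order("ABC-1", 0)


if __name__ == "__main__":
    unittest.main()
```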

The cool thing about that is I got to do more risk-based exploratory testing, to go deeper into exploring risks.

Had a dedicated tester role not been available, I wonder what would’ve happened to the developers and their propensity to take on various testing approaches – including risk-based & time-boxed exploratory testing sessions. :thinking:

2 Likes

Thanks for sharing your experience, Simon :blush: I have experience working in envs where I “motivated” devs to write unit and integration tests. I also set QA goals for unit and some UI test coverage as part of our QA KPIs. I wrote test cases for functional tests that the developers automated. I conducted extensive manual testing, including regression, integration, exploratory, cross-browser and cross-platform testing, and penetration testing. I also wrote and fixed some automated tests. Despite the devs’ involvement in QA and their assistance, we still needed more testers to ensure higher product quality, as we didn’t have enough testing resources to meet the shipment schedule without sacrificing quality. I was the only QA engineer/tester there, and before that the devs did all the testing, and oh boy, it wasn’t good when I joined the team… :sweat_smile:

1 Like

My hot take (especially for MoT) is that testers will cease to be a thing. Testers will start writing production code and developers will start testing. Everyone will be T-shaped.

1 Like

I remember those days in the ’90s.

It worked reasonably well: we knew we were responsible for quality, had nobody to pass work over to, and we coded and tested well.

What changed was that the products became more complex and the target audience grew exponentially. The increased fragmentation and usage risk alone meant that, even though we were doing a really good job at dev and testing, it became clear it was not good enough and a level of specialisation in testing was needed.

The adjustment got a lot of things wrong; in my view it went too far in what was moved away from developers, and it may also have contributed to the introduction of inefficiencies and dysfunctional models.

Testing evolved in many companies to address the downsides, but in my view we are still seeing adjustments to counter some of the things that were lost.

My own view is that it would be a shame to step backwards to that original T-shape; we also did design back then, which is another thing that was later separated out.

Things have changed in terms of tools and tech, but I suspect some of the original things that made the model stop working in the ’90s are still there.

In my view it needs an adjusted model to get the best of how testing has evolved alongside some of the good things associated with developer responsibility for testing.

2 Likes

Developers must take responsibility for their code working.

This is absent from many processes.

Testers are a safety net. Developers are dependent on them. This must change. The job security of testers is less important than the delivery of value.

Defect-heavy software is useless. Traditional testing does not fix bugs: it just makes high quality deliveries take longer.

Catching bugs before or as they are written is more important than slowing things down with lengthy rework cycles.

1 Like

In Issue 481 of MoT’s software testing news, I mentioned an excellent talk by @nat . It’s related to this topic and well worth watching!

This is my scenario. We had testers in each scrum team but now there’s just me as a quality coach.

Has it affected testing/quality? For sure, especially as we didn’t phase it in especially well. However, can developers do an alright job of testing? Definitely.

I’ve worked with a range of testers and developers. Some of my developer colleagues are fantastic testers. Some dedicated testing colleagues weren’t. So when people say “developers can’t test”, I will dispute that.

I often hear “marking your own homework”, but that is very solvable. We test a story immediately when it’s completed, by someone on the team who wasn’t involved in building it. On the odd occasion where that isn’t possible, I can help.

Don’t get me wrong, I still very much see the merit in having dedicated testing specialists in teams. I might return to that myself one day. However, so long as you go about it right, it can work to have devs own quality, with a specialist loitering around prodding them in the right direction.

3 Likes