Software development teams without dedicated QAs? How impactful can it be?

Hey guys. This is one of my favorite topics in the testing area nowadays. I recently read an article from Gergely Orosz about How Big Tech does Quality Assurance, and it prompted many reflections.

I have seen more and more companies going for an approach where the dev teams take the QA role. It is important to highlight the word “role”: even though there is no QA professional inside the team, the process and activities to guarantee quality are not neglected. The whole dev team has a good quality mindset and takes over the quality and testing role by testing the features and developing automated tests.

In some cases, QA specialists still work inside the company, but not embedded in the delivery teams; instead, they work in a platform engineering team that creates tools, strategies, and processes that numerous stream-aligned teams can use. That follows the practices described in the Team Topologies book.

The fact that most Big Tech companies have been moving in this direction is also very interesting from my point of view. Does this approach work better for large organizations, or is it just coincidence?

From the stats shown in the article, it is also possible to see how this change influenced the number of unit and integration tests. When there was a role distinction, most of the tests were concentrated at the E2E level, which is generally where most QA professionals focus their effort but which, unfortunately, also leads to an inverted pyramid with slow and flaky tests. After this change in team structure, with devs taking full responsibility for the testing and quality process, we can see a huge difference in the number of unit and integration tests, which are faster and easier to maintain and can detect bugs earlier in the software development lifecycle.
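
Just to make the pyramid point concrete, here is a made-up example of my own (not from the article, with a hypothetical apply_discount function): a unit test like this runs in milliseconds and needs no browser or test environment, whereas checking the same rule at the E2E level would mean driving the whole UI.

    # Illustrative only: a hypothetical pricing rule and a unit test for it.
    # Tests at this level run in milliseconds, need no browser or environment,
    # and point straight at the failing logic, which is why pushing coverage
    # down the pyramid tends to make suites faster and less flaky.
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()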

So, what is your point of view on this topic? Have you seen many other companies working in this format? In the end, does it help to improve the team’s releases and the quality ownership inside the team?

2 Likes

I think this is the death of testing. Code writers writing code that performs simplistic checks on other code written by other code writers, without a question raised, a philosopher read, or a concern in their heart.

Hang the Navigator

Death to the watchmen, lazy and spare,
Walking from bow to the stern to the bow,
Peering around with their lecherous stares
And brashly and coarsely their warnings avow.

Death to the sailors who plot every course,
Plying their magic, consulting their charts,
Demanding their efforts we all must endorse;
Dripping the poison of doubt in our hearts.

Death to the lookouts, a waste of our loot,
Men without skill breeding fear and dread.
Save our doubloons with a cheaper recruit,
The job could be done by the parrot instead.

Death to the death of adventure and play.
Our crusade for plunder they dare contradict.
Cautions, admonishments, slowing our way
On the path to the dangers I could not predict.

And if the ship ever should be sacrificed,
All its souls lost and the hull run aground,
Don’t fear its loss as we’re paying the price,
As I have the vessel insured to the pound.

1 Like

I see more companies going for an approach where dev teams take the QA role … the quality isn’t neglected.
The fact that most Big Tech companies have been moving in this direction is also very interesting from my point of view. Does this approach work better for large organizations, or is it just coincidence?

I believe that this approach — making teams truly cross-functional and responsible for the quality of the product — should work for any organisation.

Firstly, reducing information loss is a significant advantage. When the QA role is shared within the dev team, communication becomes more straightforward and direct, and everyone holds the knowledge, which leads to less loss of context during handoffs. There’s good research supporting the thesis that co-located teams perform better due to the improved flow of information. See this research for reference.

Secondly, cross-functional teams mean less waiting, because there are fewer handoffs between distinct departments. This principle aligns with Lean methodologies, which aim to eliminate waste in any form, including time.

Thirdly, cross-functional teams reduce in-group favoritism and out-group prejudice. This psychological principle often plays out in organizational settings and can hinder effective collaboration. By integrating teams, we naturally create a larger “in-group”, which should theoretically lessen these biases and improve cooperation.

So, what is your point of view on this topic? Have you seen many other companies working in this format? In the end, does it help to improve the team’s releases and the quality ownership inside the team?

I believe that quality should be everyone’s responsibility, and this sentiment is shared by experts like Ackoff and Deming. This philosophy is not only ethically sound but also has practical implications for software quality: when everyone on a team takes ownership of quality, there is a better chance of creating a holistic view of quality for everyone, which extends beyond merely finding and fixing bugs to improving processes, communication, and overall product value.

I’ve seen many companies doing this, and I’ve seen great QA experts teaching team members everything they know so that the knowledge is truly shared.

3 Likes

I only read up to the Microsoft part.
I think only Big Tech does this (at the moment) because they can afford to.

Here are my thoughts on why:

  • Big Tech doesn’t need to worry about time to market as much as a start-up company does.
  • Big Tech can afford to hire the best devs.
  • Big Tech can weather bad PR from bugs.
  • The software might not be mission critical (in fact, the article mentioned the Windows division still has QA).

2 Likes

I’ve been digging a bit deeper into this to help teams and customers decide if they need a professional tester available to the team. A few key factors came up in those discussions.

If you are doing development as a service, building a product on behalf of someone else, we found a few customers had an unrealistic bias towards the idea of “we are paying the developers to build the product correctly, so why pay extra to cover for mistakes?”. It misses some fundamental understanding of software development in general, unfortunately, but in some cases they could be right.

The following factors though generally need to be considered together.

The biggest factor tends to be risk: is your product low risk, do you already know everything about its risks, is it fairly basic and contained? On the other hand, if there is a lot of risk and unknowns, you’re going to want someone who can cover those.

Team model and ownership of quality. We generally have a fairly holistic model where the developers do good testing, but we also have designers, product owners, customers and, if needed, testers also testing. Developers having strong ownership of quality, with solid automated tests they create at least at the unit and API layers, really makes sense and remains the most efficient way of developing software in my experience.
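
To give a feel for the kind of API-layer check I mean developers owning, here is a purely illustrative sketch (hypothetical endpoint, payload and business rule; it assumes the requests library is available):

    # Purely illustrative API-layer test a dev team might own and run in CI.
    # The endpoint, payload and business rule are hypothetical; assumes `requests`.
    import requests

    BASE_URL = "https://staging.example.com/api"  # placeholder environment

    def test_create_order_returns_id_and_positive_total():
        payload = {"sku": "ABC-123", "quantity": 2}
        response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)

        assert response.status_code == 201
        body = response.json()
        assert "order_id" in body
        assert body["total"] > 0  # hypothetical rule: an order always has a positive total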

The difference between user acceptance testers and professional testers should be fairly significant: the former will uncover easy-to-find issues, whereas the latter often tool up to go deeper and discover more risks, and more about those risks.

If the risks are low and the team has good ownership of quality and testing then a team may not need a pro tester.

I still, though, generally recommend that a pro tester is available to each team, even if the above factors mean a much reduced need compared with some older models; they almost always find something extra that would otherwise have been missed. Often, as a result, I’ll be the solo pro tester on the team with maybe a 20% allocation, so I can handle around four different project teams’ needs at the same time.

There are some lessons in history, but I’ll post those separately.

1 Like

My take on the history part of this: you will be able to argue against each statement in different contexts, but it remains my take.

Back in the early 90’s a lot of teams did not involve separate testers at all.

Products grew, became more complex, and were made for larger user bases alongside at least some basic user-device fragmentation, and companies started bringing in dedicated teams.

Those teams often had a strong verification bias, with lots of test cases. Developers went “woohoo, not my job any more, here’s some more code thrown over the wall”, and the whole absurd idea of developer-versus-tester conflict arose.

At some point teams saw a massive amount of waste in that model. In the early 2000’s those test cases moved to being automated; that still carried a lot of waste, but risks were also increasing, so views on testing needed to change to become more risk-focused.

Some companies are only now reaching the point in their own lifecycle that matches the early 2000’s, and they may not have experienced the 90’s model at all.

Remove testers completely and you go back to the 90’s model, which we learned back then was not enough.

My view is that the model needs to take the best elements of developer ownership of quality and testing, but also leverage a more evolved use of testers as part of the team: high levels of collaboration and a bias towards discovery, exploration and investigation.

2 Likes

Arguments about whether you need a specialist or not are what this comes down to. A team that does not need many specialists is probably doing sausage-machine software development anyway, and probably doesn’t have security experts in individual product areas either. It has always been a context game.

I agree with @kinofrost, it is the death of testing, and in my books shareholders and CEOs are welcome to structure and take risks as they see fit. They do not owe me a job in my specialization. If an owner wants to sip pina-coladas on a sunny beach and not do the hard work, that’s their funeral, not mine. I find it so funny because this is always a moot argument, and as @andrewkelly2555 points out, this is where the majority of us started out; in the 90’s, software QA was uncharted territory. It was like a wave after that, with things like ISTQB and ISO jumping on the bandwagon with their specialist industry “comfort products”. Even if you do have dedicated QA resource, it’s a mountain to stay on top of, and not having a QA driving inputs into a defect injection system probably makes enough cost savings for businesses to deliver, if they spend those savings smartly.
First they mock you, then they attack you, then they pretend they always thought you were right ~ Gandhi

2 Likes

This is the long and short of it. Testers have been doing chores for developers in a developers’ world under developers’ methodologies and have never really proven themselves at scale as the skilled scientists they are. The evolution towards zero-tester teams is a natural one to select where testers are expensive, disposable check robots who complain and slow things down all the time. Let’s build a checking machine that we can ignore efficiently, one that provides the sense of stability and certainty, through mathematics and our intimate, sexual relationship with algebra and formal logic, that the social scientists never could. Another non-human to blame when it all goes wrong. It’s guaranteed to work, and when it doesn’t, then that’s just the way logical certainty is sometimes - who could have possibly predicted this? If only there were some human skilled enough at finding and reporting such risks.

Companies have decided that, as good testers aren’t available or are very expensive (because the industry churns out mostly tool developers and test-case operators), testing can be replaced with automation and folded into a development role, despite the conflict of interest/focus/mindset/purpose that introduces, a few impossible leaps of logic, and some abstractions leakier than a Welsh soup. But those failures and the resultant placebo don’t seem to result in losses for the company - perhaps quality isn’t as cost-effective as it used to be. The homeopathic approach works best in a world of spoon-fed mercury.

More powerful monopolies, the expense of managing people at scale, caring less about the product, and of course the new mechanisms of e-commerce. Companies no longer rely on you to be a customer, they create a honey trap to collect the lifeblood of the advertising industry, user data. Free software oxygenates it, data brokers pump it around the system where it’s filtered out by advertisers and the hot targeted sales are urinated into the face of humanity. This explains the salty taste in the mouth when you go online these days, and has serious implications for the epistemic risk gap, and the need for skilled testers to bridge it. Simply increase testability and reduce costs by not caring. As @conrad.connected says there’s no owed job, it has always been up to the testing industry to create a space for itself, but testing is hard and industry standards and certifications support and contribute to the elimination of proper testing at a rate that exceeds our influence to stop it.

So that’s why I write poetry now.

2 Likes

Watch out, you might be nominated as a resident poet laureate - you will be paid in advertising revenues only, of course.

2 Likes

For teams pushing changes to production multiple times per day, with comprehensive monitoring and alerting, with developers adding appropriate test coverage, and with canary releases and feature toggles, it’s hard to justify having a dedicated tester if you look at the reality of how that role is given shape in most companies: writing comprehensive UI-level tests in isolation from what developers are automating, plus manual checks that don’t go beyond rote verification of acceptance criteria and regression scenarios and are done in isolation, at the end of the process.
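
For context, the feature toggles I have in mind are nothing exotic; a minimal sketch, with made-up flag names and a plain dict standing in for whatever flag service a team actually uses:

    # Minimal feature-toggle sketch: made-up flag names, and a plain dict
    # standing in for a real flag service. The risky path ships switched off
    # and can be enabled, or killed, without a redeploy.
    FLAGS = {
        "new_checkout_flow": False,  # deployed but dark
        "search_suggestions": True,
    }

    def is_enabled(flag_name: str) -> bool:
        """Return whether a feature flag is switched on (default: off)."""
        return FLAGS.get(flag_name, False)

    def legacy_checkout(cart: list) -> str:
        return f"legacy checkout of {len(cart)} items"

    def new_checkout(cart: list) -> str:
        return f"new checkout of {len(cart)} items"

    def checkout(cart: list) -> str:
        if is_enabled("new_checkout_flow"):
            return new_checkout(cart)
        return legacy_checkout(cart)

    print(checkout(["sku-1", "sku-2"]))  # -> legacy checkout of 2 items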

For those teams it’s a complete no-brainer to nix dedicated testing roles.

But they do miss something. The risk of releasing something that’s functionally broken can become very small, because these days we (or at least web teams) have so many tools and techniques to help identify and predict issues quickly and significantly reduce their potential impact and recover from them before customers even notice. But they might miss a voice that’s able to have a big picture view of the technology and the product and the business and the SDLC and the user and use that voice to challenge (and concretely improve upon) whether a piece of work is really worth doing, whether a chosen implementation is fit for purpose, whether the specifications are complete yet small enough, whether the tests are valuable, whether the ways of working are effective, whether the pipeline is fast and reliable enough, whether all important risks are being covered, whether monitoring tools are configured well, whether teams get direct customer feedback when and where they need it, and so on.

I suppose that person might just as well be an Engineering Manager or a Tech Lead or an Agile Coach or whatever rather than a Quality Engineer, but then they’d have to be infected with a massive testing and quality parasite that lives in their brain and manipulates them to be hands-on with the team in reviewing tests, having lots of questions when upcoming work is discussed, maybe doing pair testing, and improving the team’s pipeline, observability, and ways of working and releasing. My point is there’s a tester-shaped space there.

It’s just… really hard to find people who can fill that space well. It’s also hard to measure the impact of a role that doesn’t express itself through code commits or bug tickets raised. It’s also hard to find companies who understand that testing and quality involve more than having tests in your pipeline. Therefore we’ll continue to have companies that don’t use testers optimally, and companies that don’t use testers.

2 Likes

Here’s my 2 cents:

It’s a nice article, but you have to look at the bigger picture of why they are doing it.
Big Tech companies can afford top developers who write their own unit tests, and can do without dedicated testers, for one single reason: microservices architecture.

If you look at Spotify a few years ago, every single piece of the app was basically in one developer’s corner. Their slice of the product is so small; let’s say the friend list is part of team A. It’s such a small piece of the whole package that one developer is obviously enough, without an extra SDET, because, as the article states, it does take up extra time and the ping-pong is annoying.

Hence developer A, who owns feature A, develops and writes the tests for it himself.
Of course some developers will own more than one feature, but the point is that the features are so small that they can afford not to put an SDET on them.

Their release cycle is so quick that when a bug appears they can instantly fix it (CI/CD/CD).
So yeah, it’s normal that they remove SDETs, because it makes no sense to have them for such small features of their product.

Team Combo (until 2014):

  • 12x SDEs (software development engineers)
  • 6x SDETs (software development engineer in test)
  • 2x PMs (product managers)
  • 1x EM (engineering manager)
  • 1x SDET lead

This might seem like a large team, but technically this is the team for Website A.
And every dev has his own corner to defend here. That’s how big tech companies work, and the SDETs are all over the place.


Is this the death of QA? Hell no :stuck_out_tongue: only at Big Tech companies. As mentioned above by some of you, smaller start-up and scale-up companies cannot afford this.

2 Likes

Developer here. I’ve written lots of my own tests and worked in a number of small companies with different testing setups.

I see a few issues with the big tech approach for us mortals, similar to @mtest’s comment.

Testing in Prod …

Using fast responses to bugs in production feels like a pretty bad way to run testing (esp. when you have a larger budget). Like @kristof points out, maybe because the project is so small it doesn’t matter as much. But if we can avoid a bad experience for customers, why not take that path instead?

…only works with big user bases

I think big tech also has the benefit of ginormous user bases. This means you can release to 0.01% of your users and still get some “nice coverage” on your code within a day (similar to @wilcovanesch’s point). If it totally fails for half of that (0.005% of users), no big deal: roll it back, fix it, re-deploy. And we see this - Spotify will randomly have issues and they’re like “sorry, come back in 2h” because the ops metrics have caught the issue and they’ll fix it in prod.

For most of us, we’d have to roll out to nearly 75% of our user base for a few weeks before we’d get that sort of coverage. If it failed for half of our users (37.5% of users), we’d be in hot water!!
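
Mechanically, that kind of percentage rollout is usually just deterministic bucketing on a user id; a rough sketch of mine, with made-up numbers:

    # Rough sketch of percentage-based canary bucketing (made-up numbers).
    # Hashing the user id gives a stable bucket, so the same user keeps seeing
    # the same version while the rollout percentage is ramped up.
    import hashlib

    def in_canary(user_id: str, rollout_percent: float) -> bool:
        """Deterministically place a user in the canary group."""
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % 10_000       # 0..9999, i.e. 0.01% granularity
        return bucket < rollout_percent * 100   # e.g. 0.01% -> only bucket 0

    users = [f"user-{i}" for i in range(100_000)]
    exposed = sum(in_canary(u, 0.01) for u in users)
    print(f"{exposed} of {len(users)} users see the canary (~0.01%)")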

Devs often have similar blindspots

Having devs test code has pitfalls because devs on the same team often have similar blind spots. We all took the same CompSci courses and read the same articles, so when it comes to implementing a feature we often have the same ideas and overlook the same things. That’s why even a small amount of non-developer testing is super helpful (at least to me).

:+1: for cross functional teams

I think there’s benefit to putting all the people into small teams. Having testers close to devs, designers and PMs reduces the amount of mis-reporting and mis-communication. I agree with @conrad.connected that no one is “owed a job”, but I think it would really be helpful to have dedicated testers on small teams. I think we’d all be better for it.

But, perhaps to @andrewkelly2555’s point, it’s a pendulum cycle where we’re going from no testing (90s) → dedicated (2000s) → little testing (2020s) → …

2 Likes