Testing As An External Team - Need Opinions

So my company wants to create a model where the QE team is viewed as an external entity - almost an outsourcing model, except it all stays within the company.

The idea here is that they do not want scrum teams to have dedicated QE person(s), and they want to keep communication between the devs and the QE folks limited, so that the people testing are not influenced by the developers. Testers would be expected to use only the product specs to validate the product, and the expectation is that very few, if any, defects would be found.

In some ways, I understand the thought here and at a high level I can see the appeal of it.
However, I do not feel this is a good way to scale and get coverage. I feel a more integrated model is what teams need, and that QE folks would not be easily influenced on how or what to test, as this is their speciality. I very much prefer a collaborative model, which I feel builds a much more solid product over time.

So I am looking for anyone that has any experience in this and your thoughts on it all. I just want to gain some more outside perspective.



First thought:

Maybe you should take a look at “The Book of Five Rings” by Miyamoto Musashi. In way-too-short form: Musashi was a duellist in feudal Japan, famous for winning duels, and he wrote the book to explain how he did it - by using all the tools at his disposal in order to win.

Second thought:
OK, the Book Of Five Rings may not apply to your context, so I would suggest this series of blog posts (as a starter! This rabbit hole goes deep…). Make sure you read the whole series.

The {person who matters} may not go farther than the 2-axis diagram, but this may be all they need.
The way you know that things are wrong in your application relies on more than just a product specification. A LOT more. By splitting the teams as I interpret your message, you would lose “Conference” from your oracles. By removing the design documents, you would lose a big portion of “Reference”.

Could you work this way? … Probably.
Would it be better if you were one unified team? … Almost certainly.


I’m going to be rather blunt here: your organization’s managers are living in a dream world of happy unicorns.

Unless your devs are working on a new product with completely clear, unambiguous requirements (which almost never happens), there will be places where their understanding of what is supposed to happen will not be the same as your understanding of what is supposed to happen. This in turn may or may not match the customer’s expectation of what should happen. Hence, there will be defects.

If testers are not embedded or working closely with the dev team(s), those defects will be effectively “thrown over the wall” to devs who may well be working in a completely different area of the code, which will cause rework and scheduling issues.

Being influenced on what and how to test is a very small risk compared to a critical misunderstanding causing months of rework (something I’ve found can be surprisingly common). I’ve personally found glaring mistakes in product specs which make software built to those specs unworkable. I don’t see anything in your company’s proposed model to deal with this risk.

I think that you are correct to worry about this. It will turn a scrum/agile process into a waterfall process, with all the attendant risks involved. The whole point about having test specialists involved early is so that everyone is talking to each other and has a better idea of where the risks are with the product.

I’d recommend some research into costs of rework and defects under agile models vs costs of rework and defects under waterfall models. If you can’t change your bosses’ minds, I’d strongly recommend keeping track of time spent in rework and defect management, so you can give your bosses as accurate an estimate of costs of their preferred method as possible.


I once worked on a project like that; not only did we work with remote devs, but the project specs were drawn up by consultants and the person who had commissioned the consultants had left the company.

We worked to a sort of agile process, whereby deliverable product was thrown over the wall once a month. We had weekly telephone conferences with one of the devs to decide bug triage issues and not much else. There was no discussion of “what is this bit supposed to do?” In the end, we made it work by various devious ways, though there was a lot of foot-dragging by the devs. But in the end, they delivered the last sprint, it all fitted together, we did integration testing and demo’d it to senior management. The CEO declared it to be “the best-tested software we’ve ever had.”

Then it got rolled out to beta test - except that the marketing people were going around, bigging the app up, and the beta test very quickly became a rollout across our whole customer base.

Then it all fell apart as we found:

  • Customers wanted to use the app in ways we had never anticipated
  • No-one had specified which legacy systems the app was supposed to interface with to ensure that data hand-off to the legacy systems would work
  • No-one had looked at the changes required to the legacy systems to accommodate new data types and features that the new app was offering clients for the first time.

One of the legacy systems that the new app wouldn’t hand off data to was the invoicing system. At one point we had about £3 million worth of invoices tied up in limbo between the app outbox and the invoicing system. You can guess how that went down with senior management.

At the very least, I would recommend thinking very carefully about this idea, and I endorse Kate’s ideas in the previous post. Have management said what possible advantages they can see from conducting all their testing from inside what effectively sounds like a locked room?


I have to say my immediate thought on reading this is that your management are preparing the ground to get rid of in-house QA and outsource it. Yes, I’m a cynic, but I’m old and battle-scarred enough from corporate politicking not to take a statement like this at face value, and instead to ask “what’s the actual, as opposed to the stated, motivation for this?”
I wholeheartedly agree that it’s a major step backwards into waterfall; any good tester will stand their ground against a developer if they don’t think something in delivered code is right, and will escalate to the product owner if needs be to make the call on what’s right. Software development is a lot more efficient and pleasant for all involved when the various roles are working closely together and communicating as much as possible; this is the major driver for ditching the siloised methods of the past and embracing the agile/devops ways of working.


Thanks everyone for so much insight and so many thoughts on this. It’s great to know I am not off in my thinking, and this validation is great for me.

So to follow up on a few select comments:
@brian_seg Thanks so much for the links. I am looking forward to reading the blog posts but your thoughts very much align with my concerns as well.

@katepaulk This right here is exactly what I have been thinking - it sounds good in theory, but reality proves otherwise.

Unless your devs are working on a new product with completely clear, unambiguous requirements (which almost never happens), there will be places where their understanding of what is supposed to happen will not be the same as your understanding of what is supposed to happen. This in turn may or may not match the customer’s expectation of what should happen. Hence, there will be defects.

@robertday I appreciate the story. At first I thought it was going to be about how successful it was, but then the ending told the story, and I think it aligns with the thinking from everyone here. Management has shared the intent: we, as QE, should be able to use only the docs and specs to validate that the software not only works as intended but does so with a high degree of quality, i.e. no major or obvious defects. If there are defects, the dev teams will be called out, and it will go back up the chain to the product people, because they defined the specs.
We are not mature enough as a company, or in our processes, to do this IMO.

@professorwoozle I can see why you would be skeptical, and your battle scars would likely back that up. However, the company actually recently moved from an outsourced model to bring things in-house, so a move back to outsourcing won’t happen.
That said, I agree that being closer and more involved is the way to go, and that this proposed model is the opposite of agile.

Overall, I feel I am correct in my read of this approach. At the least, I may look at some hybrid model to cover my bases.
Having the QE team be an integrated member of the dev teams is what we should be working towards. I want to drive collaboration and ask the questions that only QE has the lens for, so everyone is clear and aligned on what is being built and delivered.

Thanks again everyone for the input so far. This is so great :heart:


The cost to fix a defect goes up exponentially as time goes by.

(How quickly the dev team can throw bricks over a wall is really limited by how many bricks they have to throw. I am assuming there is only one product. Eventually, someone will ask what productive work gets done in the dead time while we wait for a brick to come back.)
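That “cost goes up exponentially” claim can be sketched with some numbers. The multipliers below are illustrative round figures in the spirit of the classic cost-of-change curve, not measurements from any real project:

```python
# Illustrative only: assumed relative cost multipliers for fixing a defect,
# depending on the phase in which it is caught. The figures are round
# numbers chosen to show the shape of the curve, not real project data.
PHASE_COST = {
    "requirements": 1,
    "design": 5,
    "implementation": 10,
    "system test": 20,
    "production": 100,
}

def relative_fix_cost(phase: str) -> int:
    """Return the assumed relative cost of fixing a defect found in `phase`."""
    return PHASE_COST[phase]

if __name__ == "__main__":
    # Show each phase's cost relative to catching the defect at requirements time.
    base = relative_fix_cost("requirements")
    for phase, cost in PHASE_COST.items():
        print(f"{phase:>14}: {cost // base}x")
```

Under those assumed multipliers, a defect that slips all the way to production costs two orders of magnitude more to fix than one caught while the spec was being written - which is the whole argument for having QE in the room early.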

Miyamoto Musashi looks like an interesting case study to learn about, and… yeah, welcome back to the club, Stevie. :slight_smile:


Exactly. I feel that our QEs can fill the void by adding test case creation, automation, general exploratory testing, and the like.
However, as we all know, the further down the line a defect gets, the more expensive it is to fix. So I want the QEs to be on top of things and asking some of these questions before any code is cut - even if it is just to gain a better understanding of the solution, so they can then get started on writing test cases and UATs.

We have many products that together make up something of a single point of entry into our products and services. Under the covers there are lots of products that are owned independently and operate as independent entities, so the thought of a single brick being so key that it holds up the whole delivery is a bad, bad thought.
Also, in some cases there are external factors that I feel our Product and Dev people just may not account for, which QE may be able to highlight early on, getting discussions going so as to avoid issues down the line.


If your products are interacting with any third party, you’re going to need to highlight this - my rule of thumb for testing interactions with a third party system was to start by doubling what I thought it would take and assume that this was the conservative estimate.

Similarly, with so many interacting products, your developers will - through no fault of their own - tend to have a deep focus into whatever they’re working with. Testers tend to have a broader, less deep awareness and be more tuned in to how the different pieces interact. Both perspectives are necessary to produce high quality software.


You are so right @katepaulk !
That has been my argument as well as my experience in my career when it comes to working with developers.

As testers, we look at things a bit differently. We can deep dive if we need to, but generally we try to keep a higher-level lens on things.
Most often, when I come across issues and then have a conversation with the developer who did the work, it comes out that they never considered that what I did could be a scenario that would happen. Like you say, this is no fault of their own; they need to focus on the product they are delivering. This is where we as quality practitioners come in and ply our trade.

Glad to hear that you’re not in imminent danger of getting offshored Stevie! If your management are so keen on specs and docs as a source of truth, ask them who’s tested that documentation to make sure it is indeed accurate, comprehensive and unambiguous - I’d bet that the thought of subjecting it to test before using it hasn’t crossed their minds, given they want to siloise development and testing.

I can’t imagine not having a close working relationship with the developers. They are not my enemy. I am not their enemy. We do need to work together and protect the business as best we can.

Totally agree @darth_piriteze
I have been in both situations and much, much, MUCH prefer working hand in hand with my devs over working in a silo. I have also been working very hard to ensure that the image of QE is not one of finding fault with the developers’ work, but of being a part of the software development process.
We win as a team and will also fail as a team.

Thanks for the call-out, Stevie. It might not have been so bad if the consultants had done their job well; but they didn’t. It looked as though they had never gone out to any of the clients to ask what they actually wanted from the sort of app the company was thinking of. And it ended badly for most of us, which is why I’m not with that company any more.

I second Brian’s recommendation of Musashi’s Book of Five Rings. I wrote a set of blogs based on the principles of strategy in that book, which you may find of interest: https://probetesting700171536.wordpress.com/2018/10/02/the-way-of-the-tester/


Thanks for this blog post, @robertday. Very much to the point, with some great things to consider and take away.
I am very much dedicated to showing my company the direction I feel we need to move in to do things the best we can. However, I need to change the mindset of others first.
This post is about getting some outside perspective from those who have lived it as well, to make sure my experience is not an outlier.

Thanks again :slight_smile:

I recommend the book Team Topologies; it gives much better guidance and research on team structures. It sounds like this model would effectively be using testing as a service, and that is going to be a bottleneck. I imagine with enough money anything could be possible.
