Is 'Modern Testing' the loss of testing as we know it?

Hi All

I have just followed the 'modern testing' course and it's prompted quite a lot of thoughts, but equally disagreement on my part based on my own personal experience and the company I work for. I guess I'm just reaching out to see what people's thoughts are on the future of dedicated/specialist test individuals and teams. Some high-profile companies like Yahoo have been quoted as removing testers altogether. Of course the world we work in, and the technology within it, is always changing, but I just can't (rightly or wrongly) foresee a point where a tester's role would be diminished and passed off to the business, or fully absorbed into a developer's role and responsibilities. Sure, we want to be LEAN and release change quicker, but cutting specialist testers seems a step too far and a risky slope for any business to go down, and I don't think it defines 'modern testing' as a strategy or as a whole. What are people's own experiences with this across different companies in the industry?



There are two parts to testing - observing, validating, and verifying WHILE the product is being built, and observing, validating, and verifying AFTER the product is built. Both are important. Unfortunately, human-led analysis and testing post-development is considered a time-consuming, costly exercise that should be minimized or even completely eradicated. This was not a serious consideration for the past 20+ years; it has only emerged recently, after a section of developers started pushing the idea that ALL testing can be automated and that this is enough, which is not true.

While unmindful repetition of test cases, without due consideration of whether a particular test makes sense for the context, is a big problem, stereotyping ALL repetitive post-development testing as 'manual regression' is totally wrong and should be argued against.

So, whoever is trying to do this is splitting the post-development testing into two efforts - (a) manual regression and (b) ‘exploratory’ testing. And they want all the ‘manual regression’ to be automated and ‘exploratory’ testing to be quick so that the CI/CD pipeline speed is not affected.

Those who really know testing, though, can vouch that 'exploratory' testing and regression are intertwined. Regression testing happens when an area being explored again needs automated checks, and exploration happens when the existing regression tests are not sufficient to cover the context. This split into 'regression' and 'exploration' should be avoided, and testing should be looked at as a holistic effort led by humans (not machines).
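To make the intertwining concrete, here is a minimal sketch (my own illustration, not from the thread; the function, bug, and test names are all hypothetical): an exploratory session surfaces an edge case, and that finding is then pinned down as an automated regression check so the pipeline guards it from that point on.

```python
# Hypothetical scenario: an exploratory session revealed that a discount
# calculator mishandled a 100% discount, producing a negative price.
# The exploratory finding is captured as an automated regression check.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, never going below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)

def test_full_discount_is_zero_not_negative():
    # Regression check born from an exploratory session
    assert apply_discount(19.99, 100) == 0.0

def test_ordinary_discount():
    # Known-risk coverage that was already in the suite
    assert apply_discount(100.0, 25) == 75.0

if __name__ == "__main__":
    test_full_discount_is_zero_not_negative()
    test_ordinary_discount()
    print("regression checks passed")
```

The two feed each other: exploration finds what the suite misses, and the suite frees future exploration from re-checking what is already pinned down.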

All I mentioned above is in the spirit of upholding software quality, and I am not against any specific method, tool, technique, or philosophy.


If this is the Alan Page and Brent Jensen view: I have not taken the course, but I am familiar with the principles and also with some earlier "no tester" ideas.

That whole "no tester" discussion was a bit of a red herring for me: yep, testing is still very much needed; we just shuffled the board a bit and gave our exploratory testers a different name.

The principles though I think are really good and I agree with a lot of it.

For example, I am very much in favour of developers doing as much testing as appropriate and doing the majority of code-based automation activities, even at the UI layer. I have seen this first-hand, many times, as much more efficient than any other model. I pretty much feel developers are great at known-risk coverage, and with coaching they can also cover unknowns, even if I do not see that as their primary focus.

All of the principles in my view have merit, and I've been in roles/projects where I could actively apply pretty much all of them; those have been better projects than others where my testing contribution was more limited.

The one thing I disagree on in my context is 'testers do less testing'. Well, that really depends on where you started from and whether you were doing a lot of low-value testing before.

So for me, rather than diminish the value of the tester role, it elevates it: I can do more valuable testing and more of the testing I enjoy, the unknown risks, coaching, more customer involvement.

Using data is also an important aspect of this view. Getting more involved in user feedback, logs, and analytics is still part of testing; it's just another risk we investigate.

I particularly like the idea of the testing role being to accelerate the whole team forward.

It does impact a lot of testers, though; my ratio is normally one tester to around ten developers, and I still stick my hand up for extra interesting projects.


I think you will find you are not alone, Matt. A number of companies have opted to drop the dedicated tester role inside teams, and even drop it as an external team function, in favour of driving quality upstream so it starts before the very first line of code is ever written. It's going to depend on your industry and on your stance on the processes used in your industry. LEAN has definitely emboldened a move towards quality being a thing everyone does all of the time.

Personally, I have noticed that security has become a "by design" thing that everyone has to do, and it is increasingly filling up screen space on people's Jira boards too. But we still have to have security professionals in every org, and we still have to have OPS teams, because of course we have Wardley Maps, which tell us that teams need to move at different speeds and in different directions (look for Simon Wardley's YouTube talk on the maps). And as such, my tester role has rotated more towards the "Release" and "OPS" end of things than it normally would. So yes, I've seen the dedicated tester role become diminished, but not under threat.


First, I agree that testers should always be a priority to have as part of a team. I also like what Venkat said about the two parts of testing. But I do see a need for someone like a Testing Advocate in those places where testers are being phased out. If there won't be a dedicated tester on a project, then the testing should be done by others on the team. A Testing Advocate would help educate developers and others on testing best practices and techniques, helping them release a successful product. Help them help themselves, if you will. I actually started a video series last week on what a Testing Advocate might look like. Check it out: I started with an overview outlining my questions, and I'd like to explore more with others who want to look at what the evolution of the role could become. Let me know if you're interested in continuing the conversation!


Conrad, great to hear about you using Wardley Maps. It's one of my favourite topics recently 🙂

I used WM to explore this very challenge last year, here are my thoughts.


The intent of Principle 7 isn’t to say that teams don’t need dedicated testers - it’s highlighting that a team may reach a level of maturity in team-owned quality and user data analysis that they realize they don’t need dedicated testers anymore. That doesn’t mean the testers get fired - most often (IME), as teams grow in this direction, the testers move to other roles on the team.

Also worth pointing out that Security is one area where I think we will always need dedicated specialists.


Why is security special?


There's a lot of highly specific domain knowledge (some of which changes daily) in the security world. Not every team needs a security specialist, but at Microsoft, for example, every developer goes through a huge amount of security training, and there's a culture there of writing secure code (if only they could build that culture around quality!).

Despite all that goodness, cybersecurity, network analysis, and compliance-related issues have, in my observation, made it necessary for every company I know of to have dedicated security experts.


Thanks. The same would go for any specialty, like performance. I guess it is a matter of bandwidth: specialists spend their time mastering their techniques, and it's important to have them in the conversations. Making every programmer performance- or security-savvy does not seem feasible.


This is as true when testing huge web-based information systems now as it was when testing huge CLI-based telecom products back in the '90s. I was a Tie Guy in between, wanting stuff to be cheap, with high quality, and delivered right on time, and knowing what you wrote in the OP helped me. Automation was easier (while not easy) with CLIs, simulators, and a regexp bonanza, but still: 90% of the time you develop software on something already in place, where happy hackers and super-senior developers have been fiddling with stuff before you. So new things pop up in the most unexpected places. And the number of "platforms" and 3rd-party components just seems to explode, and each time you go from 12.1 to 12.4 of something, well, running a regression test suite, manual or automatic, is not enough.

What is the “modern testing course” btw?


@mattskp I had the same reaction when reading that getting rid of testers was the way forward. In my mind this was just going to support the opinion of the devs on my team that testers aren't needed and don't provide any value. Ironically, those are the same people who can't test their own stuff properly, nor write clear, safe, and accessible code. I know most articles/talks mean a change of role for testers, not just removing them, but headlines and slogans matter. We should be careful with them.

In my experience, not every developer can be trained to the point that a dedicated tester is no longer needed. Most models have an ideal, motivated, and talented team in mind that you might not have in reality. Maybe it's a generational thing as well (most devs on my team are close to retirement); they never learned about testing and proper coding at uni. We still argue about why code reviews can't be skipped.
It's also important to realize that testing and developing are two different mindsets, even when the testing takes the form of automation. Switching mindsets is not easy, if it is possible at all. Our developers really struggle with proper testing during our exploratory testing team sessions.

@karentestsstuff I'm currently looking into Quality Coaching. Is there a difference from Testing Advocates, or is it just a different term?


What are you basing this statement on?


I’d guess experience?

Page has been involved with testing for a while, notably as the first author of How We Test Software at Microsoft; he co-authored the Modern Testing Principles and is one of the hosts of the AB Testing podcast.


Yes, I know that, but it seems such a sweeping statement to make about development teams in general.


It seems like it'd be an accurate statement to make if you've seen it happen once or twice, and the number of anecdotal stories of teams working this way suggests it's definitely viable?

The team I'm embedded in now (fintech, trading API) operates this way. I spend most of my time doing code reviews and nudging others to think about testing, cases they might have missed, etc., rather than doing more traditional test engineer tasks (e.g. explicit testing, writing automation), and I'm essentially just another dev on the team (though my title is still SDET).

I tend to agree with the MT principles, since they seem to work, and they create less friction than more traditional approaches (even though there's a fair amount of evangelizing and culture shifting involved, especially among management who have become accustomed to asking mundane or pedantic questions about the number of test cases, pass/fail rates, etc.).

I guess I didn’t read Page’s original statement as that sweeping, especially with the “a team may …” caveat in there?


I’ve coached multiple teams across multiple companies to a state where they didn’t need dedicated testers, and I’ve talked to others who have done the same.

Again, every team is different, and not every team can get to this state, but I've seen it happen enough to say that it's not a baseless claim.


Oh, I’m not saying it’s a baseless claim. Far from it. In fact, it seems to me that what you’re saying is that in your experience, it’s possible for some teams to achieve that level of maturity. So, what would you say are the qualities that already need to be present in a team for that to happen?


Honestly - it's the other six principles. When we focus on improving the business overall, on continuous improvement and reflection, and, probably most importantly, on the quality culture of the team, then specialists often make less sense.

In terms of the quality culture transition guide, teams who are at the competent or optimizing level in all categories need fewer (and sometimes zero) testing specialists.


I enjoyed your video and am encouraged by the content. I moved out of a Test Engineer role a couple of years ago for a few reasons, one of which was that I felt underappreciated when I advocated for testing or quality.
Most people saw my role as an automation provider. I had moved on from that thinking as I grew into Test Engineering; that is, I saw myself as someone who helped the project team design, plan, and execute tests in collaboration with the whole team. I thought I was advocating for the testers and for quality, but while I sensed patient acceptance, the action continued to be a focus on automation.
I like the idea of a Testing Advocate. I believe the components of such a role (quality, testing, testing methods, and so on) exist across multiple project team members today. Perhaps, today, a Testing Advocate is a collaborative effort within a project team, which would be a great start. It could grow into the vision you expressed!