Bloggers Club August 2022 - The Future Of The Tester Role

Hi folks!

Later than usual, but here’s the topic for this month, which came about in last month’s “Crowd sourcing of topics”. As usual, feel free to tweak the title if that suits you better, e.g. “The future of testing as an X”.

The Future Of The Tester Role


This topic has so much scope for people to share their experiences and views on how the “tester” role and similar roles are adapting to changes in the industry. I’m sure we’ll see some ideas on Quality Engineering, Quality Assistance and more!


How to get involved

  • Write a blog on the above topic any time in August, ideally by the 31st :writing_hand:
    • I’ve made a few late submissions. We’re also in Summer time for the Northern Hemisphere.
  • It can be as long or as short as you want it to be
  • Share a link to the blog on this thread :eyes:
    • If you don’t have a blog feel free to write directly in the thread.
  • Receive lots of support, encouragement, and love from the community :heart:
  • It’s possible you’ll get a shout out from the Ministry of Testing Twitter account @simon_tomes :grinning:
  • If you want to get reminders to submit your blog, RSVP below

Here you go, my offering:


Oh lovely, a bloggers club – I hadn’t discovered this until now (oh why lord… why… so late…)

But now I have started a blog draft with some thoughts and will contribute. Awesome.


I’ve been mulling over my own future and what roles there might be next year and further down the line, and finally wrote a blog post on it.

It’s rather focused on myself vs. the future of testing, and I’ve yet to figure out how to reduce the verbosity of my brain when writing blogs, but I thought I’d share it:


oh my …

I am a bit behind… It might be published by the weekend at the latest…


The Future Of The Tester Role (Personal Take)

What is a Tester
A tester is someone or something that tests a system: a voltmeter, an ammeter – these are testers. The person doing the test is also considered a tester: the telecoms lineman testing for a broken line, the radio repairman trying to figure out why the signal is not sending or receiving – they are essentially performing the role of a tester. Even programmed testing – where a computer or AI performs the checks on a system – yup, I’d still call that part of testing, though whether a programmed test is sufficient is another question. As we can see, testing is far from dead; it is a basic part of any working system.

What about software testing
Software created a situation where testing is more complex than just tracing a signal or testing for a broken line. It is more akin to lab engineering, where the engineer thinks of all possible scenarios that the lab prototype is supposed to survive (or “Pass”, in testing parlance). This is exploratory testing. Automation becomes more of an engineer’s tool: it increases test efficiency and, theoretically, should increase the scope of testing. As long as there is a product to develop in a lab, testing will be part of the delivery process, whether it’s automation-assisted or not.

The rise of AGI / ML
AGI, if truly capable of exploratory testing, would be using the same parameters / premises as a human – except it would be fully integrated with the current automation checks we are familiar with. So as far as that goes, yeah, AGI should be able to test. Infallibly? I’d take a guess: “No.” How many times do we get a false positive in an automated check? Or how many times do we get a false negative? The fact that these do happen indicates that automation – and AI by extension – is inherently constructed on a fallible platform. Peer testing would still be a requirement. Grammarly and other text proofreader software are a good case in point; I find them very helpful and accept some of their suggestions, but override them to suit my writing style. (As a matter of fact, there were quite a few places where a paragraph in this piece was “allowed” by the proofreader, but I deemed it out of place and moved it elsewhere.)
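Those false positives and false negatives can be illustrated with a tiny sketch. Everything here is hypothetical (the function names `fetch_user`, `shallow_check`, and `brittle_check` are invented for illustration, not taken from any real framework): a check with a shallow oracle passes even when the payload is broken, while an exact-match oracle fails even when the behaviour is fine.

```python
def fetch_user(user_id):
    # Imagine a buggy service: the status code is fine,
    # but the payload is wrong ("name" should never be empty).
    return {"status": 200, "name": ""}

def shallow_check(response):
    # False positive: only inspects the status code, so the check
    # "passes" even though the payload is broken.
    return response["status"] == 200

def brittle_check(response, snapshot):
    # False negative: an exact-match oracle flags correct behaviour
    # as broken whenever any harmless extra field appears.
    return response == snapshot

resp = fetch_user(42)
print(shallow_check(resp))                   # True  -> false positive
print(brittle_check(resp, {"status": 200}))  # False -> fails on a harmless difference
```

Neither check is "wrong" in isolation; each encodes an assumption its author made, which is exactly the fallible platform the paragraph above describes.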

AGI that codes test scripts
What if AI could write its own automated tests? If human input is required, that would already be an injection point for fallibility (input shallow criteria, output shallow results). If AGI were to test exploratorily, exhausting all the statistical probabilities, it would perform a larger scope of tests, but it would still need to “think” about the basic criteria – in which case, how would it reason about what actually counts as a “Pass”? If it is machine learning, it would base its premises / principles on the same things a human would. (ML is practically human learning replicated in a box.) So, if human exploratory testing is fallible, so it would be with AI exploratory testing. (Again, AI would have greater scope in testing, but its tests would not be infallible; they are a reflection of us.)

The near perfect test scope
So let’s say the AGI was able to account for all probable scenarios in its testing and deliver its test report. How would it look? I imagine (if it was responsible enough) something in the vein of:

Test subject: _____
Evaluation: ______
Feature/s tested: (list) = Pass(or Fail)
Caveat/s: (list)
Scenario/s tested: (list)
Risk/s: (list)

Note that if the AGI simply said “All pass, systems go” – and then an unforeseen / low-probability event strikes and sends the product crashing – it would probably say that the event was a low-probability scenario and was treated as such, eliciting the automated error message: “Event scenario outside design scope.” Exactly the same response that results from human testing.

Then there is the thing about UI/UX that is specific/unique to the observer. And in this case, AGI cannot replace humans.

  • If the software was designed for human consumption, humans would be the ultimate UAT testers.
  • If it was designed for AGI consumption, let AGI UAT test it – if they’re happy, all good :slight_smile:
  • If it was destined for the Aenar species (sorry, Trekkie here), who somehow subcontracted their product on earth, then Aenar UAT testers should decide whether the product tests “Pass” or not.

I’d suggest the rule of thumb should be, let the intended audience test.

Personal Conclusion:

  1. Will AI replace humans in the testing role: No.
  2. Will it be more of a peer testing practice: Ideally yes.
  3. Will AI and humans peer up in the testing role: defo (coded testing is a testament to this).
  4. How soon? TBD :slight_smile: maybe the beginnings could be seen within our lifetime?

Cheers Ben, I really enjoyed that! I saw a lot of parallels with my own journey. I wonder how the predictions will pan out :eyes:


Better late than never! Summer holidays are great for a break. Not so much for writing time.

Thanks for the post Rich. I particularly liked the inclusion of your dream role. That’s a great way to sum up what you want.

Do you think you need to specialise to evolve with the industry, e.g. into pen testing?

Firstly welcome to the club! And thanks for getting involved.

The idea that AI/ML will not replace people but will provide us with additional tools – more of an “automation in testing” sort of idea – is a great take.

That is a tough question. Personally I think it is likely for my own career, because the majority of the industry seems to be moving to automation engineers, which isn’t something that I want to do. I think this pushes me to look to be more specialised. At least if I want a good salary.

That said, if we were to see developers take more responsibility for automation, then people employed as testers/test specialists might be most desirable if they have a range of skills, which does seem quite exciting if it were to happen.
