The Future Of The Tester Role (Personal Take)
What is a Tester
A tester is someone or something that tests a system: a voltmeter or an ammeter is a tester. The person doing the test is also a tester: the telecom lineman testing for a broken line, the radio repairman trying to figure out why the signal is not sending or receiving – they are essentially performing the role of a tester. Even programmed testing – where a computer or AI performs the checks on a system – yes, I’d still call that testing, though whether a programmed test is sufficient is another question. As we can see, testing is far from dead; it is a basic part of any working system.
What about software testing
Software created a situation where testing is more complex than tracing a signal or checking for a broken line. It is more akin to lab engineering, where the engineer thinks of all the scenarios the prototype is supposed to survive (or “Pass”, in testing parlance). This is exploratory testing. Automation becomes more of an engineer’s tool: it increases test efficiency and, theoretically, should increase the scope of testing. As long as there is a product to develop in a lab, testing will be part of the delivery process, whether it is automation-assisted or not.
The rise of AGI / ML
AGI, if truly capable of exploratory testing, would be using the same parameters and premises as a human – except it would be fully integrated with the automation checks we are already familiar with. So as far as that goes, yes, AGI should be able to test. Infallibly? My guess: no. How many times do we get a false positive in an automated check? How many times do we get a false negative? The fact that these happen indicates that automation – and AI by extension – is inherently constructed on a fallible platform. Peer testing would still be a requirement. Grammarly and other proofreading tools are a good case in point: I find them very helpful and accept some of their suggestions, but I override them to suit my writing style. (As a matter of fact, there were quite a few places where a paragraph in this piece was “allowed” by the proofreader, but I deemed it out of place and moved it elsewhere.)
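The false-positive/false-negative point can be made concrete with a toy check. Here is a minimal Python sketch (the function names are hypothetical, purely for illustration): the same automated check can pass a broken implementation because its criterion is too shallow, and a differently brittle check can fail correct code.

```python
def is_valid_email(address: str) -> bool:
    # Deliberately broken implementation under test: accepts anything with "@".
    return "@" in address

def shallow_check() -> bool:
    # Shallow automated check: only exercises the happy path, so the broken
    # implementation still passes -- a false positive ("all good" when it isn't).
    return is_valid_email("user@example.com")

def brittle_float_check() -> bool:
    # False negative: correct arithmetic "fails" because the check compares
    # floats with exact equality instead of a tolerance.
    return (0.1 + 0.2) == 0.3

assert shallow_check() is True                   # check passes, despite the bug
assert is_valid_email("@@not-an-email") is True  # the bug the shallow check missed
assert brittle_float_check() is False            # correct code, failing check
```

In both cases the fault sits in the check itself, not in the code under test – which is exactly why a second pair of eyes (human or otherwise) still matters.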
AGI that codes test scripts
What if AI could write its own automated tests? If human input is required, that is already an injection point for fallibility (shallow criteria in, shallow results out). If AGI were to test exploratorily, exhausting all the statistical probabilities, the scope of performed tests would be larger, but the AI would still need to “think” about the basic criteria – in which case, how would it reason about what actually constitutes a “Pass”? If it is machine learning, it would be basing its premises and principles on the same foundations a human would. (ML is practically human learning replicated in a box.) So, if human exploratory testing is fallible, so it would be with AI exploratory testing. (Again, AI would have greater scope in testing, but its tests would not be infallible; they are a reflection of us.)
The near perfect test scope
So let’s say the AGI was able to account for all probable scenarios in its testing, and was able to deliver its test report. How would it look? I imagine (if it were responsible enough) something in the same vein as:
Test subject: _____
Feature/s tested: (list) = Pass (or Fail)
Scenario/s tested: (list)
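The template above could be sketched as a simple data structure. A minimal Python sketch follows (the field names and sample values are hypothetical, just mirroring the template), including the obvious aggregation rule: a single failing feature fails the run.

```python
from dataclasses import dataclass, field

@dataclass
class TestReport:
    subject: str                                           # "Test subject"
    features: dict[str, str] = field(default_factory=dict) # feature -> "Pass"/"Fail"
    scenarios: list[str] = field(default_factory=list)     # scenarios exercised

# Hypothetical report a testing agent might emit.
report = TestReport(
    subject="login-service",
    features={"password reset": "Pass", "2FA enrolment": "Fail"},
    scenarios=["happy path", "expired token", "concurrent sessions"],
)

# Overall verdict: any single "Fail" fails the whole run.
overall = "Pass" if all(v == "Pass" for v in report.features.values()) else "Fail"
```

The point of listing the scenarios explicitly is the next paragraph’s caveat: anything *not* on that list is outside the report’s claims.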
Note that if the AGI simply said “All pass, systems go” – and then an unforeseen / low-probability event strikes and sends the product crashing – it would probably say that the event was a low-probability scenario and was treated as such, eliciting the automated error message: “Event scenario outside design scope.” Exactly the same response that results from human testing.
Then there is the matter of UI/UX, which is specific and unique to the observer. In this case, AGI cannot replace humans.
- If the software was designed for human consumption, humans would be the ultimate UAT testers.
- If it was designed for AGI consumption, let AGI UAT-test it – if they’re happy, all good.
- If it was destined for the Aenar species (sorry, Trekkie here), who somehow subcontracted their product on Earth, then Aenar UAT testers should decide whether the product tests “Pass” or not.
I’d suggest the rule of thumb should be, let the intended audience test.
- Will AI replace humans in the testing role: No.
- Will it be more of a peer testing practice: Ideally yes.
- Will AI and humans peer up in the testing role: Definitely (coded testing is already a testament to this).
- How soon? TBD – maybe the beginnings could be seen within our lifetime.