AI isn’t intelligent. ML doesn’t really “learn”. I always say that you should treat “Artificial Intelligence” with the same emotion as “People’s Democratic Republic Of”.
If AI ever becomes truly intelligent, to the point that it can emulate the heuristic, emotional learning that humans do, then it may be a threat to testing jobs. But when AI can threaten testing jobs, it can also threaten coding and management jobs.
As it stands, the testing job has been, and continues to be, under threat from poor management decisions, offshoring, replacement of testers with test cases, and poorly considered broad automation. Most of all, I believe, we threaten the future of good testing with poor testing, poor testers and poor process implementation. This is a vastly bigger threat than AI and will remain so for the foreseeable future.
Concerning testing ML: Machine learning is incredibly broad, but it often seems to be used to create algorithms with vast social impact. The YouTube algorithms that suggest videos and the advertising algorithms that select adverts are incredibly powerful. If the algorithm decides it should show a politically right-leaning person right-wing videos that become more and more extreme, it could easily radicalise people. YouTube has sometimes shown children innocent-looking videos containing extreme, unsuitable content. An advertising algorithm may detect, or predict, when a bipolar person is having a manic episode and advertise gambling to them. Large companies may use learning algorithms to detect and counter bad press with automated astroturfing. This feels, to me, like a good place for testers to have an impact: illustrating the possible negative impacts of these algorithms in ways that businesses care about.
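As one hedged sketch of what that impact could look like (the recommender, its “extremeness” score and the user model below are entirely hypothetical), a tester might simulate an engaged user who always follows the most extreme suggestion and show how far a session drifts:

```python
import random

def recommend(extremeness: float) -> list[float]:
    # Stand-in for a real recommender: suggest items near, and slightly
    # beyond, the extremeness of the user's last watch (an engagement bias).
    return [min(1.0, extremeness + random.uniform(-0.05, 0.15)) for _ in range(5)]

def simulate_session(start: float = 0.1, clicks: int = 20) -> list[float]:
    # Model an engaged user who always clicks the most extreme suggestion.
    trail = [start]
    for _ in range(clicks):
        trail.append(max(recommend(trail[-1])))
    return trail

trail = simulate_session()
print(f"session drifted from extremeness {trail[0]:.2f} to {trail[-1]:.2f}")
```

A consistently rising trail is the kind of concrete, business-legible evidence of radicalising drift that a tester can put in front of decision-makers.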
Concerning using ML to test: This is also a broad subject, but ML could be used, for example, to power predictive alerts by “learning” elements of system states and associating them with failure types. Such a system could give the tester a breakdown of the states commonly seen alongside a failure, helping them identify the cause of a problem. That’s a great observability tool, and it improves the testability of the system. ML is also already being used to automatically repair automated check suites: when an element’s name changes and checks start failing, the system examines the failures and updates the affected locators itself.
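As a hedged sketch of the predictive-alerts idea (the state features, failure labels and data below are all invented for illustration), a simple classifier can learn which system states accompany which failure types and print a tester-readable breakdown:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical snapshots of system state captured at the moment of failure:
# [cpu_load, free_memory_mb, open_connections, queue_depth]
snapshots = [
    [0.95, 210, 4800, 120],
    [0.40, 3500, 300, 5],
    [0.92, 180, 5100, 140],
    [0.35, 4000, 250, 2],
]
# The failure type observed alongside each snapshot.
failures = ["timeout", "auth_error", "timeout", "auth_error"]

model = DecisionTreeClassifier(max_depth=3).fit(snapshots, failures)

# A human-readable breakdown of which states predict which failure type,
# so a tester can see the common states behind a failure instead of guessing.
print(export_text(
    model,
    feature_names=["cpu_load", "free_memory_mb", "open_connections", "queue_depth"],
))

# Classify the likely failure type for a state observed right now.
print(model.predict([[0.90, 200, 4900, 130]]))  # -> ['timeout']
```

A decision tree is chosen here purely because its rules are readable; the observability value is in surfacing those rules to the tester, not in the specific model.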
If we’re not going to screw this up, I’d recommend we ask hard questions of our ML solutions, including how the biases we build into a solution will show up in its results. Remember when BDD came along and people were lured by the idea that the “human-readable” side really was what the software was doing? When people read “Valid Login: Passed” and assumed the login had been suitably tested, despite the fact that the human-readable text was parsed and its meaning abstracted away (there’s a sketch of that gap below)? That’s exactly what’s going to happen with ML solutions: we’ll find “insight” in our data that reflects whatever we wanted to believe or set out to prove, because these solutions won’t be implemented by scientists or mathematicians. We are inventing ever more powerful ways of lying to ourselves and others whilst suppressing the depressingly costly and uncomfortable business of critical thinking.

The biggest opportunity I see here, for testers, is one of personal responsibility. For businesses I see amazing opportunities to pad the bottom line and save on costs, at the expense of employees and end users, with software that makes impactful decisions with no human interaction. My money’s on the money, I must say.
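To make the BDD analogy concrete, here’s a hedged, hypothetical illustration (behave-style step definitions; the `context.client` fixture and the endpoint are invented for this example) of how “Valid Login: Passed” can be reported while the code underneath verifies almost nothing:

```python
# features/login.feature (hypothetical):
#   Scenario: Valid Login
#     When the user logs in with valid credentials
#     Then the login succeeds

from behave import when, then

@when("the user logs in with valid credentials")
def step_login(context):
    # context.client is an invented test fixture for this sketch.
    context.response = context.client.post(
        "/login", data={"user": "alice", "password": "hunter2"}
    )

@then("the login succeeds")
def step_assert(context):
    # The report will read "Valid Login: Passed" as long as the server
    # returns 200: nothing here checks that a session was created, that
    # the right user was logged in, or that bad credentials are rejected.
    assert context.response.status_code == 200
```

The human-readable scenario promises far more than the parsed steps actually verify; the same gap will open between ML “insights” and what the model actually computed.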