Will AI/ML take my job?

I’ve been slowly learning about AI and ML over the past year. I’m currently reading the book Real-World Machine Learning. I’ve heard people say (including my own teammates when I first joined mabl) that ML-driven test automation tools would replace all the testers. Of course, I’ve heard claims like that (“Test automation tools will replace all the testers”) for decades.

Now that I know more about ML, I see that, like test automation, it can free up our time to do more valuable things than crunching data, and it also opens new opportunities. “Feature engineering”, where you improve the data used to create the ML model, looks super fascinating and, like exploratory testing, requires creativity, critical thinking and imagination. It looks like something testers would be really good at, though of course it requires learning new skills.
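To make that concrete, here’s a minimal sketch of what a feature engineering step can look like in practice. The dataset, column names, and derived features are all invented for illustration; it just shows the idea of turning raw data into more informative inputs for a model.

```python
# Toy feature-engineering step: derive new, more informative columns from
# raw data before training a model. Data and column names are invented.
import pandas as pd

raw = pd.DataFrame({
    "signup_date": pd.to_datetime(["2020-01-05", "2020-03-20", "2020-06-01"]),
    "last_order_date": pd.to_datetime(["2020-02-01", "2020-03-25", "2020-08-15"]),
    "orders": [3, 1, 12],
    "total_spend": [120.0, 15.0, 640.0],
})

features = pd.DataFrame({
    # Domain knowledge turned into features: recency, tenure, average value.
    "days_since_last_order": (pd.Timestamp("2020-09-01") - raw["last_order_date"]).dt.days,
    "tenure_days": (raw["last_order_date"] - raw["signup_date"]).dt.days,
    "avg_order_value": raw["total_spend"] / raw["orders"],
})

print(features)
```

Deciding which derived columns are worth creating is exactly the kind of creative, domain-informed judgement that feels close to exploratory testing to me.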

A lot of you probably know far more about AI and ML than I do; what opportunities do you see? Or do you think it WILL take our jobs?

Hello @lisa.crispin!

I’ve been intrigued with the idea of using AI/ML for testing!

The Opportunity
I strongly suspect that, even at the current level of the technology, I could turn an AI/ML-driven bot loose on one of our websites and it would determine most of the workflows in less than a day. With that information as a baseline, I would expect to run it daily and have it report the differences it found (differences due to deployments, data diversity, new errors). I believe we would still need someone to make sense of the information that is gathered and determine how to make it useful as a testing tool.
This is but a rudimentary exploration. As the bot learns more, I might expect more. As the technology matures, I might expect more.
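To make the baseline-and-diff idea concrete, here is a minimal sketch. The workflow names and steps are invented; a real bot would produce them by crawling the site, but the daily comparison could be as simple as this.

```python
# Hypothetical sketch: compare today's discovered workflows against a stored
# baseline and report the differences. The workflow data here is invented;
# a real AI/ML-driven bot would produce it by exploring the site.

baseline = {
    "login": ["open /login", "enter credentials", "land on /dashboard"],
    "checkout": ["add item", "open cart", "pay", "see confirmation"],
}

todays_run = {
    "login": ["open /login", "enter credentials", "land on /home"],  # changed
    "search": ["open /", "type query", "see results"],               # new
}

new_flows = set(todays_run) - set(baseline)
missing_flows = set(baseline) - set(todays_run)
changed_flows = {name for name in set(baseline) & set(todays_run)
                 if baseline[name] != todays_run[name]}

print("New workflows:    ", sorted(new_flows))
print("Missing workflows:", sorted(missing_flows))
print("Changed workflows:", sorted(changed_flows))
```

The interesting (human) work starts after the report: deciding which differences are deployments behaving as intended and which are problems.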

Take Our Jobs
Maybe. Change our jobs? Certainly. I moved from development to testing a while back. At that time, automation was all the rage (the Star East conference that year had everyone thinking “automate everything”). There have been painful lessons for those who only answered the “Can I?” question, ending up with so much automation that they need people just to maintain it. Those who patiently answered the second question, “Should I?”, are probably benefiting from smaller, isolated tests and frequent deployments. This is why the opportunity I described above may seem benign: I’m more curious to see how the technology grows, and to provide feedback to the community, so we (WE!) can direct it to benefit those who need the expertise of our testing services.

Joe

Hi Joe,
Do you know of any tools right now that can determine workflows on their own? I see this as a huge potential use of ML, but I haven’t seen it in action. I mostly see it used for visual checking and performance checking.

I like what you say about changing our jobs and “should” rather than “can”. Great way to look at these questions. I think we have an opportunity to drive development of test tools that use ML to do what we want them to do for us.
– Lisa

I think Eggplant markets itself towards User Journey AI. I am not overly concerned, because the demand for software is more than humans can supply. Supervised learning, where AI/ML can learn a function on its own from labelled examples, is the most likely to make a huge impact: for example, automating credit decisions, classifying photos, etc.
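As a toy illustration of that kind of supervised learning (entirely made-up applicants and features, with scikit-learn standing in for whatever a real system would use):

```python
# Toy supervised-learning example: learn a credit decision from labelled
# historical examples. Data, features, and labels are invented.
from sklearn.linear_model import LogisticRegression

# Features: [income_k, existing_debt_k, years_employed]
X = [
    [65, 5, 8],
    [30, 20, 1],
    [80, 2, 12],
    [25, 15, 0],
    [55, 10, 5],
    [20, 25, 1],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = declined (historical outcomes)

model = LogisticRegression().fit(X, y)

applicant = [[45, 8, 3]]
print("Approve?   ", bool(model.predict(applicant)[0]))
print("Confidence:", round(model.predict_proba(applicant)[0].max(), 3))
```

The model only ever reflects the historical decisions it was trained on, which is exactly why this kind of system needs serious testing.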

Hello Lisa!

Thanks!

I don’t know of any specific tools but will continue to search. I had the impression that testim and mabl might do this but have not started to explore them. Above, @alanmbarr mentions Eggplant might do what I described.

Joe

What about the jobs of those who test the AI applications?
Others here mentioned the decades-old ā€œautomated testing will take the job of testersā€ topic.
Just like any other type of technology, it can be useful and improve overall efficiency, meaning that fewer people might be needed, but anyone who’s working with AI right now will tell you it’s a long, long way from completely taking over such complex tasks.
Michael Bolton has posted a number of Slack messages & tweets about this :grinning:

Technology doesn’t take away your job. Managers take away your job.

If your skills are such that there’s a risk that a manager will decide that you can be replaced by an AI system, then there are two alternatives:

  1. Educate the manager about the areas where the AI is inferior to the human system; or

  2. Refocus your own skill set to areas that AI systems don’t cover.

Different managers will be amenable to different approaches. (And of course, middle managers are perhaps as much at risk from expert systems and changing work practices as any IT professional.)

AI isn’t intelligent. ML doesn’t really “learn”. I always say that you should treat “Artificial Intelligence” with the same emotion as “People’s Democratic Republic Of”.

When AI is truly intelligent, to the point that it can emulate the heuristic emotional learning that humans do, then it may be a threat to testing jobs. When AI can threaten testing jobs it can also threaten coding and management jobs.

As it stands, the testing job has been, and continues to be, under threat from poor management decisions, offshoring, replacement with test cases, and poorly considered broad automation; and, I believe, we threaten the future of good testing most of all with poor testing, poor testers and poor process implementation. I think that this is a vastly bigger threat than AI and will continue to be so for the foreseeable future.

Concerning testing ML: machine learning is incredibly broad, but, for example, it is often used to create algorithms with vast social impact. The YouTube algorithms that suggest videos and the advertising algorithms that choose advertising are incredibly powerful. If the algorithm decides to show a politically right-leaning person right-wing videos that become more and more extreme, this could easily lead to people being radicalised. YouTube has sometimes shown innocent-looking videos to children with extreme, unsuitable content. An advertising algorithm may detect or predict when a bipolar person is having a manic episode and advertise gambling to them. Large companies may use learning algorithms to detect and deal with bad press using automated astroturfing. This feels, to me, like a good place for testers to have an impact, by illustrating the possible negative impact of these algorithms in ways that businesses care about.

Concerning using ML to test: this is also a broad subject, but it could be used, for example, to provide predictive alerts by “learning” elements of system states and associating them with failure types. The system would be able to give the tester a breakdown of the states commonly associated with a failure, to help them identify the cause of a problem. That is a great observability tool that improves the testability of the system. ML is also already being used to automatically repair automated check suites: when an element’s name changes and a locator fails, the tool detects the failure and updates the check code on its own.
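A rough sketch of the first idea, with made-up state metrics and failure labels, and a decision tree standing in for whatever model a real tool would use:

```python
# Hypothetical sketch: "learn" which system states tend to accompany which
# failure types, then use that to suggest a likely cause for a new failure.
# Metrics, values, and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [cpu_percent, memory_percent, queue_depth, db_latency_ms]
states = [
    [95, 60, 10, 40],
    [40, 92, 5, 35],
    [50, 55, 500, 30],
    [45, 50, 8, 900],
    [97, 65, 12, 45],
    [42, 95, 6, 38],
]
failure_types = [
    "cpu_exhaustion", "memory_leak", "queue_backlog",
    "slow_database", "cpu_exhaustion", "memory_leak",
]

model = DecisionTreeClassifier().fit(states, failure_types)

# When a new failure occurs, suggest a likely cause from the observed state.
observed = [[48, 52, 650, 33]]
print("Likely failure type:", model.predict(observed)[0])
```

The suggestion is only ever a hint; a tester still has to decide whether it holds up.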

If we’re not going to screw this up then I’d recommend that we ask important questions of our ML solutions, including how our biases in the implementation of the solution will show up in the results. Remember when BDD came along and people were lured by the idea that the “human-readable” side really was what the software was doing? When people read “Valid Login: Passed” and assumed that the login had been suitably tested, despite the fact that the human-readable code was parsed and the meaning abstracted away? That’s exactly what’s going to happen with ML solutions: we’ll find “insight” into our data that betrays whatever we wanted to believe or whatever we want to prove, because it won’t be implemented by scientists or mathematicians. We are inventing more and more powerful ways of lying to ourselves and others whilst suppressing the depressingly costly and uncomfortable business of critical thinking. The biggest opportunity I see here, for testers, is one of personal responsibility. For businesses I see amazing opportunities to pad the bottom line and save on costs at the expense of employees and end users, with software that makes impactful decisions with no human interaction. My money’s on the money, I must say.

Everything I read cautions about the potential for flawed models and algorithms and the need for intense testing of ML, so that’s another reason I think MORE not FEWER jobs.

I only see a new technology that we could “eventually” use in testing. Testing is not about tools; it is mostly about thinking.

“AI” is a “big” word that DOESN’T mean anything at all… it’s used by marketing people to make products sound “cooler”. I think we are getting closer to an “AI big crash” before the AI field is taken seriously.

Hi all,

I’ve also had these concerns lately, so I ended up reading some articles on the topic. I found the following ones particularly helpful:

I’m afraid the future is still uncertain…but it’s not looking too bad for us.

P.S. This is my first post in this community. Yay for me!

Everything I read cautions about the potential for flawed models and algorithms and the need for intense testing of ML, so that’s another reason I think MORE not FEWER jobs.

This is a topic I am really interested in and am reading up on in my spare time (hoping to dedicate more time to it in the future).

Whilst I don’t think ANI (Narrow AI) will take our jobs, it is certainly going to change them:

  • Traditional testing approaches such as test cases are not going to work when you are working with systems that are non-deterministic by design (see the sketch after this list).
  • This will mean more emphasis on exploratory testing techniques, along with what I believe is the future of Automation in Testing: automated tools to support testing activities (data and state management, information analysis, etc.).
  • Testing the idea is going to become more essential, because the goals set for AIs need to be thoroughly thought through before implementation; it will become harder and harder to modify AIs as they control more and more of their own development (which has interesting impacts on CD).
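For the first point, a check against a non-deterministic component might assert on a statistical property over many runs instead of on an exact output. A minimal, entirely made-up sketch:

```python
# Sketch: a check for a non-deterministic component asserts on statistics
# over many runs rather than on a single exact value. The component here is
# a stand-in; a real one might be a model prediction or a ranking.
import random

def recommend_discount(customer_score: float) -> float:
    """Stand-in for a non-deterministic model: base rate plus noise."""
    return max(0.0, min(0.5, 0.1 * customer_score + random.gauss(0, 0.02)))

def check_discount_stays_in_policy_bounds():
    samples = [recommend_discount(2.0) for _ in range(1000)]
    mean = sum(samples) / len(samples)
    # Assert on the distribution, not on any single value.
    assert all(0.0 <= s <= 0.5 for s in samples)
    assert 0.15 <= mean <= 0.25, f"mean discount drifted: {mean:.3f}"

check_discount_stays_in_policy_bounds()
print("Distribution-based check passed.")
```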

When AI is truly intelligent, to the point that it can emulate the heuristic emotional learning that humans do, then it may be a threat to testing jobs. - @kinofrost

Honestly, if AI or AGI (General AI) becomes equal to human intelligence we will have wider issues than our employment.

Whilst I do feel very tinfoil-hat when I talk about these changes, I don’t see anything to the contrary. AI safety in academia has grown massively and is focused on getting the goals of AI right (testing ideas), and the talks I have seen and conversations I’ve had with those actively testing AI are very ET-based, with a reliance on automation to manage controlling and assessing the AI.

- Mark

Here are two resources that brush hard against my thoughts on the current implementations of AI and ML:


Agreed on the need for more jobs, not fewer. There is a definite need for more specialised testers who are able to verify, validate and evaluate the models and algorithms being produced.

This area of testing is very much in its infancy, with little understanding or support out there for those doing the role. For those in my team tackling this problem, it is as much a research task as it is an engineering one, requiring a solid mathematical base to be able to understand, and then devise ways to build confidence in, what is being produced.

It is both a terrifying and exciting experience to be working in a domain where even the experts in the field are unsure how we build confidence that what is being developed is ‘correct’.

Note: I am not talking here about the simpler classification problems that are usually given as examples of AI (is this a picture of a cat?), but the more complex RL or probabilistic modeling examples, where “did it do the correct thing?” is not a binary question.

Over the past decade, technologies have evolved drastically and there have been many changes in the technology space, but one thing has remained constant: human testers interacting with those technologies and using them for our needs. The same holds true for AI. Secondly, to train the AI we need good input/output combinations (which we call a training dataset). So, to work with modern software, we need to choose this training dataset carefully, as the AI starts learning from it and building relationships based on what we give it. It is also important to monitor how the AI learns as we give it different training datasets. This is going to be vital to how the software is tested as well. We would still need human involvement in training the AI.
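One small sketch of that monitoring idea: keep a fixed, human-curated evaluation set and watch how a model trained on each new version of the training dataset performs against it. Everything below (data, features, the two dataset versions) is invented purely for illustration.

```python
# Sketch: hold the evaluation set fixed and track model quality as the
# training dataset changes between versions. All data here is invented.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Fixed evaluation set, curated and reviewed by humans.
X_eval = [[1, 0], [0, 1], [1, 1], [0, 0]]
y_eval = [1, 0, 1, 0]

dataset_versions = {
    "v1": ([[1, 0], [0, 1], [1, 1], [0, 0]] * 5, [1, 0, 1, 0] * 5),
    "v2": ([[1, 0], [0, 1], [1, 1], [0, 0]] * 5, [1, 0, 0, 0] * 5),  # some labels wrong
}

for name, (X_train, y_train) in dataset_versions.items():
    model = LogisticRegression().fit(X_train, y_train)
    score = accuracy_score(y_eval, model.predict(X_eval))
    print(f"training set {name}: eval accuracy = {score:.2f}")
```

A drop in the score when moving between dataset versions is a prompt for a human to go and look at what changed in the data.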

Finally, it is important to ensure while working with AI the security, privacy and ethical aspects of the software are not compromised. All these factors contribute to better testability of the software. We need humans for this too.

So, contrary to popular belief, the outlook is not all “doom and gloom”; being a real, live human does have its advantages.

For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the ‘look and feel’ of an on-screen component is ‘off’ or wrong. Complete replacement of manual testers will only happen when AI exceeds those unique qualities of human intellect. There are a myriad of areas that will require in-depth testing to ensure safety, security, and accuracy of all the data-driven technology and apps being created on a daily basis. In this regard, utilizing AI for software testing is still in its infancy with the potential for monumental impact.

Greetings again, Everyone!

I had a recent AI experience I wanted to share that, I believe, highlights some of the thoughts above.

I have been evaluating cloud-based automation frameworks. One vendor claimed that their AI can detect and correct simple UI element changes, such as a change to an element’s location or size. During our first meeting, they demonstrated this feature. They set up a script to evaluate a web page, changed the location of an element on that page, and executed the script. Their AI detected and corrected for the change; no one had to update the script, and the script completed successfully.

I have no doubt that the vendor will continue to grow the capability from this simple example. Indeed, I think it has promise. What struck me was that their framework provided only the smallest hint that the script had needed a change in order to complete.
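For contrast, here is a rough sketch (plain Python with a fake page model, not any real vendor or Selenium API) of a self-healing lookup that makes every correction loudly visible instead of burying it:

```python
# Sketch of a "self-healing" element lookup that records every correction it
# makes so a human can review and accept it. The page model and locators are
# stand-ins invented for illustration.

healing_log = []

def find_element(page: dict, preferred_locator: str, fallback_locators: list):
    """Try the preferred locator first; fall back and log the 'healing'."""
    if preferred_locator in page:
        return page[preferred_locator]
    for candidate in fallback_locators:
        if candidate in page:
            healing_log.append(
                f"'{preferred_locator}' not found; used '{candidate}' instead"
            )
            return page[candidate]
    raise LookupError(f"No locator matched: {preferred_locator}")

# A page where developers have renamed the submit button's id.
page = {"btn-submit-v2": "<button>Submit</button>"}

element = find_element(page, "btn-submit", ["btn-submit-v2", "submit"])
print(element)
print("Healing report:", healing_log)  # this is the part I want surfaced, loudly
```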

Perhaps it’s me. I’m not ready to accept the conclusion of a feature (driven by AI, by ML, or by some other technology) that detects an anomaly AND makes a correction. Just as AI/ML requires “training” and can adapt as it is used more, I too must be able to grow my trust in the decisions and changes made by such a feature.

Is it a fear of taking my job? I don’t think so. When we hire new people who are trained in a newer technology, I embrace their contributions. However, there is some period of time that we limit their responsibility while providing opportunities to grow in the technology and the domain. Once they demonstrate their capabilities, responsibility and trust follow. AI/ML deserves/requires the same and perhaps a little more of that type of scrutiny.

Joe

In our company we use only AI for testing. It works well.