Will AI/ML take my job?

(Lisa) #1

I’ve been slowly learning about AI and ML over the past year. I’m currently reading the book Real-World Machine Learning. I’ve heard people (including my own teammates when I first joined mabl) say that ML-driven test automation tools would replace all the testers. I’ve of course heard things like that (“Test automation tools will replace all the testers”) for decades.

Now that I know more about ML, I see that, like test automation, it can free up our time for more valuable things than crunching data, and it also opens new opportunities. “Feature engineering”, where you improve the data used to create the ML model, looks super fascinating and, like exploratory testing, requires creativity, critical thinking, and imagination. It looks like something testers would be super good at, though of course it requires learning new skills.
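
For anyone who hasn’t seen it in practice, here’s a rough, made-up sketch of what feature engineering can look like with pandas; the column names and values are invented purely for illustration:

```python
# A tiny, hypothetical example of feature engineering with pandas:
# turning raw columns into features a model can learn from more easily.
import pandas as pd

# Raw data as it might arrive from a production system (made-up values).
raw = pd.DataFrame({
    "signup_date": pd.to_datetime(["2019-01-03", "2019-02-14", "2019-03-01"]),
    "last_login":  pd.to_datetime(["2019-03-01", "2019-03-02", "2019-03-02"]),
    "purchases": [12, 0, 3],
    "support_tickets": [1, 4, 0],
})

features = pd.DataFrame({
    # Tenure is often more predictive than raw dates.
    "days_since_signup": (raw["last_login"] - raw["signup_date"]).dt.days,
    # Ratios can expose behaviour that absolute counts hide.
    "tickets_per_purchase": raw["support_tickets"] / raw["purchases"].clip(lower=1),
    # A simple boolean flag derived from domain knowledge.
    "is_active_buyer": raw["purchases"] > 0,
})
print(features)
```

Deciding which of those derived columns actually help the model is where the creativity and critical thinking come in.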

A lot of you probably know a lot more about AI and ML, what opportunities do you see? Or do you think it WILL take our jobs?

(Joe) #2

Hello @lisa.crispin!

I’ve been intrigued with the idea of using AI/ML for testing!

The Opportunity
I strongly suspect that, even at the current level of the technology, I could turn loose an AI/ML-driven bot on one of our websites and it would determine most of the workflows in less than a day. With that information as a baseline, I would expect to run it daily and have it report the differences it found (differences due to deployments, data diversity, new errors). I believe we would still need someone to make sense of the information that is gathered, and to determine how to make it useful as a testing tool.
This is but a rudimentary exploration. As the bot learns more, and as the technology matures, I would expect more from it.
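
To make the “baseline and diff” idea a bit more concrete, here is a purely hypothetical sketch; the workflow representation and data are invented, and no existing tool is implied:

```python
# A purely hypothetical sketch of the "baseline and diff" idea above:
# assume the bot emits each discovered workflow as an ordered tuple of pages.
def diff_workflows(baseline: set, today: set) -> dict:
    """Report workflows that appeared or disappeared since the baseline run."""
    return {
        "new": sorted(today - baseline),       # possibly new features, or new bugs
        "missing": sorted(baseline - today),   # possibly removed features, or regressions
    }

baseline = {("home", "login", "dashboard"), ("home", "search", "results")}
today = {("home", "login", "dashboard"), ("home", "search", "results", "detail")}

# A human still has to decide which differences matter and why.
print(diff_workflows(baseline, today))
```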

Take Our Jobs
Maybe. Change our jobs? Certainly. I moved from development to testing a while back. At that time, automation was all the rage (the STAR East conference that year had everyone thinking “automate everything”). There have been painful lessons for those who only answered the “Can I?” question and ended up with so much automation that they need people just to maintain it. Those who patiently answered the second question, “Should I?”, are probably benefiting from smaller, isolated tests and frequent deployments. That’s why the opportunity above may seem benign, but I’m more curious to see how it grows, and to provide feedback to the community so that we (WE!) can direct it to benefit those who need the expertise of our testing services.

Joe

(Lisa) #3

Hi Joe,
Do you know of any tools right now that can determine workflows on their own? I see huge potential for ML here, but I haven’t seen it in action. I see it mostly used for visual checking and performance checking.

I like what you say about changing our jobs and “should” rather than “can”. Great way to look at these questions. I think we have an opportunity to drive development of test tools that use ML to do what we want them to do for us.
– Lisa

(Alan) #4

I think Eggplant markets itself towards “user journey AI”. I am not overly concerned, because the demand for software is greater than humans can supply. Supervised learning opportunities, where AI/ML can learn a function on its own, are the most likely to make a huge impact: for example, automating credit decisions, classifying photos, etc.

(Joe) #5

Hello Lisa!

Thanks!

I don’t know of any specific tools but will continue to search. I had the impression that testim and mabl might do this but have not started to explore them. Above, @alanmbarr mentions Eggplant might do what I described.

Joe

(Dimitris) #6

What about the jobs of those who test the AI applications?
Others here mentioned the decades-old “automated testing will take the job of testers” topic.
Just like any other type of technology, it can be useful and improve overall efficiency, meaning that fewer people might be needed, but anyone who’s worked with AI right now will tell you it’s a long, long way from completely taking over such complex tasks.
Michael Bolton has posted a number of Slack messages & tweets about this :grinning:

(Robert) #7

Technology doesn’t take away your job. Managers take away your job.

If your skills are such that there’s a risk that a manager will decide that you can be replaced by an AI system, then there are two alternatives:

  1. Educate the manager about the areas where the AI is inferior to the human system; or

  2. Refocus your own skill set to areas that AI systems don’t cover.

Different managers will be amenable to different approaches. (And of course, middle managers are perhaps as much at risk from expert systems and changing work practices as any IT professional.)

(Chris) #8

AI isn’t intelligent. ML doesn’t really “learn”. I always say that you should treat “Artificial Intelligence” with the same emotion as “People’s Democratic Republic Of”.

When AI is truly intelligent, to the point that it can emulate the heuristic emotional learning that humans do, then it may be a threat to testing jobs. When AI can threaten testing jobs it can also threaten coding and management jobs.

As it stands, the testing job has been, and continues to be, under threat from poor management decisions, offshoring, replacement with test cases, and poorly considered broad automation; and, I believe, we threaten the future of good testing most of all with poor testing, poor testers, and poor process implementation. I think that this is a vastly bigger threat than AI and will continue to be so for the foreseeable future.

Concerning testing ML: machine learning is incredibly broad, but it seems to often be used to create algorithms with vast social impact. The YouTube algorithms that suggest videos and the algorithms that select advertising are incredibly powerful. If the algorithm decides that it should show a politically right-leaning person right-wing videos that become more and more extreme, this could easily lead people to be radicalised. YouTube has, at times, shown children innocent-looking videos with extreme, unsuitable content. An advertising algorithm may detect or predict when a bipolar person is having a manic episode and advertise gambling to them. Large companies may use learning algorithms to detect and deal with bad press using automated astroturfing. This feels, to me, like a good place for testers to have an impact, by illustrating the possible negative impacts of these algorithms in ways that businesses care about.

Concerning using ML to test: this is also a broad subject, but it could be used, for example, to provide predictive alerts by “learning” elements of system states and associating them with failure types. The system would be able to give the tester a breakdown of the states commonly associated with a failure, to help them identify the cause of a problem. This is a great observability tool that improves the testability of the system. ML is also already being used to automatically repair automated check suites: by examining failures caused by element names that have changed, a tool can update the checks automatically.
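
As a rough illustration of the predictive-alert idea, here is a minimal sketch, assuming we already had labelled snapshots of system state and the failure type that followed each one; the features, values, and model choice are all invented for illustration:

```python
# A minimal sketch of "predictive alerts": learn which system states tend to
# precede which failure types. All feature names and data here are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [cpu_load, queue_depth, error_rate_per_min]
states = [
    [0.95, 120, 30],
    [0.40,  10,  1],
    [0.85, 200,  5],
    [0.30,   5,  0],
]
failure_types = ["timeout", "none", "queue_overflow", "none"]

model = DecisionTreeClassifier().fit(states, failure_types)

# Given the current state, suggest the most likely failure type so a tester
# can start investigating before (or while) it happens.
print(model.predict([[0.90, 150, 10]]))
```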

If we’re not going to screw this up, then I’d recommend that we ask important questions of our ML solutions, including how our biases in the implementation of the solution will show up in the results. Remember when BDD came along and people were lured by the idea that the “human-readable” side really was what the software was doing? When people read “Valid Login: Passed” and assumed that the login had been suitably tested, despite the fact that the human-readable code was parsed and the meaning abstracted away? That’s exactly what’s going to happen with ML solutions: we’ll find “insight” into our data that betrays whatever we wanted to believe or whatever we want to prove, because it won’t be implemented by scientists or mathematicians. We are inventing more and more powerful ways of lying to ourselves and others whilst suppressing the depressingly costly and uncomfortable business of critical thinking. The biggest opportunity I see here, for testers, is one of personal responsibility. For businesses, I see amazing opportunities to pad the bottom line and save on costs at the expense of employees and end users, with software that makes impactful decisions with no human interaction. My money’s on the money, I must say.

(Lisa) #9

Everything I read cautions about the potential for flawed models and algorithms and the need for intense testing of ML, so that’s another reason I think there will be MORE, not FEWER, jobs.

(Juan) #10

I only see a new technology that we could “eventually” use in testing. Testing is not about tools, it is mostly about thinking.

“AI” is a “big” word that DOESN’T mean anything at all… it’s used by marketing people to make products “cooler”. I think we are getting closer to an “AI big crash” before the AI field is taken seriously.

(Andrea Picciau) #11

Hi all,

I’ve also had these concerns lately, so I ended up reading some articles on the topic. I found the following ones particularly helpful:

I’m afraid the future is still uncertain…but it’s not looking too bad for us.

P.S. This is my first post in this community. Yay for me!

(Mark Winteringham) #12

Everything I read cautions about the potential for flawed models and algorithms and the need for intense testing of ML, so that’s another reason I think there will be MORE, not FEWER, jobs. - @lisa.crispin

This is a topic I am really interested in and am reading up on in my spare time (hoping to dedicate more time to it in the future).

Whilst I don’t think ANI (narrow AI) will take our jobs, it is certainly going to change them:

  • Traditional testing approaches such as test cases are not going to work when you are working with systems that are non-deterministic by design (see the sketch after this list for the kind of check that might replace them).
  • This will mean more emphasis on exploratory testing techniques, along with what I believe is the future of Automation in Testing: automated tools to support testing activities (data and state management, information analysis, etc.).
  • Testing the idea is going to become more essential: the goals we set for AIs need to be thoroughly thought through before implementation, because it will become harder and harder to modify AIs as they control more and more of their own development (which has interesting impacts on CD).
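
To illustrate what checking a non-deterministic system might look like without fixed expected outputs, here is a small hypothetical sketch; the recommend function, catalogue, and invariants are all invented for illustration:

```python
# A hypothetical sketch of checking properties instead of exact outputs,
# for a system whose results legitimately vary between runs
# (e.g. a recommendation model). Names and data are invented.
import random

CATALOGUE = {"book-1", "book-2", "book-3", "film-1", "film-2"}

def recommend(user_id: str, n: int = 3) -> list:
    """Stand-in for a non-deterministic model call."""
    return random.sample(sorted(CATALOGUE), n)

def check_recommendations(user_id: str) -> None:
    recs = recommend(user_id)
    # We cannot assert *which* items come back, but we can assert invariants.
    assert len(recs) == len(set(recs)), "no duplicate recommendations"
    assert set(recs) <= CATALOGUE, "only real catalogue items are recommended"
    assert 0 < len(recs) <= 3, "a sensible number of items is returned"

for i in range(100):   # repeat, because a single passing run proves little
    check_recommendations(f"user-{i}")
```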

When AI is truly intelligent, to the point that it can emulate the heuristic emotional learning that humans do, then it may be a threat to testing jobs. - @kinofrost

Honestly, if AI or AGI (General AI) becomes equal to human intelligence we will have wider issues than our employment.

Whilst I do feel very tinfoil-hat when I talk about these changes, I don’t see anything to the contrary. AI safety in academia has grown massively and is focused on getting the goals of AI right (testing ideas), and the talks I have seen and conversations with those actively testing AI are very ET-based, with a reliance on automation to manage controlling and assessing the AI.

- Mark

(Chris) #13

Here are two resources that brush hard against my thoughts on the current implementations of AI and ML:


(Jack) #14

Agreed on the need for more jobs, not fewer. There is a definite need for more specialised testers able to verify, validate, and evaluate the models and algorithms being produced.

This area of testing is very much in its infancy, with little understanding or support out there for those doing the role. For those in my team tackling this problem, it is as much a research task as it is an engineering one, requiring a solid mathematical base to understand, and then devise ways to build confidence in, what is being produced.

It is both a terrifying and exciting experience to be working in a domain where even the experts in the field are unsure how we build confidence that what is being developed is ‘correct’.

Note: I am not talking here about the simpler classification problems that are usually given as examples of AI (is this a picture of a cat?), but about the more complex RL or probabilistic modeling examples, where “did it do the correct thing?” is not a binary question.
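
To make that concrete, here is a minimal, invented sketch of what a check can look like when correctness is statistical rather than binary; the run_trial stand-in, the trial count, and the 85% threshold are assumptions purely for illustration:

```python
# When the system is stochastic, "correct" becomes a statistical claim,
# e.g. "the success rate exceeds some threshold with reasonable confidence".
# Everything here is invented for illustration.
import random
import statistics

random.seed(0)  # make this sketch reproducible

def run_trial() -> bool:
    """Stand-in for one episode of a stochastic system (e.g. an RL policy)."""
    return random.random() < 0.93  # pretend the true success rate is 93%

def estimate_success_rate(trials: int = 1000):
    outcomes = [run_trial() for _ in range(trials)]
    rate = statistics.mean(outcomes)
    # Normal-approximation 95% interval; crude, but shows the idea.
    margin = 1.96 * (rate * (1 - rate) / trials) ** 0.5
    return rate, margin

rate, margin = estimate_success_rate()
# The "test" is a statistical claim, not a pass/fail assertion on one run.
print(f"estimated success rate: {rate:.3f} ± {margin:.3f}")
assert rate - margin > 0.85, "not confident the success rate exceeds 85%"
```

Choosing the threshold, the number of trials, and what counts as “success” is exactly the research-flavoured work described above.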