The possibilities of AI

I recently heard on the radio that counselling/psychology could be done with AI in the future. When I went Googling, it turns out that this idea is not so recent. The Chatbot Will See You Now talks about this idea being used since 2014.

It got me thinking: what other possibilities are there for AI to help in such a manner? It also got me wondering what the implications for testing might be.

During one rather interesting conversation of Pub Driven Development, I found myself talking about this with a fellow tester: recording what a tester does, abstracting some form of pattern, and then implementing that as the automation. Basically, removing the automation part from the test role and making it almost purely exploratory. This feels closer to some form of nurturing, akin to owning a pet that you let grow as you feed it test journeys. Though whether it could then start to propose new journeys that you’ve not done would be the interesting part…


Along the same lines as Will, you will be able to train the AI to use whatever it is you’re testing in the way a user would, and in the way a tester would, and then it can use that to learn how to test the tool and perform exploratory testing itself. There will still be areas where a human tester provides value that the AI cannot, but a lot of the manual testing of the tool can be handed over to the AI.
Even before that, some AI could be trained to find inconsistencies or issues in requirements, provide estimates for how long it may take to test pieces of work.
Plus probably a hundred things that I can’t even imagine yet.


One of the biggest worries for me with “counselling AI”, and to an extent any AI, is the issue of ethics. I’ve seen mental health organisations and charities advertise absolutely awful “advice” on their websites. What would a counselling bot say?

To be able to test that its responses are in line with what we want it to say, we first need to agree on what it should say. This is not only a requirements issue; it’s an issue of ethics, psychology, possibly even religion and spirituality, to name a few.

These are extremely complex, age-old topics that we haven’t been able to agree on so far, so I don’t think AI would help with that.


I literally just put down my thoughts on the subject of AIs, so here’s the link:

I think our concerns about how our jobs will change, or disappear, are understandable but ultimately not worth the effort. AIs will be so transformative that the nature of human endeavour will be completely altered.

Ethically, I think AIs will be superior to us, due to their lack of an instinctive lizard brain. As beings of pure intellect, they might be cold and distant, but are unlikely to be homicidal.

Most careers already underway will change for the better, and then work itself, as the primary human use of time, will end.

Can’t wait.

Last year, while working on the test automation framework Selebot, I got the idea: what if we could build a robot that does the testing? This idea excited me; as of now it’s hard to implement, but it would definitely change the way we test. Currently, I consider test automation to be a kind of robot that tests the scenarios we define. So what if we could improve test automation so that software automatically writes the test cases by learning our testing patterns (tracked in some way), or smartly tracks our activity during manual testing and generates the scripts?
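To make the “track activity, generate scripts” idea concrete, here is a minimal sketch. All names (`events`, `generate_script`, the `driver` calls) are illustrative assumptions, not Selebot’s actual API: it records manual-testing events as simple tuples and emits a replayable script from them.

```python
# Hypothetical event log captured while a tester works through the app.
# Each tuple is (action, target[, value]) — names are made up for this sketch.
events = [
    ("click", "#login"),
    ("type", "#username", "alice"),
    ("click", "#submit"),
]

def generate_script(events):
    """Turn a recorded event log into source for a replay function."""
    lines = ["def replay(driver):"]
    for event in events:
        if event[0] == "click":
            lines.append(f'    driver.click("{event[1]}")')
        elif event[0] == "type":
            lines.append(f'    driver.type("{event[1]}", "{event[2]}")')
    return "\n".join(lines)

script = generate_script(events)
print(script)
```

The interesting (and hard) part, as the post says, is the learning step: spotting repeated patterns across many such logs rather than just replaying one of them verbatim.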

So basically I’m pretty excited to see AI in Testing.


A blog post I just saw come up on the Ministry of Testing Slack channel about AI and testing:

I built a really simple prototype of something we were looking at doing at my previous place of employment. Instead of hardcoding tests, it was more of an autonomous bot. It would essentially do the following:

  • Where am I?
  • Select randomly from a weighted list of potential actions.
  • Did it do what I expected?
  • Where am I?

Very simple logic, and it worked pretty well, except that it was web UI based and the inherent flakiness of automating at the UI level kept biting me. It’d run for 15 minutes before a significant error was thrown; 40 minutes on a good day. Given that we just wanted it to run in the background unattended, this wasn’t going to be a useful tool long term without first cleaning up those errors.

Could be an interesting approach at an API level or lower when operating with state. I plan to explore it further if I ever get enough time.

Garry Kasparov gave an interesting, if high level, talk at Def Con 25 last week.

AI in testing essentially boils down to the usual “testing vs. checking” discussion:

If you regard testing as a sophisticated intellectual task, it will require strong AI to automate it. But if you get an AI that strong, you might as well automate the programming directly…

On the other hand, “checking” is a trivial task whose repetitiveness is boring and exhausting; it insults the intelligence and abilities of any decent tester and keeps them from their actual work. Worse yet, testing the same functionality over and over again makes testers routine-blind and makes them lose their ability to question assumptions and spot potential improvements. This task should already be automated with current tools.

So tools that apply AI to testing (like ReTest) should try to automate the automation, leaving humans with only the challenging and interesting task of real testing.


Recent blog post from @davewesterveld about how testers should use AI :smile: