If AI built a tester, what would it look like?

It’s inevitable that AI will build software testers to test their fellow AI colleagues. (I joke yet kinda don’t joke)

So, just for fun, if AI built a software tester, what would it look like?

Feel free to use whatever drawing tool you like to share your AI-inspired artist impression. :smiley:

4 Likes

Based on what I’ve seen AI do when drawing humans, it would give us far too many fingers and teeth!

5 Likes

Things I suspect AI might be able to do better than we expect:

  1. Consume requirements, designs and documentation and use them to write test cases
  2. Consider generic, high-level quality aspects, such as performance in terms of response time in milliseconds
  3. Write plans with justifications for its choices, although maybe not accurate ones
  4. Apply techniques like fuzzing, chaos testing and click-monkey testing to trigger errors (see the sketch after this list)
  5. Raise bug reports when it finds non-compliance
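
To make item 4 concrete, here is a minimal sketch of naive fuzzing in Python. The function under test, `parse_quantity`, is a made-up stand-in, and real fuzzers (coverage-guided ones especially) are far more sophisticated than this:

```python
import random
import string

def parse_quantity(text):
    # Hypothetical stand-in for the real function under test:
    # expects "name,count" and returns the count as an int.
    name, count = text.split(",")
    return int(count)

def random_text(max_len=100):
    # Random printable strings, length 0..max_len, as naive fuzz inputs.
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(iterations=1000):
    # Hammer the target with random inputs; any unhandled exception is a finding.
    findings = []
    for _ in range(iterations):
        candidate = random_text()
        try:
            parse_quantity(candidate)
        except Exception as exc:
            findings.append((candidate, repr(exc)))
    return findings

if __name__ == "__main__":
    for candidate, error in fuzz()[:5]:
        print(f"input {candidate!r} raised {error}")
```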

Things I think AI would struggle to do well, that humans are good at:

  1. Identify risks that require context: how the feature sits within the wider product, and its integrations across teams and external sources
  2. Identify flaws in the current team processes, and suggest improvements
  3. Build relationships with human team members, identifying risks by understanding human concerns
  4. Identify risks to emotional response, goodness or value
  5. Support quality improvements through team building and knowledge sharing

2 Likes

Some AI drawing tools to try:

hotpot.ai
creator.nightcafe.studio
dream.ai
pixray
craiyon
1 Like

Probably something like Mister Meeseeks

A tester designed by AI would be the smartest stupid tester :sweat_smile: who can do the basic work: lots of automation, lots of test cases, avoiding manual tasks, optimising time… What’s missing here is the human aspect! Even the creativity of robots will be limited, with no outside-the-box thinking. I can’t imagine robots collaborating with each other, or coaching each other through workshops to investigate issues and highlight all the kinds of risks that could occur…
So I’ll say it again: human skills are the ones that make the difference with those around you.

1 Like

I’m going to pick this apart a little. I think maybe AI testers are no bad thing.

The problem with all these ML components is ‘it’ll work well, until it doesn’t’ (which ironically is also Testing 101).

The idea of an AI tester which covers the basics sounds great. Although, shouldn’t that already be covered by automation?
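
The “basics” are exactly what conventional automation already covers cheaply. A minimal sketch with pytest and requests, where the endpoint, payload and credentials are all invented for illustration:

```python
import requests

BASE_URL = "https://example.test/api"  # hypothetical service under test

def test_login_returns_token():
    # Known input, known expected output: the kind of basic check
    # plain automation handles today, no AI tester required.
    response = requests.post(
        f"{BASE_URL}/login",
        json={"user": "alice", "password": "correct-horse"},
        timeout=5,
    )
    assert response.status_code == 200
    assert "token" in response.json()
```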

Here’s the issue though. Already with ML, it’s hard for people to get a justification for an output. ‘Just trust the algorithm’.

We’ve all worked on pieces where there was a high-level design, business analysis was missing, and work was delivered ad hoc… “but it’s really important we document the testing for auditing”

How do we see what the AI tested? How does it debrief? How (when we find something was missed) does it reflect and alter its approach in future?
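
For the “what did it test” question, you could at least imagine the AI tester writing a structured, append-only log that a human reads back in a debrief. A rough sketch, with every field name invented; note it records what was run, not how the approach should change next time:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TestRecord:
    # One auditable entry per check the AI tester ran (fields are illustrative).
    target: str
    inputs: str
    expectation: str
    outcome: str
    rationale: str  # the justification we would want to interrogate later
    timestamp: str

def log_record(record, path="ai_test_audit.jsonl"):
    # Append-only JSON Lines log a human can read back during a debrief.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_record(TestRecord(
    target="checkout.apply_discount",
    inputs="basket of 3 items, code 'SAVE10'",
    expectation="total reduced by 10%",
    outcome="pass",
    rationale="boundary chosen from the pricing spec",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```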

I don’t think we have magic answers for this.

3 Likes