🤖 Day 3: List ways in which AI is used in testing

So I’m going to repeat what I mentioned in Day 2 about building an understanding of the problem first.

I’m a big fan of The Glass Cage by Nicholas Carr and I’ve always liked the idea of algorithmic versus heuristic-based activities. A heuristic-based activity is something creative and hard to define. For example, capturing the emotion of a beautiful skyline in a painting. Whereas algorithmic activities are more distinct in their actions. For example, making a cup of coffee.

I like this way of thinking because it connects to other ways of sense making, such as Cynefin, which also distinguishes between complex and clear problems. Heuristic activities work well in the complex space, whereas algorithmic activities work well in the clear space.

So what does this have to do with AI? I think it’s important because it connects to the discourse around what AI can and cannot do. In the Large Language Model world there is an attitude that because an LLM generates things, it’s equivalent to heuristic problem solving. Whilst I think it can help in that space, LLMs are more effective with algorithmic problems. My reasoning is that algorithmic problems are based on known knowledge, which is what LLMs are trained upon, meaning they are better tuned to generating outputs around what is already known than to wholly heuristic-driven work.

To bring this back to today’s question: I think AI works best in the places where algorithmic activities occur in testing, such as:

  • Generating boilerplate classes and objects for automation
  • Producing production code based on provided unit tests
  • Creating new data sets based on formalised data structures (see the sketch after this list)
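
To make that last bullet a bit more concrete, here’s a minimal sketch of the kind of task I mean. The `Order` structure, its fields, and the `generate_orders` helper are all hypothetical examples of mine, not something from a real project: the point is that the structure is fully defined, so producing a data set from it is an algorithmic job that an LLM (or plain code) can do comfortably.

```python
from dataclasses import dataclass, asdict
import json
import random


# A hypothetical "formalised data structure" -- the kind of thing you
# could comfortably define and explain to another person (or an LLM).
@dataclass
class Order:
    order_id: int
    customer_email: str
    total_pence: int
    status: str  # one of: "pending", "paid", "refunded"


def generate_orders(count: int, seed: int = 42) -> list[Order]:
    """Produce a repeatable batch of test orders from the structure above."""
    rng = random.Random(seed)  # fixed seed keeps the data set deterministic
    statuses = ["pending", "paid", "refunded"]
    return [
        Order(
            order_id=1000 + i,
            customer_email=f"user{i}@example.com",
            total_pence=rng.randint(100, 50_000),
            status=rng.choice(statuses),
        )
        for i in range(count)
    ]


if __name__ == "__main__":
    # Dump a small data set that a test suite could load as a fixture.
    print(json.dumps([asdict(o) for o in generate_orders(3)], indent=2))
```

Everything in that sketch is known, bounded and explainable, which is exactly why it sits on the algorithmic side of the line.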

In a nutshell, if it’s something that you can comfortably define and explain to another person, then AI is more likely to be effective than in a situation where you can’t define and explain the problem.
