🤖 Day 19: Experiment with AI for test prioritisation and evaluate the benefits and risks

I’ve been going through a bit of a journey with AI. When I started looking at this in early January, I had great expectations. The more I’ve learnt, the less confident I’ve become. The key here is “bias”. The LLMs we use at the moment have inherent biases that will always exist, no matter how much “training” we provide.

When it comes to prioritisation, the tests we select are context specific, and every context is different. We would need an LLM trained for every context that could possibly arise, before it arises. The LLM would also need to understand the current market trends for your product, the financial needs of your company, the desires of your marketing department to future-proof your product, and the skills of your team and testers, as well as intrinsically knowing all of the tests you have. These factors are all considered, consciously or unconsciously, whenever we assess which tests we need to run.
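To make that concrete, here is a minimal, hypothetical sketch (in Python, with invented test names, factors, and weights) of the kind of context-specific weighting a prioritisation decision implicitly performs. The point is that every rating and weight below is a human judgement about your specific context; none of it is something a generically trained LLM can know:

```python
# Hypothetical sketch: the context factors above made explicit as a
# weighted score. Test names, factor names, and weights are all
# invented for illustration -- every team would choose different ones.

def priority_score(test, weights):
    """Combine context-specific factor ratings (0-1) into one score."""
    return sum(weights[factor] * rating
               for factor, rating in test["factors"].items())

# Each rating here is a human judgement, not a model output.
tests = [
    {"name": "checkout_happy_path",
     "factors": {"market_risk": 0.9, "financial_impact": 0.8, "team_skill_fit": 0.7}},
    {"name": "legacy_report_export",
     "factors": {"market_risk": 0.2, "financial_impact": 0.4, "team_skill_fit": 0.9}},
]
weights = {"market_risk": 0.5, "financial_impact": 0.3, "team_skill_fit": 0.2}

ranked = sorted(tests, key=lambda t: priority_score(t, weights), reverse=True)
print([t["name"] for t in ranked])
# -> ['checkout_happy_path', 'legacy_report_export']
```

Even in this toy form, changing one weight reorders the list, which is exactly why a one-size-fits-all model struggles here.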

I would argue that for an activity such as “prioritisation”, using an LLM could at best provide some guidelines, but at worst could mean you run the wrong tests and set things further back. Personally, I would not use an LLM to prioritise any work at the moment.
