Can we collate a set of heuristics for AI in Testing? One heuristic might be "Trained or not?": when relying on GPT, how do we check what data it was trained on and how reliable that data is? How does that impact our test ideas, approach, and strategy?
Much like the classic Test Heuristics Cheat Sheet, what opportunity do we have to create an AI in Testing Heuristics Cheat Sheet?
In manual testing, AI-assisted tools can help by generating test scripts from the requirements, or even by analyzing the UI directly.
A practical way to measure the quality of these AI-generated tests is by using standard metrics such as the failure rate or the number of defects found.
These metrics can serve as valuable heuristics to assess the effectiveness of the AI tools.
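As a minimal sketch of the metrics idea above (the data structure and test results here are hypothetical, purely for illustration), the failure rate and defect count of an AI-generated suite could be computed like this:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    defects_found: int  # defects attributed to this test


def failure_rate(results):
    """Fraction of tests that failed (0.0 for an empty suite)."""
    if not results:
        return 0.0
    return sum(1 for r in results if not r.passed) / len(results)


def total_defects(results):
    """Total defects surfaced across the suite."""
    return sum(r.defects_found for r in results)


# Hypothetical results from an AI-generated test suite
results = [
    TestResult("login_valid", True, 0),
    TestResult("login_invalid", False, 1),
    TestResult("checkout_empty_cart", False, 2),
    TestResult("search_basic", True, 0),
]

print(failure_rate(results))   # 0.5
print(total_defects(results))  # 3
```

Tracked over time, these numbers give a rough heuristic signal: a suite whose failure rate stays near zero while known defects slip through may be generating shallow tests.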