I’ve been thinking about AI-based test case classification and wondering where it genuinely adds value.
For example:
• Can AI classify tests based on business risk?
• Can it detect traceability gaps between requirements and validation?
• Can it highlight over-tested vs under-tested areas?
• Can it surface redundant or low-impact test cases?
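To make the last point concrete, here's a minimal sketch of one non-AI baseline people often start from: flagging potentially redundant test cases by lexical overlap of their descriptions. The test IDs, descriptions, and threshold are all invented for illustration; a real system would likely use embeddings or an LLM instead of word-set similarity.

```python
# Hypothetical sketch: flag potentially redundant test cases by comparing
# their descriptions with Jaccard similarity over word sets.
# All test IDs, descriptions, and the 0.6 threshold are illustrative.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two descriptions, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def find_redundant(tests: dict, threshold: float = 0.6):
    """Return pairs of test IDs whose descriptions overlap heavily."""
    ids = sorted(tests)
    return [
        (x, y)
        for i, x in enumerate(ids)
        for y in ids[i + 1:]
        if jaccard(tests[x], tests[y]) >= threshold
    ]

tests = {
    "TC-101": "verify login with valid credentials succeeds",
    "TC-102": "verify login with valid user credentials succeeds",
    "TC-203": "export monthly report as PDF",
}
print(find_redundant(tests))  # → [('TC-101', 'TC-102')]
```

Even this crude baseline surfaces near-duplicates; the open question is how much an AI model adds on top, e.g. catching tests that are semantically redundant but worded differently.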
In complex or regulated systems, classification isn't just tidy labeling; it's about knowing what really matters before release.
Is anyone using AI this way in automated testing? What’s working? What’s not?
Would love to hear real experiences.