Hi everyone,
I’m curious about your views: where should AI support us most in testing? I’ve gathered a few thoughts and would love to hear your experiences:
Automatic Test Case Generation
AI can generate test cases based on requirements, code changes, or user stories. While this promises time savings and broader coverage, the reliability of such test cases is often inconsistent. In my experience it works quite well for common business logic (like online shops), but for complex logic, human input is still hard to replace.
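To illustrate, here is a minimal sketch of the kind of test an AI tool might generate for simple online-shop logic. The `apply_discount` function is a hypothetical example included only so the sketch is self-contained; it is not from any real shop codebase.

```python
# Hypothetical shop logic (assumption, for illustration only).
def apply_discount(total: float, code: str) -> float:
    """10% off with the 'SAVE10' code; any other code changes nothing."""
    if code == "SAVE10":
        return round(total * 0.9, 2)
    return total

# AI-generated-style tests: they cover the common paths well, but edge
# cases (negative totals, stacked codes) still need a human reviewer.
def test_valid_code_applies_discount():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_changes_nothing():
    assert apply_discount(100.0, "XYZ") == 100.0

test_valid_code_applies_discount()
test_unknown_code_changes_nothing()
```

This matches my experience: the happy paths come out fine, while the tricky cases are exactly the ones the generator tends to miss.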
Synthetic Test Data Generation
Generating realistic, GDPR-compliant data is time-consuming. Have you already created such datasets? I tried, but maintaining consistency and traceability was tricky. Also, the output is only as good as the data used to train the AI, as in all other use cases.
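For the consistency and traceability part, a simple sketch (assuming no AI at all, just stdlib `random` with a fixed seed) shows the baseline I was aiming for: entirely synthetic records, no real personal data, and reproducible generation so the same dataset can be recreated later.

```python
import random

# Synthetic, GDPR-safe name pool (made up; no real customer data involved).
FIRST = ["Alex", "Sam", "Kim", "Jo", "Chris"]
LAST = ["Miller", "Schmidt", "Nguyen", "Garcia", "Khan"]

def make_customers(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed: same seed -> identical dataset
    customers = []
    for i in range(n):
        first, last = rng.choice(FIRST), rng.choice(LAST)
        customers.append({
            "id": f"CUST-{i:04d}",  # stable, traceable ID
            "name": f"{first} {last}",  # entirely synthetic
            "email": f"{first.lower()}.{last.lower()}@example.test",
        })
    return customers

data = make_customers(3)
# Consistency check: regenerating yields exactly the same rows.
assert make_customers(3) == data
</n```

AI-based generators can produce far more realistic distributions than this, but in my experience they struggle with exactly the property shown here: regenerating the same dataset on demand.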
Test Automation via Natural Language
Some tools let users write tests in plain English, which the AI translates into executable scripts. IMHO, judging by the output, it’s somewhat similar to screen recording. This lowers the barrier for non-developers but can lead to vague or brittle tests. It might evolve quickly, though, now that vibe coding is a thing.
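A toy sketch of the idea: mapping an English test step to an executable action. Real tools use an LLM for this; the naive keyword matching below (my own simplification, not any tool’s actual implementation) also illustrates why the resulting tests can end up vague or brittle.

```python
# Toy natural-language step interpreter; `page` is a stand-in for browser state.
def run_step(step: str, page: dict) -> dict:
    words = step.lower().split()
    if words[0] == "type" and "into" in words:
        into = words.index("into")
        value = " ".join(words[1:into]).strip('"')  # text to type
        field = " ".join(words[into + 1:])  # target field name
        page[field] = value
    elif words[0] == "click":
        page["last_clicked"] = " ".join(words[1:])
    else:
        # Brittleness in action: any unanticipated phrasing simply fails.
        raise ValueError(f"Don't know how to interpret: {step!r}")
    return page

page: dict = {}
run_step('type "alice" into username', page)
run_step("click login button", page)
assert page == {"username": "alice", "last_clicked": "login button"}
```

Rephrase a step slightly (“press the login button”) and the interpreter breaks, which is the same fragility you see in NL-driven tests, just in miniature.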
Test Prioritization with Risk Prediction
AI can analyze code changes and past bugs to suggest which test cases are most critical. This improves efficiency. However, there’s a risk of overlooking rare but important issues if the model isn’t tuned well. My colleagues and I are currently trying to find a way to fine-tune these models, because this is where we see the most value in our daily testing at the moment.
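The core idea can be sketched without any ML at all: score each test by how strongly it relates to the current change and to past failures, then run the riskiest tests first. The weights below are arbitrary placeholders; tuning them (or replacing the whole scoring function with a trained model) is exactly the hard part mentioned above.

```python
# Minimal risk-based prioritization sketch (not a tuned model).
def risk_score(test: dict, changed_files: set[str]) -> float:
    churn = len(set(test["covers"]) & changed_files)  # covered files touched now
    return churn * 2.0 + test["past_failures"] * 1.0  # weights need tuning

def prioritize(tests: list[dict], changed_files: set[str]) -> list[str]:
    return [t["name"] for t in
            sorted(tests, key=lambda t: risk_score(t, changed_files), reverse=True)]

tests = [
    {"name": "test_checkout", "covers": ["cart.py", "pay.py"], "past_failures": 3},
    {"name": "test_login",    "covers": ["auth.py"],           "past_failures": 0},
    {"name": "test_search",   "covers": ["search.py"],         "past_failures": 1},
]
order = prioritize(tests, changed_files={"pay.py"})
assert order[0] == "test_checkout"  # score 1*2 + 3 = 5, highest risk
```

The downside named above is visible here too: `test_login` always ends up last, so a rare but serious auth regression could slip through until the weighting is tuned.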
What’s your take? Where does AI help you already and where not at all? What’s still missing for broader adoption?
AI tools can also help us with debugging issues.
Now, with the help of AI, we can generate not only executable scripts but also entire E2E frameworks.
Another use case of AI in testing is creating QA documents, like bug reports, QA reports, test plans, etc.
Apart from that, AI can also help with prioritization of tasks in testing, as well as in DevOps.
So it’s more like: think of anything in testing, and we can now find a use case for AI in it.
Use cases may exist, but they come with some difficult questions: how good is AI for us if we use it for every task, and where do we set the boundary for how much to utilize it in testing?
“Use cases may exist, but they come with some difficult questions: how good is AI for us if we use it for every task, and where do we set the boundary for how much to utilize it in testing?”
Yeah, that’s true. There is no real limitation.
“Another use case of AI in testing is creating QA documents, like bug reports, QA reports, test plans, etc.”
A pretty good example, and probably one of the areas where AI truly shines.
I’m curious how AI will make writing a bug report simpler. Presumably I’d still be required to provide the AI with my prerequisites, repro steps, expected behaviours and actual behaviours… so wouldn’t it just be simpler to write the report myself using a good formatting template?
When raising a bug in Jira, the summary field has a limit of 250 characters, and sometimes the summary we write exceeds it. In such situations, AI can help us trim the summary so that it fits within the limit.
That’s one use case where I have used AI to make bug reporting simpler.
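For comparison, here is a minimal non-AI fallback for the same problem: cut the summary at a word boundary so it fits the limit. An LLM can rephrase instead of just truncating, which usually reads better, but a deterministic cut like this (my own sketch, not a Jira feature) is a useful safety net.

```python
JIRA_SUMMARY_LIMIT = 250  # Jira's default summary length limit

def trim_summary(summary: str, limit: int = JIRA_SUMMARY_LIMIT) -> str:
    """Trim to the limit at a word boundary, appending '...' when cut."""
    if len(summary) <= limit:
        return summary
    cut = summary[: limit - 3].rsplit(" ", 1)[0]  # leave room, no mid-word cut
    return cut + "..."

long_summary = "Checkout fails " + "with a 500 error " * 30
short = trim_summary(long_summary)
assert len(short) <= JIRA_SUMMARY_LIMIT and short.endswith("...")
```

The AI variant earns its keep where this one falls short: it can compress the meaning instead of dropping the tail of the sentence.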
That’s true, but AI has proven to be good at summarizing things by removing duplicate, unstructured content. Humans sometimes tend to believe that more content and more examples are helpful; actually, it’s sometimes better to focus on the pure path.