Where do you see the most need for AI Support?

Hi everyone,
I’m curious: in your view, where should AI support us most in testing? I’ve gathered a few thoughts and would love to hear your experiences:


:brain: Automatic Test Case Generation
AI can generate test cases based on requirements, code changes, or user stories. While this promises time savings and broader coverage, the reliability of such test cases is often inconsistent. In my experience it works quite well for common business logic (like online shops), but for complex logic, human input is still hard to replace.
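To make it concrete, here's a minimal sketch of what I mean by generating test cases from a user story - assuming the `openai` Python package and an OpenAI-compatible endpoint; the model name, prompt, and `draft_test_cases` helper are placeholders, not any specific tool's method:

```python
# Sketch: asking an LLM to draft pytest cases from a user story.
# Everything here (model name, prompt wording) is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_test_cases(user_story: str) -> str:
    """Return draft pytest code for the given user story (needs human review)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "You are a test engineer. Write concise pytest test cases."},
            {"role": "user",
             "content": f"User story:\n{user_story}\n\n"
                        "Write pytest tests covering the happy path and obvious edge cases."},
        ],
    )
    return response.choices[0].message.content

story = "As a shopper, I can apply one discount code per order; invalid codes show an error."
print(draft_test_cases(story))  # treat the output as a draft, not a finished suite
```

For simple shop-style logic like the story above, the drafts tend to be usable; for anything with complex domain rules, a human still has to decide what is actually worth testing.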


:bar_chart: Synthetic Test Data Generation
Generating realistic and GDPR-compliant data is time-consuming. Have you already created such datasets? I tried, but maintaining consistency and traceability was tricky. Also, the output is only as good as the data used to train the AI - as in all other use cases.
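As an illustration of the "generate rather than anonymise" approach, here's a small sketch using the Faker library; the locale, fields, and IDs are made up for the example, and seeding is what helped me a bit with traceability, since the same seed reproduces the same dataset:

```python
# Sketch: reproducible synthetic customer records with Faker.
# No production data involved; the fields and locale are illustrative.
from faker import Faker

Faker.seed(1234)       # fixed seed -> the same dataset on every run
fake = Faker("de_DE")  # locale is just an example

def synthetic_customers(n: int) -> list[dict]:
    return [
        {
            "customer_id": f"CUST-{i:05d}",
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address().replace("\n", ", "),
            "birthdate": fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
            "iban": fake.iban(),
        }
        for i in range(n)
    ]

for row in synthetic_customers(3):
    print(row)
```

Consistency across related tables (orders pointing at valid customers, and so on) is the part that still needs explicit modelling on top of this.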


:speaking_head: Test Automation via Natural Language
Some tools let users write tests in plain English, which the AI translates into executable scripts. IMHO, looking at the output, it's somewhat similar to screen recording. This lowers the barrier for non-developers, but can lead to vague or brittle tests. It might evolve quickly, though, since vibe coding is a thing.
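To show why the output can feel like screen recording and why it gets brittle, here's a deliberately tiny sketch of a "plain English step to browser action" mapper using Playwright; the step patterns and the scenario are assumptions for the example, and real tools do far more than this:

```python
# Sketch: a toy plain-English-to-browser-action mapper.
# Step patterns and selectors are assumptions; real NL test tools are far richer.
import re
from playwright.sync_api import sync_playwright

STEPS = [
    (re.compile(r'open "(.*)"'), lambda page, url: page.goto(url)),
    (re.compile(r'click "(.*)"'), lambda page, text: page.get_by_text(text).click()),
    (re.compile(r'type "(.*)" into "(.*)"'), lambda page, value, label: page.get_by_label(label).fill(value)),
]

def run_scenario(lines: list[str]) -> None:
    with sync_playwright() as pw:
        page = pw.chromium.launch().new_page()
        for line in lines:
            for pattern, action in STEPS:
                match = pattern.fullmatch(line.strip())
                if match:
                    action(page, *match.groups())
                    break
            else:
                raise ValueError(f"Don't know how to execute step: {line!r}")

run_scenario([
    'open "https://example.com"',
    'click "More information..."',  # brittle: breaks as soon as the link text changes
])
```

The brittleness is visible right away: the test is anchored to visible text, so a copy change breaks it even though the feature still works.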


:bullseye: Test Prioritization with Risk Prediction
AI can analyze code changes and past bugs to suggest which test cases are most critical. This improves efficiency. However, there’s a risk of overlooking rare but important issues if the model isn’t tuned well. My colleagues and I are currently trying to find a way to fine-tune these models, because this is where we see the most value in our daily testing at the moment.
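Here's a rough sketch of the kind of model I mean - assuming you can pull per-change features (churn, files touched, past bugs in the area) and a "later caused a defect" label out of your history; the features and numbers below are invented, not our actual setup:

```python
# Sketch: a toy risk model for test prioritization.
# Features, labels, and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: lines_changed, files_touched, past_bugs_in_area, days_since_last_change
X_train = np.array([
    [500, 12, 7,   2],
    [ 20,  1, 0,  90],
    [150,  4, 3,  10],
    [  5,  1, 0, 365],
])
y_train = np.array([1, 0, 1, 0])  # 1 = the change later caused a defect

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_changes = np.array([
    [300, 8, 5,   1],  # big change in a bug-prone area
    [ 10, 1, 0, 200],  # tiny change in a quiet area
])
risk = model.predict_proba(new_changes)[:, 1]
print(risk)  # run the suites covering the riskiest changes first
```

The caveat above applies directly to a setup like this: if you only ever run what the model ranks highest, you starve out the tests that would catch the rare issues, so a baseline of low-risk tests stays in every run.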


:backhand_index_pointing_right: What’s your take? Where does AI help you already and where not at all? What’s still missing for broader adoption?

Looking forward to your insights!


AI tools can also help us debug issues.
With the help of AI we can now generate not only executable scripts but also entire E2E frameworks.

Another use case of AI in testing is creating QA documents, like bug reports, QA reports, test plans, etc.

Apart from that, AI can also help with prioritization of tasks in testing, and it can help in DevOps.

So it’s more like: think of anything in testing, and you can now find a use case for AI in it.

Use cases may exist, but they come with some difficult questions: how good is AI for us if we use it for every task, and where do we set the boundary on how much we rely on it in testing?


“Use cases may exist, but they come with some difficult questions: how good is AI for us if we use it for every task, and where do we set the boundary on how much we rely on it in testing?”

Yeah, that’s true. There is no real limitation.

“Another use case of AI in testing is creating QA documents, like bug reports, QA reports, test plans, etc.”

A pretty good example, and probably one of the things AI does best.


I’m curious about how AI will make writing a bug report simpler. Presumably I’d still be required to provide the AI with my prerequisites, repro steps, expected behaviours, and actual behaviours… so wouldn’t it just be simpler to write the report myself using a good formatting template?


When raising a bug in Jira, the summary field has a limit of 250 characters, and sometimes the summary we write exceeds it. In that situation, AI can help us trim the summary so that it fits within the limit.
That’s one use case of AI making the bug report simpler which I have used.
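Roughly, what this looks like - a minimal sketch assuming the `openai` package; the model name is a placeholder, and the 250-character limit is the summary limit mentioned above:

```python
# Sketch: asking an LLM to shorten a bug summary to fit a character limit,
# then re-checking the limit ourselves. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()
LIMIT = 250

def shorten_summary(summary: str) -> str:
    if len(summary) <= LIMIT:
        return summary
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Rewrite this bug summary in at most {LIMIT} characters, "
                       f"keeping the component, the action, and the observed failure:\n{summary}",
        }],
    )
    shortened = response.choices[0].message.content.strip()
    # The model can still overshoot or drop details - always review before saving.
    return shortened[:LIMIT]
```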


It can help with bugs for:

  1. Prioritization
  2. Categorization
  3. Grouping
  4. Finding duplicates (see the sketch below)
  5. Adding steps where they are missing
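For point 4, a small sketch of the idea - this uses plain TF-IDF text similarity rather than an LLM, and the reports and threshold are made up, but it shows how "find likely duplicates" can work:

```python
# Sketch: flagging likely duplicate bug reports via text similarity.
# Reports and threshold are illustrative; tune against your own backlog.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Checkout button unresponsive on mobile Safari",
    "Cannot complete purchase - checkout button does nothing on iPhone",
    "Profile picture upload fails with 500 error",
]

vectors = TfidfVectorizer().fit_transform(reports)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.25  # tune against your own data
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if similarity[i, j] > THRESHOLD:
            print(f"Possible duplicate: #{i} and #{j} (score {similarity[i, j]:.2f})")
```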

I assume in this scenario that you would have to be very careful that the AI tool hasn’t trimmed any important information from the bug report.


That’s true, but AI has proven to be good at summarizing things by removing duplicate, unstructured content. Humans sometimes tend to believe that more content and more examples are helpful; actually, it’s sometimes better to focus on the essential path.

Hello,

But if I’m new to testing and I’m the only tester in the company, how can I know whether the test plan and estimates that the AI created are actually correct?

I’ve heard a lot about treating AI as a trainee working under your supervision: you have to stand behind its outputs.
So I’d advise you to carefully vet the output and genuinely ask whether it reflects your honest thinking.

Because you were wondering “how can I know if it’s correct or not”: to me it follows the same process as when you review your own work for correctness. Why would it have to be different?

Hi Maria

I’m going down a different route here as I see AI able to assist throughout the SDLC - from requirements review, test planning through to test execution.

  • Assisting us with reviewing requirements for ambiguities and anything we may have missed in our human review.
  • Helping us craft a Test Plan that is readable and covers all the info that needs to be included; we may struggle to work out the best layout/format, so AI can help structure docs to make them easier for stakeholders to read.
  • Reviewing our tests against requirements to see if there is anything we have misread/misunderstood/missed - AI acts like a second pair of eyes.
  • As mentioned previously, helping with test data creation and test automation.
  • Collating stats for test coverage, test execution, and defects - numbers, priorities, and heatmap analysis (see the small sketch after this list).
  • Helping with the Exit report - similar to the Test Plan, ensuring that if one is needed, it is readable, complete, and meets the needs of stakeholders.
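For the stats bullet, a tiny sketch of what I mean by collating defect numbers into something heatmap-shaped, using pandas; the components, priorities, and counts are invented for the example:

```python
# Sketch: defect counts per component and priority - the numbers behind a heatmap.
# Data is invented for illustration.
import pandas as pd

defects = pd.DataFrame({
    "component": ["checkout", "checkout", "search", "profile", "search"],
    "priority":  ["high", "medium", "high", "low", "medium"],
})

heatmap = pd.crosstab(defects["component"], defects["priority"])
print(heatmap)
```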

I love that you have asked this question, as the most important point here is that AI is there to support us (I used the word “augment” in my talk/blog), not to replace us or do all the work for us.

Steve

If AI can help get testers away from the whole idea of test cases, that would be good.

Think of the activities needed for testing; where one leans towards mechanical strengths, look to AI to see if it can cover it.

Broader adoption is being blocked by the premise that a lot of testers are actively doing activities that favour mechanical strengths in this age of testing. It’s very similar to 2004, when automation tools made the same argument: that there actually exist testers writing test cases and rotely executing them without curiosity. This leads to AI being promoted to solve problems that a lot of testers do not have.

If there were more of a focus on testing as a discovery, learning, investigation, and experimentation activity, and on looking for where AI can assist with that, it would be more likely to get broader adoption.

Adopting it for mechanical-strength activities is not the challenge; that’s almost a given. But it’s the much bigger, broader picture of testing that a lot of testers will rightly remain skeptical of. Testing itself at this point remains primarily a human-strength activity, and mechs attempting to simulate that may not offer much advantage. Perhaps when it shifts from a mechanical process to a biomech process it may offer more options for uptake.