Rethinking QA Strategy with Next-Gen AI & Agentic AI

As we move towards AI-augmented Quality Assurance, we’re aligning our QA cycle with Generative AI and Agentic AI to improve speed, coverage, and decision-making.

```mermaid
flowchart TD
    A[User Story] --> B[Verify User Story]
    B --> C[Mind Map Creation / Test Case Creation]
    C --> D[Identify Automated & Integration Test Cases]
    D --> E[Add Additional Test Cases]
    E --> F[QA Approval for Integration Test Cases]
    F --> G[Receive Final Feature for Testing]
    G --> H[Automation / Integration / Manual Testing]
    H --> I[Acceptance for Final Release]
```
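The cycle above could be sketched as a simple linear pipeline. This is a minimal illustration, not anything from the original post: the stage names, and the split between "AI-assistable" stages and human approval gates, are assumptions added for discussion.

```python
from enum import Enum, auto
from typing import Optional

class QAStage(Enum):
    """Stages of the QA cycle sketched above (names are illustrative)."""
    USER_STORY = auto()
    VERIFY_STORY = auto()
    DESIGN_TESTS = auto()          # mind map / test case creation
    IDENTIFY_AUTOMATION = auto()   # automated & integration candidates
    ADD_EXTRA_CASES = auto()
    QA_APPROVAL = auto()           # human sign-off on integration cases
    FEATURE_READY = auto()
    EXECUTE_TESTS = auto()         # automation / integration / manual
    RELEASE_ACCEPTANCE = auto()

# A hypothetical split: stages an AI agent might draft or execute,
# versus stages kept as explicit human gates.
AI_ASSISTABLE = {QAStage.DESIGN_TESTS, QAStage.IDENTIFY_AUTOMATION,
                 QAStage.ADD_EXTRA_CASES, QAStage.EXECUTE_TESTS}
HUMAN_GATES = {QAStage.VERIFY_STORY, QAStage.QA_APPROVAL,
               QAStage.RELEASE_ACCEPTANCE}

def next_stage(stage: QAStage) -> Optional[QAStage]:
    """Advance linearly through the cycle; return None once released."""
    order = list(QAStage)
    i = order.index(stage)
    return order[i + 1] if i + 1 < len(order) else None
```

One design point this makes concrete: however much of the pipeline an agent operates, the human gates remain distinct states, which is one way of framing the "assist vs. approve" question below.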

How ready are we as an industry to let AI agents co-own QA cycles? Should AI just assist testers—or autonomously execute and approve test cases?


An important factor is the model of testing you are leaning towards: testing as a verification-type activity, or testing as a learning-type activity.

In the former, I suspect a higher likelihood of AI agents taking on some more tasks. In the latter, it's more a tool to assist, in my view. Either way, I recommend a guided, interactive, reviewed and revised approach.

Take a potentially counter-intuitive quirk of software testing: the idea of executing test cases designed and created by someone else fills a lot of testers with absolute horror. Some people are raving about AI's ability to create test cases. Fine, but then we want it executing them too, or at least helping SDETs to automate them; otherwise it's back to that horror situation.

It is in no way a positive for AI to generate test cases if they are then passed to a tester to execute; that misses a lot of what is good about testing.

On things like documentation I can get some leverage from AI, but that's a very small part of my role. Automation seems more likely to me, but in a similar way to how coders use it, copilot-style. There are some interesting things happening with MCP and agents for automation, but whether generating automation directly from the product source code is viewed as more optimal remains to be seen. If you do not currently have access to product code, it's likely this will change for many.

For the most part, though, my testing leans more towards that learning-activity model, so for now it's likely more of a muse than an agent. For example, browser dev tools now have AI assistance built in, and dev tools are a primary tool for an exploratory web app tester; having a muse there that potentially spots risks, root causes and solutions could amplify that work. Similarly, I suspect we will see code-level access increase even for exploratory testing: testing directly from IDEs at code level, guided and interactive again.

Potentially useful tools, but they still need experiments and practice to establish real value. I suspect a lot of my day-to-day work will in principle stay the same, just with some extra tools to help out.


Thank you for the comment. Based on your description, I understand that Gen AI or Agentic AI tools will help with testing, but should the testing strategy remain the same, or do we need to adjust the process?

It's very interesting the way you've worded your question. On the one hand, you "ARE" aligning your QA cycle with AI. On the other hand, you're asking the community, "are we ready to do that?" So there is a hesitancy in your question around responsibility and accountability. AI will never be held responsible or accountable for the outcomes; people will. So if you're happy to be responsible for the outcomes, go for it and see where it takes you.

Have a read of this thread started by @aimantirmizi around that very human vs AI dilemma.


Well noted. IT organizations are still struggling to identify the exact use cases for AI.


My two cents here: AI is a tool, so treat it as a tool. I would never involve it in any process that requires thinking.


I think that the process is different in every situation and is often adjusted within those situations fairly consistently.

If you are asking whether tools should dictate the process, the idealistic answer is no. Tools should serve our purposes rather than make demands of us. Of course, every tool comes with disadvantages and leaky abstractions, what RST would call scripting for our testing. Tools must be worth that sacrifice, and combined with enough variation to make our results suitably reliable.

Testing is about humans learning. The more we offload that onto tooling, the further we put ourselves from that knowledge, and the more trust and belief we put into tools, especially if we are blind to how testing works, what it is, and what it is for. And, of course, trust and belief are how bugs got there in the first place. If we are going to be responsible, professional skeptics, then scrutinising our use of any tool, AI or not, is a professional responsibility. All else is wishful thinking and marketing.


Thanks @kayu for this thread. As a part-time consultant who mostly does training and quality practices assessments, I don't get much chance to use GenAI/LLM tools hands-on. @andrewkelly2555, what you say here makes sense to me, especially about using the tools to generate test cases for humans to execute - it takes us back to the 80s. As for generating mind maps: the purpose of mind mapping is collaborative brainstorming, getting diverse perspectives together and helping everyone think outside the box. The artifact of a mind map is not the point.

I’m seeing lots of experience reports like @mwinteringham’s that show benefits of LLM tools. I appreciate everyone sharing their learning journeys!

@kayu,

In QA, there is immense potential in innovations such as Agentic AI and Generative AI: such tools can unlock new efficiencies in speed, coverage, and the quality of decision-making. At the same time, how far the cycle should be automated, and whether AI should own it entirely, remains an open question.

There is still some reluctance to trust fully agent-driven QA cycles, even as the wider tech space moves forward. Test case creation, execution, and the identification of integration test cases look ripe for streamlining, but in my view a human touch is still warranted for judgment calls, context-based decisions, and edge cases.

AI can accelerate testing and provide insights to testers, but I am wary of AI independently completing and approving all test cases any time soon. In my opinion, a healthier short-term approach is for AI to take on a growing operational share while the tester performs evaluative checks to confirm that product quality aligns with user needs.


Well noted. Thank you for the feedback.

We have to adjust the process based on the situation, but in the new AI era, the exact adjustments we need to make are still open questions and have not yet been finalized.

@lisacrispin Thank you for the feedback. I may have asked this question at an early stage of the AI era. People are still in the experimental stage regarding QA strategy, but I believe we will arrive at a draft version of a QA strategy within the next 6 to 9 months.

This is realistic feedback. I also believe AI can accelerate testing and provide valuable insights to testers.


Nor will it ever be finalised. The use of any tool, AI included, will, if used correctly, always be context-dependent.

AI is only a new era in terms of AI. The principles of software testing are basically epistemology and the philosophy of science, which are relatively stable in their application to the craft. Testing is still testing no matter the product or tools.

1 Like