In the current AI-driven era, many organizations are focusing on optimizing resources by automating as much of the QA process as possible. I’d like to get feedback from the community on what key factors we should consider when reducing human resource dependency and increasing automation efficiency.
In our case, we manage the same product across multiple deployments (around six), each with some additional features. Continuous work such as CRs, CRBs, and upgrades is part of our sustained release cycle, so we need automation that can handle these dynamic changes while maintaining quality and minimizing manual effort.
Use your humans for human-strength things: they excel at testing as a learning activity, at investigation, discovery, experimentation, and empathy, particularly when something in your product is not yet completely well understood.
You can shrink that unknown over time by actually doing the learning, but that takes those pesky humans.
Maybe change your goal from reducing human dependency to reducing the need for the activities humans are really good at; in effect, that means moving your risks into the known-very-well area.
Unfortunately, AI is not that great at this yet, but once you get there, AI plays to machine strengths fairly well, and those strengths usually cover the known-very-well stuff.
When AI can deal with those knowns more efficiently, it can enable an increase in the human-strength areas rather than a reduction; AI's somewhat unpredictable nature only strengthens that argument.
What you may find is that your automation headcount can be reduced, as AI should be a significant accelerator in this area: the same people doing more, fewer people doing the same, tasks handed over to agents, or even non-automators empowered to do reasonable automation (risky, in my view).
Either way, I'd keep humans at the helm. Change that goal, though, from reducing human dependency to whatever the actual goal is; achieving that may, in turn, reduce the need for human-strength skills.
A really good question, as it falls squarely within the scope of present-day QA.
Determining a minimum team size really depends on the maturity of your automation ecosystem: how well are test changes parameterized? AI can increase speed and reduce manual work in automation, but human QAs will still be needed for exploratory, usability, and risk-based testing.
Automation Coverage vs Test Relevance - Automate the stable, repetitive test cases first; for dynamic ones, such as CRs and feature variations, make sure your framework supports modular or self-healing automation (see the first sketch after this list).
Environment vs Deployment Complexity - It is a mistake to assume that multiple deployments are near-identical. Keep at least one tester per deployment stream who owns release validation, even with an automation suite in place (the second sketch after this list shows one way to share a suite across streams).
Change Management - AI can assist with impact analysis, but a human still has to interpret the risks behind Change Requests or CRBs.
Skill Mix - Favour a smaller team of highly skilled people over a large manual team: one that maintains the automation, validates AI output, and performs edge-case exploration.
Monitoring and Feedback Loop - Using metrics from CI/CD pipelines, defect trends, and AI insights, periodically re-evaluate the team size to check whether it is still adequate or has become insufficient. There has never really been a fixed "minimum number." A lean, strongly domain-aware team can handle a large surface area quite well, particularly if your automation framework and AI models are tuned to evolving product features.
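To make the self-healing point concrete, here is a minimal sketch of the fallback-locator idea. It assumes Selenium, and the element names and selectors are purely illustrative; dedicated self-healing tools go further, but the principle is the same: try known-good alternatives before failing.

```python
# Minimal self-healing locator sketch (Selenium assumed; selectors are illustrative).
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Ordered fallbacks per logical element: stable IDs first, brittle paths last.
LOCATORS = {
    "submit_button": [
        (By.ID, "submit"),
        (By.CSS_SELECTOR, "button[data-test='submit']"),
        (By.XPATH, "//form//button[normalize-space()='Submit']"),
    ],
}

def find(driver, name):
    """Return the first locator that still resolves; log when we had to 'heal'."""
    for i, (by, value) in enumerate(LOCATORS[name]):
        try:
            element = driver.find_element(by, value)
            if i > 0:
                print(f"{name}: primary locator failed, healed via fallback #{i}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"All locators exhausted for '{name}'")
```

The log line matters: every "heal" is a signal that a human should go back and repair the primary locator.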
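And for the multiple-deployment point, a hypothetical pytest sketch of one shared suite: core tests run against every stream, while feature tests skip streams that lack the feature. The deployment names and feature flags here are invented for illustration; in practice the map would come from per-deployment config.

```python
# Hypothetical sketch: one suite shared across deployment streams (pytest assumed).
import pytest

# Invented deployment -> feature-flag map, standing in for real per-stream config.
DEPLOYMENTS = {
    "core": set(),
    "client_a": {"bulk_upload"},
    "client_b": {"bulk_upload", "audit_trail"},
}

@pytest.fixture(params=sorted(DEPLOYMENTS))
def deployment(request):
    """Each test below runs once per deployment stream."""
    return request.param

def test_login_core_flow(deployment):
    # Core behaviour: must pass on every deployment.
    assert deployment in DEPLOYMENTS

def test_bulk_upload(deployment):
    # Feature-specific behaviour: only runs where the feature is shipped.
    if "bulk_upload" not in DEPLOYMENTS[deployment]:
        pytest.skip(f"{deployment} does not ship bulk_upload")
```

This keeps the core suite single-sourced while letting each deployment stream carry only its own feature tests.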
Thank you for your feedback. I believe this is an area where many people are still struggling to identify the right approach. It will take some time to reach a proper conclusion, as IT companies are still conducting trial runs.
In the past, in agile teams, we considered 1 tester per 3-5 developers a good ratio.
First, it depends on the methodology and processes in place.
A proper methodology and a good process cannot, on their own, replace QA or testing.
We usually say that AI can enhance the capabilities of an existing tester by about 1.5x.
In that sense, I would say you can grow your dev team and still keep 1 QA, as that one person can do more by harnessing AI. As a rough worked example: at the 1:4 baseline above, a tester with a 1.5x AI multiplier covers roughly 4 × 1.5 = 6 developers instead of 4.
This assumes the AI is not used as a mere Q&A tool, but actually performs, on its own, parts of the process the tester used to do manually, with the tester acting as the reviewer and correcting the output over iterations.
I would need more details about the team structure, the way of working, and the product in order to give a proper, reality-based answer.
What’s the logic in cutting QA if your developers are also harnessing AI and presumably pushing through more product? You might even expect devs to race ahead, since their AI tools are slightly more mature than QA AI tools. Asking as a tester who likes to test, rather than being a simple typist!
Thank you for your opinion. We have six product deployments; each has the same core domain but includes some additional features. We regularly receive CRs, and at times, we also need to work on CRBs. The systems handle a high number of transactions.
Currently, our team consists of 2 QA engineers, 1 trainee, and around 10–11 developers.
Those are assumptions.
There is no certainty that the developer’s AI tools are more mature than QA AI tools.
It all depends on how you use it and what process is in place.
I hear complaints about the tools in use from both sides (as a QA Manager, from both developers and QAs).
The value of being a tester in this era lies in knowing how to build the proper strategy and ask the right questions of the AI tool.