I’ve put “Somewhat agree”, because I’ve been using it on a specific project for about 8 months now: everything from helping me with testing architecture to monitoring and alerting in production. Analysis of acceptance criteria (AC), creating POM files, converting scripts, performance tests, anything in the whole SDLC.
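(For context, by “POM files” I mean Page Object Model classes for UI test automation. Here’s a minimal sketch of the kind of boilerplate an AI agent can draft and I then review; I’m using TypeScript with Playwright purely as an example stack, and the page, labels and method names are hypothetical, not from my actual project.)

```typescript
// Illustrative only: a minimal Page Object Model (POM) class.
// Stack (Playwright/TypeScript) and all names here are assumptions for the example.
import { Page, expect } from '@playwright/test';

export class LoginPage {
  constructor(private readonly page: Page) {}

  // Navigate to the (hypothetical) login route
  async goto(): Promise<void> {
    await this.page.goto('/login');
  }

  // Fill in credentials and submit the form
  async login(username: string, password: string): Promise<void> {
    await this.page.getByLabel('Username').fill(username);
    await this.page.getByLabel('Password').fill(password);
    await this.page.getByRole('button', { name: 'Sign in' }).click();
  }

  // Assert that a visible error message is shown after a failed login
  async expectError(message: string): Promise<void> {
    await expect(this.page.getByRole('alert')).toContainText(message);
  }
}
```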
Was it perfect? No… As I mentioned before, I believe you become a “QA-AI Reviewer”, like a dev team that does 80% code review and 20% writing its own stuff. That’s exactly how it is.
I see people voting “disagree”, but have you actually used it for several months, not just once? It helps me greatly with the boring stuff. It does require a mature environment, because:
If you put shit in your AI-Agent, you can only get shit as an outcome
Compare it to writing a shit analysis: you’ll only get a shit product. That’s not the same damn thing as a “product problem”; it’s an “analysis problem”.
Hence it’s not always an “AI problem” but an “information or prompting” problem.
People need to learn how to prompt and how to feed in valuable information. I honestly cannot see how one would disagree with the statement (based on my experience!!!). Even the smallest thing is “help”: making a list of things, for example. Yes, you have to review it, but otherwise you would have to write it yourself and come up with everything on your own. Sometimes there are things in there that I wouldn’t have thought of, so it helped.
Don’t get me wrong: sometimes the output is indeed not what I expected. I have only shared my good experiences here… but yes, there are bad experiences too (and I had to learn how to prompt and train my AI-Agents). Overall, looking back, I would say it has helped me more than it has frustrated me.
Q: AI in Testing will benefit all testing, QA and quality engineering professionals?
So: Yes
And that’s why I chose “Somewhat agree” <3
But this is an implementation problem and not an AI problem.
By that reasoning, if I implement test automation badly, then “all test automation will not benefit testing”, and that’s not what the question is asking.
With this I agree; the rest I do not. Testers “fighting bullshit” comes down to either a badly trained AI-Agent or poor prompt engineering. Yes, there will be times where the answers are shit, but in most cases it’s just fine.
AI is indeed limited to what it learns and knows; that’s why you have to keep training it and feeding it new information. Even then it will sometimes go wrong. It’s a never-ending process, but as I said, it’s 80% reviewing and 20% writing yourself.