@parwalrahul,
What I see right now is the proliferation of AI across every domain, and software testing is one of them.
Many of us already use AI in our day-to-day workflows: quick automation, brainstorming test scenarios, even generating code snippets. And that’s fine. Yet here lies the hard part: AI carries inherent risks, which often remain invisible until someone, usually the tester, steps right on them.
As testers, we are compelled not only to use the tool but also to question it; otherwise, we end up lowering the quality bar in the very name of raising it.
For me, upholding testing and quality standards in an AI-driven environment looks something like this:
I never take AI output at face value. It gets scrutinized the same way I would scrutinize a junior tester’s work: reading carefully, validating assumptions, and cross-checking against actual requirements.
I watch for sameness. AI tends to repeat patterns, so I consciously review whether the test cases, code, or insights feel “cookie-cutter”. If they do, I dig deeper.
I reverse the conversation. Instead of going straight to the AI for solutions, I sometimes ask it to challenge me with questions. That way, I avoid being spoon-fed and pick up a few different viewpoints.
I bring humans back in. A 1:1 conversation with a colleague helps me check my thinking on the genuinely subjective matters: perceived risk, blind spots, or whether the AI’s outputs really are what they should be. Nothing substitutes for professional judgment.
I monitor usage. If I sense that someone is becoming too dependent on AI, I ask them questions, not to discourage them, but to make sure standards do not slip for the sake of convenience.
The point is that AI is powerful, but it cannot serve as the auditor of its own output: we are the auditors. Testing standards stand between us and false confidence.
That is my approach, and I would be interested to hear how you balance AI assistance with the rigor our craft demands.