I think you highlight one of the core challenges here: what problems are you actually trying to solve with AI?
“What does your team actually want — better tools to write tests, or reliable test code that just works? If someone handed you deterministic, maintainable scenario scripts covering your critical flows, would that solve the problem? Or is the process of writing tests itself where the value lies?”
For me, none of that would solve my testing challenges. Useful, yes, but it's focused on known risks, so perhaps 10% of what a hands-on tester focuses on; their strength lies in the unknowns. Most apps are still, at this point, human centric, and as such have never really been deterministic, even when guardrails are in place.
I added this description of testing to another thread recently, which may help explain why my goals could be very different:
“Testing: that highly technical, tool-loving activity that emphasises learning, discovery, investigation and experimentation into product risk. The one that embraces the current unknowns, and finds comfort in ambiguity, nuance, empathy and real-world context. That takes a holistic, whole-lifecycle view of testing and applies it from day one.”
@probe_runner What you may want from AI is likely very different from what I want. The tools I am looking for are data and information based: give me more visibility on what's happening as I test. Maybe I miss some odd API responses as I test; is there something in the logs that gives me more insight? What experiments could I run next?
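To make that concrete, here is a minimal sketch of the kind of visibility tool I mean (Python, all names and the log shape are hypothetical): a small monitor that scans API calls captured during a session and flags the odd ones, so a hands-on tester notices anomalies instead of missing them.

```python
from collections import Counter

# Hypothetical sketch: flag "odd" API responses from a session capture
# so a hands-on tester can spot anomalies while exploring.
# Each entry is assumed to be (method, path, status, latency_ms).

def flag_oddities(entries, slow_ms=2000):
    """Return entries worth a tester's attention: error statuses,
    unusually slow calls, and statuses rarely seen this session."""
    status_counts = Counter(status for _, _, status, _ in entries)
    flagged = []
    for method, path, status, latency in entries:
        reasons = []
        if status >= 500:
            reasons.append("server error")
        elif status >= 400:
            reasons.append("client error")
        if latency >= slow_ms:
            reasons.append(f"slow ({latency} ms)")
        # A status seen only once in the session may point at an unknown.
        if status_counts[status] == 1:
            reasons.append("rare status for this session")
        if reasons:
            flagged.append((method, path, status, ", ".join(reasons)))
    return flagged

session = [
    ("GET", "/api/items", 200, 120),
    ("GET", "/api/items", 200, 95),
    ("POST", "/api/items", 201, 140),
    ("GET", "/api/items/42", 502, 2600),
]
for method, path, status, why in flag_oddities(session):
    print(method, path, status, "->", why)
```

Nothing here writes or runs a test; it just surfaces signals ("what happened that I didn't expect?") to feed the next experiment, which is the opposite emphasis from generating scenario scripts.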
An automation engineer, for example, may have goals much closer to the things you are suggesting, but a hands-on tester potentially far less so. It's really important and of high value that you raise this, and it's a great question: “what does your team actually want from AI?”. It's not going to be the same for all testers, and your points are very valid; in particular, token use in agentic workflows will be interesting.