AI-powered no-code / low-code automation tools are on the rise, and many of them rely heavily on UI-based element identification (visual locators, AI snapshots, DOM heuristics, etc.) and claim to be self-healing.
I started my automation journey with Selenium (Java) and currently work with Playwright (JavaScript), where I primarily use IDs, classes, and stable selectors. From my experience, UI can change frequently, but well-designed IDs and attributes rarely change, making code-based automation more predictable and debuggable.
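To illustrate the "stable attributes first" idea, here is a small hypothetical helper (not a Playwright API, purely illustrative) that builds a selector by preferring an id or data-testid over CSS classes, which tend to churn with every redesign:

```javascript
// Hypothetical helper: pick the most stable selector for an element
// description. Preference order: id > data-testid > classes.
// Purely illustrative of why "well-designed IDs rarely change".
function stableSelector({ id, testId, classes = [] }) {
  if (id) return `#${id}`;                        // IDs rarely change
  if (testId) return `[data-testid="${testId}"]`; // added for testing, stable
  if (classes.length) return '.' + classes.join('.'); // styling, churns often
  throw new Error('no usable attribute on element description');
}
```

Used with something like `stableSelector({ id: 'login' })` you get `#login`; only when neither an id nor a test attribute exists do you fall back to the fragile class-based selector.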
I want to adopt modern AI tools because QA is clearly evolving in that direction, but I also want to follow best engineering practices and build reliable, maintainable automation.
So my question is: Which approach is better in the long run — traditional code-based automation or AI-driven no-code UI automation tools? And how should a QA engineer decide when to use which?
A caveat on my views here: I have a strong belief that UI automation should be kept very light, so I would not regard myself as a full-on automation engineer on that front; I have never found the need to be. Those with deeper automation specialisations may be better placed to advise on this.
For now I'd generally lean towards code-based automation, using coding AI tools as an accelerator in the same way developers do. You need access to the product code, though: with a couple of prompts you can add IDs to the source code, and with a couple more get a fast-track setup in your preferred model, say a Page Object Model (POM). You still need basic coding skills, but AI has accelerated this approach. Basing tests straight off the code leans towards execution as a mechanical activity: fast to create, and healing is based on the actual code, but it still needs oversight. It won't know what is not there; "works as coded" is a risk if you did this fairly blindly, and you need a way to factor in requirement intent and a way to effectively check how good your tests actually are. Mutation testing tools tend to work at this level for that goodness check.
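A minimal sketch of the POM idea mentioned above (the page name, selectors, and flow are illustrative assumptions, not from the post). The page object takes any "page-like" driver, so the same class works with Playwright's real `page` or, as in the demo below, a simple stub that just records calls:

```javascript
// Minimal Page Object Model sketch. Selectors live in one place,
// so when the UI changes only this class needs updating.
class LoginPage {
  constructor(page) {
    this.page = page;
    // Prefer stable attributes (IDs / data-testid) over CSS classes.
    this.userField = '#username';
    this.passField = '#password';
    this.submitBtn = '[data-testid="login-submit"]';
  }

  async login(user, pass) {
    await this.page.fill(this.userField, user);
    await this.page.fill(this.passField, pass);
    await this.page.click(this.submitBtn);
  }
}

// Stub driver standing in for a real Playwright page: it records
// every interaction so the flow can be checked without a browser.
function makeStubPage() {
  const calls = [];
  return {
    calls,
    async fill(selector, value) { calls.push(['fill', selector, value]); },
    async click(selector) { calls.push(['click', selector]); },
  };
}

async function demo() {
  const page = makeStubPage();
  await new LoginPage(page).login('alice', 'secret');
  return page.calls;
}
```

With a real Playwright `page` passed in, the same `LoginPage` drives the browser; the stub is only there to show the pattern is driver-agnostic.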
UI tools with MCPs driving the browser lend themselves to a layer of behavioural focus: not so much like a professional test designer, but more like a user experiencing the site. The coverage will be different as a result. You can feed the AI oracles and heuristics to leverage, but in my view it still gives fast-tracked, basic, maybe shallow coverage at the UI layer. I found the actual browsing for test creation a bit slow, but that could have been my setup. How to feed it requirement intent is a bit more complex. Healing should not be used blindly; understand and accept proposed changes, though some may argue that defeats the purpose. Goodness of tests may be a bit harder to check with tools like mutation coverage, and I've not seen AI agents designed with this in mind for the observed-behaviour approach. It can also support a "throw away after a few weeks and recreate from scratch monthly" approach.
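For readers unfamiliar with the term: MCP is the Model Context Protocol, and Playwright publishes an MCP server that lets an AI agent drive a real browser. A typical client configuration looks roughly like this (the exact file and location depend on your MCP client; treat this as a sketch):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```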
A combination of the mechanical, code-based coverage and the observed-behaviour coverage may be another option.
A key question is whether you have access to the product code; this gives you more options. A behavioural-only approach may be the better option only if you do not have that access.
Also consider whether this is a full-time role. For me, I do not want to spend more than a few hours a week on automation, so keeping it light and leveraging AI tools makes sense. Those doing automation full time will have different goals, and I suspect that changes the argument.
Alongside every advancement there can be a degree of self-preservation and tool-selling bias; be aware of that both in yourself and in the advice of others on this front.
I'm still leaning towards product-code access and code-based automation for now, unless there is a good reason not to.
I have experience with choosing tools, automation-related or not.
I would go with AI no-code tools on one condition: you must be able to adapt the code they write after it's generated.
I have experience with those AI code-generation tools. Imagine an upgrade to the tool or the underlying AI breaks your code: you lose control over your automation code base for who knows how long.
This is a very predictable situation, as AI tools are evolving fast and updates are frequent.
Therefore, code that is out of my control is a NO for me.
Playwright has its MCP, so I would choose a tool that is strong enough, well known, and also embeds AI into its core. Specific advice depends on the SUT and your coding skill, but the principles are the same for me. (QA Manager with 6 years of experience.)
In the long run, code-based automation is probably going to be more robust because you can see what is actually happening in the code. That lets you read the code and learn from it; even if the actual tool changes, you can apply your understanding of the code to new tools. You can also make use of migration tools if the new tool offers one.
You also run into issues with no-code tools: if the vendor closes up shop or you need to move tools, you can't really export your tests (unless the tool exposes the code underneath somehow).
The only time I would use a no-code approach (AI or not) is if I knew it was a very short-term project and the tests wouldn't be around for long (maybe you're testing a proof of concept?).
Note that in this case, I don’t think “AI” has much to do with my choices. You could have an automation suite that leverages AI or a no-code suite that doesn’t use AI and my answers wouldn’t change substantially.