There's so much talk about self-healing test scripts that would automatically correct themselves based on updates to the source code. I wonder whether the test script updates would happen after approval from all the stakeholders, or just because the developer says so. How do the existing tools on the market handle this?
From what I'm reading, it's a bit of a sales term. It seems to refer to the idea of updating locators in existing scripts: where a page object might encapsulate how to perform a coded action on parameterised DOM objects, a self-healing script can update those parameters via machine learning algorithms when the underlying web application changes. So I change the name and location of a button, and the machine learning algorithm updates the check script to match.
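To make that concrete: stripped of the marketing, the mechanism is roughly a page object whose locators get swapped out when the primary one stops matching. Below is a deliberately dumb, non-ML sketch of that idea in Python with Selenium - the class, method, and locator values are hypothetical, and a commercial tool would generate and score candidate elements from the rendered DOM rather than walk a hard-coded fallback list.

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By


class CheckoutPage:
    """Hypothetical page object: names and locators are illustrative only."""

    # The locator the script was written against, plus ranked fallbacks a
    # "healing" layer might substitute when the app changes underneath it.
    SUBMIT_LOCATORS = [
        (By.ID, "submit-order"),
        (By.CSS_SELECTOR, "[data-testid='submit-order']"),
        (By.XPATH, "//button[normalize-space()='Place order']"),
    ]

    def __init__(self, driver):
        self.driver = driver

    def submit_order(self):
        for by, value in self.SUBMIT_LOCATORS:
            try:
                element = self.driver.find_element(by, value)
                break  # a real tool would log this substitution as a "heal"
            except NoSuchElementException:
                continue
        else:
            raise NoSuchElementException("no candidate locator matched the submit button")
        element.click()
```

As far as I can tell, the machine learning in the commercial versions is mostly about producing and ranking those fallback candidates automatically, rather than having a human write them down.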
It's advertised as "Artificial Intelligence" training itself to recognise whether emergent check failures are to be "expected" (problems in the check suite, not the application code). I don't know whether that means the AI will decide a failure isn't real, quietly fix it, and stop presenting it to you. It sounds like an attempt to reduce "flakiness", but I've always felt that flaky tests are a useful indicator of things like the health of your test suite, or even naming problems in app code. It seems to assume, like many tools of yesteryear, that coders are perfect beings (sometimes framed as "the tool works if you use it right") and that check suite code always looks like the tutorial page on the framework's website.
This sounds incredibly limited, hugely over-engineered, bizarrely over-priced and one of the least exciting (though less sinister) things ever to come out of machine learning, but I'm open to being excited if it can actually do more than that. I suspect the time saved will soon be lost again to tool-created laziness and abstraction leaks. I think those investing their time in building these tools are trying to fund future projects - to put themselves in a strong market position for machine learning tools and solutions - or they're cashing in on the machine learning thing before the cool wears off.
It's also a USP for vendor lock-in. The frameworks that seem to advertise it are also sub-framework UIs - for example, a web UI written over Selenium (because we need another flawless, always-updated abstraction layer between us and our code). My guess is that they're selling what we grew tired of years ago: a way to make checks business-readable and accessible to people with no coding skills, so that we can replace reality with poorly-named abstraction and keep wages low. Sleep at night whilst not paying professionals. That works until we meet some gnarly, clever thing a developer has done because they're super excited about a new technology that generates random element IDs, or until we realise we've not put any automation hooks into our code, so our friendly UI can't even begin to understand our app changes.
It MUST be possible to engineer a system that compares check suite code and app code to generate "locator not found" errors without touching a machine learning algorithm.
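As a crude existence proof (assumptions: server-rendered templates, IDs that appear literally in both the markup and the check code, and made-up directory names), a regex-level diff like the sketch below already catches that class of breakage with zero machine learning.

```python
import re
from pathlib import Path

# Hypothetical project layout - adjust to your own repo.
PAGE_OBJECT_DIR = Path("checks/pages")
TEMPLATE_DIR = Path("app/templates")

# IDs referenced by check code, via By.ID or "#id" CSS selectors.
LOCATOR_PATTERN = re.compile(r"""By\.ID,\s*['"]([\w-]+)['"]|['"]#([\w-]+)['"]""")
# IDs declared in the application's markup.
ID_PATTERN = re.compile(r"""\bid=['"]([\w-]+)['"]""")

referenced = set()
for source in PAGE_OBJECT_DIR.rglob("*.py"):
    for match in LOCATOR_PATTERN.finditer(source.read_text()):
        referenced.add(match.group(1) or match.group(2))

declared = set()
for template in TEMPLATE_DIR.rglob("*.html"):
    declared.update(ID_PATTERN.findall(template.read_text()))

# Anything the checks reference that the markup no longer declares is a
# "locator not found" waiting to happen.
for missing in sorted(referenced - declared):
    print(f"locator not found in app markup: #{missing}")
```

It falls over as soon as locators are built dynamically or the DOM is rendered client-side, but that's rather the point: the boring majority of locator breakage doesn't need an algorithm training itself.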
If the system can update crap regex that's vaguely pointing at relative DOM paths to get around some clanky AJAX/iframe nonsense - the kind that breaks when I do something outrageous like add another button in the same div - then I'll pay attention again.
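For what it's worth, here is that "add another button in the same div" failure in miniature (toy markup, Python standard library only): a positional relative path silently starts matching the wrong element the moment a sibling appears, while a locator anchored on a deliberate hook - the data-testid attribute here is an assumed convention, not something every app has - carries on working.

```python
import xml.etree.ElementTree as ET

# Hypothetical markup before and after a developer adds a sibling button.
before = ET.fromstring('<div><button data-testid="save">Save</button></div>')
after = ET.fromstring(
    '<div>'
    '<button data-testid="export">Export</button>'
    '<button data-testid="save">Save</button>'
    '</div>'
)

# A positional, relative path: "the first button in the div".
brittle = './button[1]'
print(before.find(brittle).text)  # Save
print(after.find(brittle).text)   # Export - the check now drives the wrong control

# Anchoring on a stable hook the team put in the markup survives the change.
robust = "./button[@data-testid='save']"
print(after.find(robust).text)    # Save
```

Which loops back to the automation-hooks point above: the cheapest "self-healing" is putting stable hooks into the markup in the first place.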