Self-healing test scripts

Hi all,

There's so much talk about self-healing test scripts,
which would automatically correct themselves based on
updates to the source code. I wonder whether the test script
updates would happen after approval from all the stakeholders,
or just because the developer says so.

How do the existing tools in the market handle this?

Thanks,
Venkat.

Where is all this talk happening? It would be good to have more information.

Hi Rosie,

Absolutely! I first heard about these a year ago,
but couldn't look into them because I was tied up with
my assignments.

For starters, a quick Google search on 'self-healing test scripts'
comes up with a good number of blogs and product pages!

Wondering if anyone has deployed them, and how they
feel about the concept.

Kind Regards.

From what I'm reading it's a bit of a sales term. It seems to refer to the idea of updating locators in existing scripts. So where a page object might encapsulate how to perform a coded action on parameterised DOM objects, a self-healing script can update those parameters via machine learning algorithms when the underlying web application is changed. So I change the name and location of a button, and the machine learning algorithm updates the locator in the check script.
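Stripped of the machine learning, my mental model of it is something like this rough sketch (Selenium's Python bindings; the page, element ids and fallback list are all invented for illustration):

```python
# Rough sketch of the "self-healing locator" idea, minus the ML.
# Everything here (page name, element ids, fallbacks) is made up.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException


class LoginPage:
    # Primary locator first, then fallbacks a "healer" could try when the DOM changes.
    SUBMIT_LOCATORS = [
        (By.ID, "login-submit"),                          # what the check was written against
        (By.NAME, "submit"),                              # fallback if the id gets renamed
        (By.CSS_SELECTOR, "form button[type='submit']"),  # last-resort structural guess
    ]

    def __init__(self, driver):
        self.driver = driver

    def _find_with_healing(self, locators):
        """Try each candidate locator in turn and report when a fallback was used."""
        for strategy, value in locators:
            try:
                element = self.driver.find_element(strategy, value)
                if (strategy, value) != locators[0]:
                    print(f"Healed: primary locator failed, matched on {strategy}={value!r}")
                return element
            except NoSuchElementException:
                continue
        raise NoSuchElementException(f"No candidate locator matched: {locators}")

    def submit(self):
        self._find_with_healing(self.SUBMIT_LOCATORS).click()
```

The vendor pitch, as far as I can tell, is that the fallback candidates and the decision about which match is the "right" one come from a trained model rather than a hand-written list like the one above.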

It's advertised as "Artificial Intelligence" training itself to recognise if emergent check failures are to be "expected" (problems in the check suite, not the application code). I don't know if this then means that the AI will decide that it's not a real failure and not present it to you any more and quietly fix it. Sounds like an attempt to reduce "flakiness", but I've always felt that flaky tests are a useful indicator of things like the health of your test suite, or even naming problems in app code. It seems to assume, like many tools of yesteryear, that coders are perfect beings (sometimes framed as "the tool works if you use it right") and check suite code always looks like the tutorial page off the framework website.

This sounds incredibly limited, hugely over-engineered, bizarrely over-priced and one of the least exciting (but less sinister) things to ever come out of machine learning, but I'm open to the idea of excitement if it can actually do more than that. I think the time saved will soon find itself in tool-created laziness and abstraction leaks instead. I think that those investing their time in making these tools are trying to fund future projects - to put themselves in a strong market position for machine learning tools and solutions in the future. Or they're cashing in on the machine learning thing before the cool wears off.

It's also a USP for vendor lock-in. The frameworks that seem to advertise it are also sub-framework UIs. For example a web UI written over Selenium (because we need another flawless, always-updated abstraction layer between us and our code). My guess is that they're selling what we grew tired of years ago - a way to make checks business readable and accessible to people with no coding skills, so that we can replace reality with poorly-named abstraction and keep wages low. Sleep at night whilst not paying professionals. Until we meet some gnarly clever thing that some developer has done because they're super excited about this new technology that generates random element IDs or something, or until we realise that we've not put any automation hooks into our code, so our friendly UI can't even begin to understand our app changes.

It MUST be possible to engineer a system that compares check suite code and app code to generate "locator not found" errors without touching a machine learning algorithm.
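As a straw man, a naive ML-free version could just diff the locators the checks reference against the ids the app actually defines. The file layout, the By.ID-only assumption and the regexes are all mine:

```python
# Naive, ML-free take on "locator not found": scan the check suite for
# By.ID locators and confirm they still appear in the app's HTML templates.
# Directory names and the By.ID-only scope are assumptions for the sketch.
import re
from pathlib import Path

LOCATOR_PATTERN = re.compile(r"By\.ID,\s*['\"]([^'\"]+)['\"]")
ID_PATTERN = re.compile(r"id=['\"]([^'\"]+)['\"]")


def ids_used_by_checks(check_dir: str) -> set[str]:
    """Collect every By.ID locator referenced by the check suite."""
    found = set()
    for path in Path(check_dir).rglob("*.py"):
        found.update(LOCATOR_PATTERN.findall(path.read_text()))
    return found


def ids_defined_in_app(template_dir: str) -> set[str]:
    """Collect every id attribute defined in the app's HTML templates."""
    found = set()
    for path in Path(template_dir).rglob("*.html"):
        found.update(ID_PATTERN.findall(path.read_text()))
    return found


if __name__ == "__main__":
    missing = ids_used_by_checks("tests/pages") - ids_defined_in_app("app/templates")
    for locator in sorted(missing):
        print(f"locator not found: #{locator}")
```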

If the system can update crap regex that's vaguely pointing at relative DOM paths to get around some clanky AJAX/iframe nonsense that breaks when I do something outrageous like add another button in the same div, then I'll pay attention again.
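For what it's worth, this is the sort of breakage I mean (selectors invented for the example; the data-testid hook assumes you've actually put one in the app):

```python
# A brittle, position-based path versus a locator anchored on something stable.
# Selectors are made up; the data-testid attribute assumes the app exposes one.
from selenium.webdriver.common.by import By

# Breaks the moment another <button> appears in the same <div>:
brittle = (By.XPATH, "//div[@class='actions']/button[2]")

# Survives that change because it targets the button itself:
stable = (By.CSS_SELECTOR, "div.actions button[data-testid='save']")
```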