🤖 Day 5: Identify a case study on AI in testing and share your findings

Hi my fellow testers! For today's challenge, rather than search for a case study, I thought I would instead share my own experiences so far with trying to use AI tools in my workplace. I hope that's OK.

Brief background on the case study

Back in October last year my workplace devised a learning week for us all, where the challenge was to spend a week learning something that helped us develop individually and could also indirectly benefit the software we develop in future. I chose to research and trial a couple of AI testing tools to see if they could help with the challenges I face in test automation.

How was AI used in their testing?

I chose to focus on a couple of tools that advertised self-healing functionality, as automated test suite maintenance is where I spend a lot of my time: updating or fixing tests that have broken due to UI or API changes.

What tool(s) or techniques did they leverage?

I initially looked into Katalon Studio, but it turned out that their self-healing feature is only available for web applications, and it is the tests for our desktop applications that I spend the most time maintaining. I then looked into Ranorex, as it supposedly had a self-healing feature that worked on desktop software, and I also tried out Applitools Eyes and its AI-based visual testing of websites.

What results did they achieve?

Ranorex - I created a simple test in an older version of our software and then tried to run it against the latest version, where some UI elements had changed. It immediately failed to click a button that had changed. I tried adjusting every setting I could find related to the self-healing feature and re-running the test, but nothing worked.

Katalon Studio - I created a test against the latest version of our website and then ran it against an old version. It auto-generated two self-healing suggestions, and the preview image for each healed control looked like it was identifying the correct element. After approving the changes and re-running the test, it passed through the altered locators without issue. It wasn't able to fix every UI change, so I had to fix some manually, although I could see it attempting different locators for them. I also wanted to check whether the self-healed locators were genuinely good ones that wouldn't pass when they shouldn't, so I pointed the test at a version of the website where the new locators should fail. Unfortunately they still passed, which suggests the new locators are overly generic and may match regardless of the actual UI.
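That last check is easy to reproduce outside any particular tool: run the healed locator against a page version where it should fail, and see if it still matches. Here is a minimal sketch of the idea using only Python's standard library; the page snippets and locators are invented for illustration and are not from Katalon.

```python
# Why an overly generic "healed" locator can pass when it shouldn't.
# The page snippets and locators below are made up for illustration only.
import xml.etree.ElementTree as ET

NEW_PAGE = "<body><form><button id='confirm-purchase'>Confirm</button></form></body>"
# A version where the button is gone entirely -- a good locator should NOT match here.
BROKEN_PAGE = "<body><form><a href='/checkout'>Checkout</a></form></body>"

def matches(page: str, xpath: str) -> bool:
    """Return True if the locator finds at least one element in the page."""
    return ET.fromstring(page).find(xpath) is not None

specific = ".//button[@id='confirm-purchase']"  # tied to the real control
generic = ".//form/*"                           # the kind of locator a bad heal can produce

assert matches(NEW_PAGE, specific)         # passes where it should
assert not matches(BROKEN_PAGE, specific)  # and fails where it should
assert matches(NEW_PAGE, generic)          # also passes here...
assert matches(BROKEN_PAGE, generic)       # ...but still matches with the button gone
print("generic locator matched a page with no button: false pass")
```

A locator that matches the broken page is telling you the "healed" test could keep passing long after the feature it covers has disappeared.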

Applitools Eyes - this is where I had the most success. I created some baseline images and then ran the tests over a few days. They correctly compared the screenshots: the tests failed when I targeted a different website version with changed UI elements, and passed when the elements were identical to the original baseline.
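For anyone new to visual testing, the core baseline idea can be sketched in a few lines. This is only the naive mechanism, not what Applitools actually does (their AI comparison ignores anti-aliasing, small shifts, dynamic content, and so on); the pixel grids and tolerance are made up for illustration.

```python
# Bare-bones baseline comparison: fail a visual check when too many pixels differ.
# Illustrative only -- real visual AI testing is far smarter than a pixel diff.

def diff_ratio(baseline, screenshot):
    """Fraction of pixels that differ between two equal-sized pixel grids."""
    total = len(baseline) * len(baseline[0])
    differing = sum(
        1
        for row_a, row_b in zip(baseline, screenshot)
        for px_a, px_b in zip(row_a, row_b)
        if px_a != px_b
    )
    return differing / total

def visual_check(baseline, screenshot, tolerance=0.01):
    """Pass if at most `tolerance` of the pixels changed (1% by default)."""
    return diff_ratio(baseline, screenshot) <= tolerance

baseline   = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # tiny 3x3 "image" captured earlier
identical  = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # same UI
changed_ui = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]  # a "button" changed shape

print(visual_check(baseline, identical))   # test passes: nothing changed
print(visual_check(baseline, changed_ui))  # test fails: UI differs from baseline
```

The value of a tool like Applitools is in deciding which differences matter; a raw pixel diff like this would flag every rendering quirk as a failure.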

What stood out or surprised you about this example?

I think generally I was surprised that, apart from Applitools, the tools took a heck of a lot of effort just to work out how to use their supposed self-healing features, and even more effort to determine why they weren't working. I was disappointed that I couldn't get Ranorex working on our desktop software, as that is where I invest the most manual effort. I was impressed, however, with Applitools and the ease with which I could get some tests running and comparing screenshots.

How does it relate to your own context or AI aspirations?

My aspiration for the week was to find some very cool self-healing AI tools that would ease the burden of maintaining my automated test suites. Applitools, I think, was a big success; the others, not so much, at least back in October. I continue to keep an eye on their development in the hope that they will eventually work as advertised.