Is the ROI of automated UI testing enough?

I am building a website using vanilla web components. Support for component testing of this tech is absent from tools like Cypress and Playwright. Adding such support feels like a project of its own, which would ultimately be a huge distraction from what I actually want to do.
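For illustration, here is a rough sketch of driving a vanilla custom element with an ordinary Playwright page test rather than a component-testing runner; the element name, bundle path and assertion are all hypothetical.

```ts
import { test, expect } from '@playwright/test';

test('counter-button increments on click', async ({ page }) => {
  // Mount the element in a bare page instead of a component-test harness.
  await page.setContent('<counter-button></counter-button>');
  // Load the bundled element definition (hypothetical build output).
  await page.addScriptTag({ path: './dist/counter-button.js' });

  const button = page.locator('counter-button button'); // CSS locators pierce open shadow DOM
  await button.click();
  await expect(button).toHaveText('Count: 1');
});
```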

All of this got me thinking about the value of automated UI testing. I have heard people advocate against it, in favour of manual testing only; Storybook, for example, supports the tech I am using.

What are your thoughts on the value of automated testing of the front end of web sites and web apps?

1 Like

Personally I like combining UI and API testing to test integrations and workflows: using APIs and/or direct database queries to set things up for what I'm actually interested in testing via the UI, but also using the UI to check/assert on the software state after performing API actions, and vice versa.

One real-world example I keep coming back to is a system where one website allowed you to set up some operations, then you would call an API from another device to retrieve and execute those operations. The expectation was that if you cancelled the batch of operations from the other device before starting them, they would return to a ready-to-retrieve state on the first system, but that kind of automated workflow testing revealed a bug where the data didn’t round-trip correctly.
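As a rough sketch of the general pattern using Playwright's built-in request fixture (the endpoint, payload and page route are hypothetical, and a configured baseURL is assumed): arrange through the API, then assert through the UI.

```ts
import { test, expect } from '@playwright/test';

test('an order created via the API shows up in the UI', async ({ page, request }) => {
  // Arrange through the API instead of clicking through forms.
  const created = await request.post('/api/orders', {
    data: { sku: 'ABC-123', quantity: 2 },
  });
  expect(created.ok()).toBeTruthy();
  const { id } = await created.json();

  // Assert on the resulting state through the UI.
  await page.goto(`/orders/${id}`);
  await expect(page.getByText('ABC-123')).toBeVisible();
});
```

The same shape works in reverse: drive the UI and then assert on the resulting state through the API.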

2 Likes

There was another thread here on a similar topic regarding the ROI of UI automation.

There did not seem to be a single response that clearly communicated a strong ROI for UI-layer automation; even the vendors did not chip in, beyond fallacious arguments about mythical cost or effort savings.

The thing is, though, there is an ROI; it is just hard to measure.

I was talking with someone who has around 16 variations of a very similar app, where the content and UI vary with each one. That is a lot of UI-layer regression risk, and once the coverage was implemented it was catching important things in a timely fashion that would likely have been missed otherwise.

For me, one of the key ROI signals is whether it quickly catches important things that would otherwise have been missed; often you do not know that until six months after the fact.

The rapid feedback loops at the UI layer and the coverage of some end-to-end flows also add value, but it remains hard to put a number on that value.

It remains important, though, to recognise that automation offers very different value from hands-on, technical, risk-based testing (aka "manual"). Automation favours the known risks: the pass/fails, the complex, the big data, the speed and so on, all of which have some value in covering.

Don't compare it to the hands-on side, though, as that tends to focus on the unknowns: the discovery, investigation and experimentation around risk. The goals and value are so different that anyone using that comparison to justify UI automation ROI was likely doing testing wrong in the first place.

Bottom line: it does have value, but it remains hard to justify the ROI unless you are doing it after the fact or have a good grasp of the value of things like the rapid feedback loops and extended coverage it can bring.

1 Like

It also depends on what one wants to automate, e.g. an API vs a web frontend. And AI will supposedly bring down the cost of developing and maintaining automation? Personally, automation does not really rock my boat, but then no senior-level programming does; that's why I became a tester in the first place. More a philosopher's brain than a math brain…

All these tools focused on frontend test automation…

But why?

Is it because frontend devs are trying to be too clever and making UI code complex? Is it the fault of the ever-changing JS framework and SPA landscape? Why not have all the business rules, logic and complexity in the backend?

It's a common issue that a lot of UI coverage is not focused on testing the UI but on testing the many other things that tend to be better covered elsewhere in the stack.

It's important to ask whether it is a UI risk; that is usually what a UI test should cover.

A good way to tell is whether it is actually a UI fix when the test finds something wrong; if it is finding things that require a lot of backend, business-rule or logic fixes, perhaps those are better covered elsewhere.

End-to-end flows are also something this layer can sometimes cover better than other layers, particularly with third-party interactions. This can get a bit grey, but I'd tend to keep it light; even then, it can be designed to give coverage beyond just the UI.

How does your view render? Is everything visible that should be visible, are the views responsive, do they navigate correctly, and so on: actual UI issues.
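For illustration, a minimal Playwright sketch of checks that stay at that level; the route, link name and viewport size are hypothetical.

```ts
import { test, expect } from '@playwright/test';

test('dashboard renders and navigates at a mobile width', async ({ page }) => {
  await page.setViewportSize({ width: 390, height: 844 }); // responsiveness
  await page.goto('/dashboard');
  await expect(page.getByRole('navigation')).toBeVisible(); // visible what should be visible
  await page.getByRole('link', { name: 'Reports' }).click();
  await expect(page).toHaveURL(/\/reports/);                // views navigate correctly
});
```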

Yesterday I discovered two issues on different apps.

A login issue: the call and response were correct, and the backend successfully recognised the login, but the UI was stuck on the login view because it was not handling the response correctly.

A double-tap-on-a-button issue that corrupted authentication on the front end. This sort of thing is usually both a UI and a backend issue, but a UI test could pick up on the front end allowing double taps when it should be ignoring the second.
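A sketch of how such a check might look in Playwright, with a hypothetical /login route, field labels and endpoint; the test intercepts the login call and counts how many times the front end fires it.

```ts
import { test, expect } from '@playwright/test';

test('a double tap on the login button fires only one login request', async ({ page }) => {
  let loginCalls = 0;
  await page.route('**/api/login', async (route) => {
    loginCalls += 1;
    await route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ token: 'dummy' }),
    });
  });

  await page.goto('/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('secret');
  await page.getByRole('button', { name: 'Log in' }).dblclick(); // two rapid clicks

  await page.waitForTimeout(500); // crude settle, enough for a sketch
  expect(loginCalls).toBe(1);     // the front end should ignore the second tap
});
```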

I have, though, seen heavy UI tests that cover everything, and all the waste, high maintenance and cost that go with that, purely because automation was allocated solely to low-code testers and they only had the skills to do it at the UI layer, as that is what they know.

1 Like

We have automation for API as well as for UI running on our CI server.
While the API automation covers business logic, the UI automation is meant to cover the UI itself and its integration with the server.
We do not cover business logic in our UI automation.
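As a minimal sketch of that split, with hypothetical endpoints, routes and test ids: the business rule is asserted at the API layer, and the UI test only checks that the server's answer is rendered.

```ts
import { test, expect } from '@playwright/test';

// The business rule is pinned down in the API suite…
test('a 10% discount is applied to orders over 100', async ({ request }) => {
  const res = await request.post('/api/quotes', { data: { total: 120 } });
  expect((await res.json()).discount).toBe(12);
});

// …while the UI suite only checks rendering and integration with the server.
test('the quote page shows the discount returned by the server', async ({ page }) => {
  await page.goto('/quotes/latest');
  await expect(page.getByTestId('discount')).toBeVisible();
});
```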

I have done that (testing business logic through the UI) myself in the past and still see it happening often.

The opinions you quote have some problems for me.
Automation is only a tool, and it is good at repetition. It is good for regression testing, and sadly only a few people use UI automation to speed up their exploratory testing. (And some people think exploratory testing isn't a big deal, while to me it is the core.)
I once had to test something in the payment process of a web shop. I could use each order only once and then had to put things into the basket again, go to the checkout, and so on. For this I used an automated, parameterized test case which executed all the steps up to the beginning of the payment.
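A rough sketch of that kind of parameterized setup run in Playwright; the routes, labels and order data are hypothetical. It drives the shop up to the payment step and then pauses so testing can continue by hand.

```ts
import { test } from '@playwright/test';

// Hypothetical order parameters; in the real case these varied per run.
const orders = [
  { sku: 'BOOK-1', quantity: 1 },
  { sku: 'MUG-7', quantity: 3 },
];

for (const order of orders) {
  test(`drive the shop to the payment step for ${order.sku}`, async ({ page }) => {
    await page.goto('/shop');
    for (let i = 0; i < order.quantity; i++) {
      await page.getByTestId(`add-${order.sku}`).click();
    }
    await page.getByRole('link', { name: 'Basket' }).click();
    await page.getByRole('button', { name: 'Checkout' }).click();
    // Stop at the beginning of payment; the interesting testing continues by hand.
    await page.pause();
  });
}
```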

About regression: it depends. Sometimes it is worth automating things, sometimes it is less effort to execute them by hand.
UI automation can become maintenance hell when done excessively; it needs to be well maintained (and, for example, not used as the only automation you have).

I made a meme picture to show the difference in how humans and computers perceive GUIs: Perception of GUIs - a human vs computer meme - EnterEsc

Exploratory testing is the main thing I'm looking at: clearing out a local db to various degrees and then reloading it in the right order. It takes a while to get a new case working, but the ROI is reduced tedium.
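A minimal sketch of what such a reset helper could look like, assuming a local SQLite file and hypothetical table and fixture names; the key point is clearing child tables first and reloading fixtures in dependency order.

```ts
import Database from 'better-sqlite3';
import { readFileSync } from 'node:fs';

// Hypothetical local database, table names and fixture files.
// Tables are listed child-first so deletes respect foreign keys;
// the matching fixtures are reloaded parent-first.
const db = new Database('local-dev.db');
const tables = [
  { name: 'order_items', fixture: 'fixtures/order_items.sql' },
  { name: 'orders', fixture: 'fixtures/orders.sql' },
  { name: 'customers', fixture: 'fixtures/customers.sql' },
];

// depth controls how far to clear: 1 clears just order_items, 3 clears everything.
export function resetDatabase(depth: number = tables.length): void {
  const cleared = tables.slice(0, depth);
  for (const t of cleared) db.exec(`DELETE FROM ${t.name}`);
  for (const t of [...cleared].reverse()) db.exec(readFileSync(t.fixture, 'utf8'));
}
```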

1 Like

The ROI depends on the project, what parts of the project are automated, and how many person-hours have been saved with that automation, as well as whether the tool was free or paid: some companies use paid tools like Katalon or testRigor, and that also impacts the ROI.

Companies usually prefer tools like Selenium or Playwright for automation because they are free and support multiple languages, but these tools require automation testers to write scripts, as compared to AI-based tools which are either low-code or have strong record-and-playback features that generate scripts automatically.

So it is difficult to judge the ROI from a single-tool or single-project perspective. Everything has pros and cons: automated testing has some edge over manual testing, but it also has downsides, and those need to be taken into consideration when calculating ROI.

1 Like

I'd be very wary of using hours saved as part of the ROI if it is actual test-execution hours saved.

It sometimes follows the path of starting by doing something wrong or highly inefficient, then doing it more efficiently, with the ROI based on that saving.

It is a false ROI on that basis. Automation should be able to justify itself in its own right; starting from the assumption that there is currently no testing at all and asking what value automation will add gives a much clearer picture.

The consideration of when to use automation is different, though: if something is mundane and repeatable, then likely go straight to automation.

The time-saved aspect does apply, though, when automation is used for the other activities around testing, such as setting up an environment or creating data; but I guess the same argument could be applied there too: go straight to automation and don't start with a potentially inefficient approach.

1 Like

For me, only for small apps that are in maintenance. Large apps that still change often and have a lot of spaghetti code are a no-go :smiley:

I'm using Selenium to automate routine clearing and replacing of the db while developing. This seems okay until I start having to debug the tests, because (with Selenium) certain clicks that are perfectly fine when done manually cause bugs when automated.
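One common cause of that "fine by hand, breaks when automated" gap is the driver clicking before the page has settled. A minimal selenium-webdriver sketch that waits for presence and visibility before each click; the URL and selectors are placeholders.

```ts
import { Builder, By, until, WebDriver } from 'selenium-webdriver';

// Wait for an element to exist and be visible before clicking,
// which removes many timing differences between manual and automated clicks.
async function clickWhenReady(driver: WebDriver, css: string, timeoutMs = 10_000): Promise<void> {
  const element = await driver.wait(until.elementLocated(By.css(css)), timeoutMs);
  await driver.wait(until.elementIsVisible(element), timeoutMs);
  await element.click();
}

// Hypothetical admin page with buttons that clear and reload the local db.
async function resetDbViaUi(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:3000/admin');
    await clickWhenReady(driver, '#clear-db');
    await clickWhenReady(driver, '#reload-fixtures');
  } finally {
    await driver.quit();
  }
}

resetDbViaUi().catch(console.error);
```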

1 Like