I can give a concrete example of this:
My previous employer produced specialized point-of-sale software that was sold worldwide, which meant it had to calculate taxes in a truly astonishing variety of situations, support multiple currencies, and handle currencies with different decimal requirements. In addition, there were two sales modules: the actual point-of-sale module, and an order module where people could phone in their orders for pickup or delivery.
Running a basic tax regression on one module was a week of drudgery for three people - about 120 person-hours - with a reasonably high chance of error due to all the usual human factors. Understandably, this didn't happen often, although it should have accompanied every release (3x per year).
It took about 6 months to build the automated regression tests for that module, including the month spent verifying the results of every test. It was massively data-driven, to the extent that once it was done, adding a new test was a matter of several lines of data plus a baseline update.
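To make "data-driven" concrete, a suite like that might be shaped roughly as follows. This is a hypothetical sketch, not the real system (which was Delphi and UI-driven): the field names, scenarios, and rate table here are all invented, and the `calculate_tax` function is a trivial stand-in for the software under test.

```python
import csv
import io

# Hypothetical, simplified test data: each row is one tax scenario plus
# its verified baseline result. Adding a new test is just adding a row.
TEST_DATA = """scenario,region,amount,expected_tax
basic_vat,UK,100.00,20.00
reduced_rate,UK-REDUCED,100.00,5.00
us_state,US-NY,100.00,4.00
"""

# Stand-in for the system under test; the real rate tables were far larger.
RATES = {"UK": 0.20, "UK-REDUCED": 0.05, "US-NY": 0.04}

def calculate_tax(region: str, amount: float) -> float:
    return round(amount * RATES[region], 2)

def run_regression(data: str) -> list:
    """Run every data row against the system and report mismatches."""
    failures = []
    for row in csv.DictReader(io.StringIO(data)):
        actual = calculate_tax(row["region"], float(row["amount"]))
        if actual != float(row["expected_tax"]):
            failures.append(
                f"{row['scenario']}: expected {row['expected_tax']}, got {actual}"
            )
    return failures

print(run_regression(TEST_DATA))  # [] when everything matches the baseline
```

The point of the shape is that the test harness is written once, and thereafter the test count grows by editing data, not code.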
These tests ran once a week; the full set (about 5x as many tests as were in the manual regression) took about 24 hours to run. The "light" set ran in about 8 hours and still covered more than we were able to do with manual regression.
I’m not going to do the numbers, but it should be pretty obvious that we were saving time - there ceased to be a need to schedule the week of manual regression, we knew within 24 hours if any of the core tax calculations were broken, and within a week if any of the more esoteric calculations were broken.
This was a UI-based automation suite, because the software in question had been grandfathered from the original Turbo Pascal code into Delphi, and the UI was still heavily entwined with the business logic, so unit testing wasn't feasible. The automation proved its value from the first run.
The calculation is more or less:
- Time the team spent on one manual regression run pre-automation: 120 hours.
- Time to create the automation: about 700 hours, a one-time effort.
- Automation run time, using the "light" regression: 8 hours.
- Time to analyze automation results: at most half an hour per run.
Once in place, the time saved per run is the length of the manual run minus the analysis time: 120 - 0.5 = 119.5 hours per run.
Over the course of a week (running weekdays only) that's almost 600 hours saved (5 x 119.5 = 597.5), which very nearly "pays back" the original 700-hour automation effort. Over 2 weeks, the "light" automation has effectively saved the equivalent of the time to automate the tax regression plus the 3 manual regression runs per year (700 + 3 x 120 = 1,060 hours).
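The calculation above reduces to a few lines of arithmetic. A minimal sketch, using only the figures already given in the text:

```python
import math

# All figures come from the write-up above; nothing here is new data.
manual_run_hours = 120.0        # one manual regression run (3 people x 1 week)
automation_build_hours = 700.0  # one-time effort to build the suite
analysis_hours_per_run = 0.5    # reviewing automated results, worst case

# Net saving each time an automated "light" run replaces a manual run.
saving_per_run = manual_run_hours - analysis_hours_per_run

# Number of runs before the one-time build cost is recovered.
break_even_runs = math.ceil(automation_build_hours / saving_per_run)

print(saving_per_run)   # 119.5
print(break_even_runs)  # 6 - just over one weekday week of daily runs
```

Note the hidden assumption, which the text relies on: each automated run is counted as replacing a full manual run, which is what makes the payback so fast.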
That’s the time saving calculation I’m familiar with, although I’m sure there are others. This is also an extremely clear-cut example. Most automation “time saving” questions aren’t nearly so clear.