How do you quantify the time that automation is saving your team?

It’s often said that automation frees up testers to explore other areas of the application by taking over the repetitive checks that a machine can perform.

But, how do you quantify the time that automation is saving your team?

I’ve seen people say that automation saved them X hours per day or week. Where do you even begin with measuring this?


I’ve been putting together KPIs around product realisation, keeping things simple. Each time I create an “interesting” statistic I always ask myself, “So what does anyone do with this information? How can we use this to improve?”

Whilst you could possibly get the data from the manual test execution estimates, if the automated tests have a direct link to them, to me this stat appears to be more about justifying the benefit of automation. I can’t see how knowing such information gives an opportunity to improve anything.

For me, the more important stats are automated test growth rates, the split of the full regression pack between manual and automated, and of course the coverage of the automated tests.


I can give a concrete example of this:

My previous employer produced specialized point-of-sale software that was sold worldwide - which meant it had to calculate taxes in a truly astonishing variety of situations, support multiple-currency transactions, and handle currencies with different decimal requirements. In addition, there were two sales modules: the actual point-of-sale module, and an order module where people could phone in their orders for pickup or delivery.

To run basic tax regression on one module was a week of drudgery for three people, so about 120 hours, with a reasonably high chance of error due to all the usual human factors. Understandably, this didn’t happen often - although it should have accompanied every release (3x per year).

It took about 6 months to build the automated regression tests for that module, including the month spent verifying the results of every test. It was massively data-driven, to the extent that, once it was done, adding a new test was a matter of several lines of data plus a baseline update.
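As a rough illustration of that data-driven shape (purely hypothetical Python - the real suite drove a Delphi UI, and the field names here are invented): each row of data is one test case, checked against a stored baseline.

```python
import csv

# Hypothetical sketch only: the real suite drove a Delphi UI, not a Python
# function, and the column names are invented for illustration.
def run_tax_cases(case_file, baseline, calculate_tax):
    """Run every data row through the tax calculation and diff against the baseline."""
    failures = []
    with open(case_file, newline="") as f:
        for case in csv.DictReader(f):            # one test case = one row of data
            actual = calculate_tax(
                amount=case["amount"],
                currency=case["currency"],
                region=case["region"],
            )
            expected = baseline[case["case_id"]]  # baseline updated when new rows are added
            if actual != expected:
                failures.append((case["case_id"], expected, actual))
    return failures
```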

These tests ran once a week, and took about 24 hours to run the full set (about 5x as many tests as were in the manual regression). The “light” set of tests ran in about 8 hours and was still more than we were able to do with manual regression.

I’m not going to do the numbers, but it should be pretty obvious that we were saving time - there ceased to be a need to schedule the week of manual regression, we knew within 24 hours if any of the core tax calculations were broken, and within a week if any of the more esoteric calculations were broken.

This was a UI-based automation suite: the software in question had been grandfathered from the original TurboPascal code into Delphi, and the UI was still heavily entwined with the business logic, so unit testing wasn’t feasible. The automation proved its value from the first run.

The calculation is more or less:

Amount of time the team spends on one manual regression run pre-automation - 120 hours

Time to create the automation - about 700 hours of one-time effort.

Automation run time - using the “lite” regression - 8 hours.
Time to analyze automation results - maximum 1/2 hour per run.

Once in place, the time saved is the length of the manual run minus the analysis time: 120 - 0.5 = 119.5 hours per run.

Over the course of a week (running weekdays only) that’s almost 600 hours saved, which very nearly “pays back” the original automation effort. Over two weeks, the “lite” automation has effectively saved the equivalent of the time to automate the tax regression plus the three manual regression runs we would have done in a year.
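If it helps, here is the same arithmetic as a quick sketch (Python purely for illustration, plugging in the figures quoted above rather than anything you would measure yourself):

```python
# Illustrative arithmetic only, using the figures from the post above.
MANUAL_RUN_HOURS = 120     # 3 people x 1 week of manual tax regression
BUILD_HOURS = 700          # one-time effort to build the automation
ANALYSIS_HOURS = 0.5       # human time to review results per automated run
RUNS_PER_WEEK = 5          # "lite" run on weekdays only

saved_per_run = MANUAL_RUN_HOURS - ANALYSIS_HOURS      # 119.5 hours
saved_per_week = saved_per_run * RUNS_PER_WEEK         # 597.5 hours
weeks_to_payback = BUILD_HOURS / saved_per_week        # ~1.2 weeks

print(f"Saved per run:  {saved_per_run} h")
print(f"Saved per week: {saved_per_week} h")
print(f"Weeks to pay back the build effort: {weeks_to_payback:.1f}")
```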

That’s the time saving calculation I’m familiar with, although I’m sure there are others. This is also an extremely clear-cut example. Most automation “time saving” questions aren’t nearly so clear.


I totally agree with Kate.

Our release testing used to take 2 days, which meant we could not release every day.

With automation, our release testing now takes a couple of hours to cover the time-bound scenarios. The stress of releasing has been greatly reduced, thanks to the regression suite’s quick feedback.

Overall team confidence has increased.


While this is great, I would be careful not to limit per-release testing to just the automated checks, but also to augment them with testing by humans in the areas of change.

@testervenkat - oh absolutely. The automation checks are to ensure that critical functionality is still working as expected.

Manual testing still happens, largely around the areas that have changed, and includes things like user experience, workflows, consistency checks (if the submit button is always the furthest right, why does this one form have a submit button on the left?), and all of the many things automated checks don’t do well.

The goal is to have the right balance of testing to minimize problems escaping to customers. What that balance is will differ from organization to organization, and from application to application.
