Test Automation - WHY?

Why do you run automated tests?

I’m interested in hearing about why people/businesses choose to invest in test automation. How do you decide what tests to automate?

Do you ever have difficulty proving that the investment in test automation was worth it? How do you demonstrate that value?


I follow Angie Jones' method. It works like gangbusters and takes all the guesswork out of the automation process.


Well, running a regression suite manually can take days; with automation in place, it can take a few hours.
Likewise, tests that require large amounts of repetitive work or data input can be sped up dramatically and made more reliable than doing them manually (repetitive tasks are prone to human error).

So those are two basic examples of advantages to automation.

But more than simply automating for the sake of automating, automation shines in the DevOps style of agile development, where releases happen quickly, maybe more than once a day, and tests have to be run quickly and repeatedly. In this case, automation is an essential part of the development process.


Thanks for sharing. It is a great method for deciding which tests to automate.

I posted this question on LinkedIn, Angie Jones also shared this talk :smile:


My pleasure. Can I ask which LinkedIn page you asked the question on? That group sounds interesting.

Thanks Julio, brilliant answer. I like how you mention that it should be ‘more than simply automating for the sake of automating’.

Saving time is probably the main benefit we’ve got from test automation, and it’s easy to provide evidence of the time saved - for example, the time required to run tests manually vs. the time required to run them with test automation.

However, what evidence would you provide to prove the value of automating repetitive tasks?

Julio - absolutely. My last employer had to be able to cope with tax systems from multiple nations as well as from all US States. That meant a positively evil amount of tax regression to make sure nothing broke.

Once the framework was in place, the tax regression took almost a month to put together, and had a run-time of 12 - 16 hours depending on what else was happening on the network at the time (we usually ran it over the weekend). The thing couldn’t be unit tested (old Pascal code grandfathered into Object Pascal, which in turn was grandfathered into Delphi), so the UI level automation was all we had.

Even when increasing network load and other problems forced the regression suite to be split so it could run overnight, the old warhorse kept chugging on, and kept on notifying us of any changes that messed with tax calculations.

If we’d had to verify those calculations manually, it would have driven the entire team insane - there were thousands of transactions that calculated every way of managing sales tax imaginable (and quite a few that shouldn’t be) including with and without foreign currency translations, currencies with different numbers of decimal places, different rounding rules… you name it.

There are other forms of “test automation” that aren’t quite as obvious. One that I engage in regularly at my current place is in the form of running two queries and exporting the output to text files. Then I run a file compare against the two. That’s it - but the queries mean I don’t have to deal with manually checking that every piece of data in the file was correctly imported because the query covers that for me.

I have a number of other database queries I use with other tools to make some of the tests I need to do easier. It’s not full test automation, but it’s certainly using automated tools to make my life easier.
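The two-query file-compare approach described above might look roughly like this. The table name, query, and column layout are invented for illustration; the real queries and database would of course be specific to the system under test.

```python
import sqlite3
import difflib

def export_query(db_path, query, out_path):
    """Run a query and write each result row as one pipe-delimited line."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(query).fetchall()
    with open(out_path, "w") as f:
        for row in rows:
            f.write("|".join(str(col) for col in row) + "\n")

def compare_exports(path_a, path_b):
    """Return unified-diff lines between two exports; empty means they match."""
    with open(path_a) as a, open(path_b) as b:
        return list(difflib.unified_diff(
            a.readlines(), b.readlines(),
            fromfile=path_a, tofile=path_b))
```

If `compare_exports` returns an empty list, every row matched; anything else pinpoints exactly which imported data differs, without manually eyeballing each record.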

Then there’s the CI pipeline I’ve set up to run my (currently minimal proof of concept) test automation suite. I arrive at work to an email telling me how my automation ran. If anything failed, I check it and fix whatever it was - most often something I can’t control although I have caught bugs with it.

Any level of automation can be helpful. How much you need and what you can do depends a lot on where you are, what you’re working with, and what the rest of the team is doing.


Louise, I use a kind of decision tree based on weighted values when working out which cases benefit most from automation. You automate for many reasons, all of which save you time.

Read the literature out there, and then tune it to your context: some testers are testing web sites, others are testing APIs, others test hardware, others test GUIs only, and a bunch are testing legacy apps that have no baked-in “testability”. Every context has vastly different approaches. If you work with devs to bake testability in, then things like feeding in a text file and reading one out get easier and build your coverage of positive test paths.

So I see many kinds of automation; some automation is not just there to do testing for “free”, but also to help you do semi-manual testing. I call this automation-assisted checking. It’s a good starting point.


Why do you run automated tests?
I don’t. There’s no such thing as an ‘automated test’. When automated thinking becomes available we could call them that; until then, testing for me is “a technical empirical investigation…” (Cem Kaner), “a learning process through exploration and experimentation…” (James Bach).
I use tools to help me with testing. I might implement some automated checks. I am expecting some kind of useful information from the scripts I build - they have to help me to find an issue.
I dislike using automation for confirming something. It scares me because of the time invested, the false confidence it gives, and the fact that while code was being written for it, no one might have been looking for bugs in the product. If there’s time and resources without affecting the issue digging, I would consider having something basic running there.

I’m interested in hearing about why people/businesses choose to invest in test automation.
People/businesses are usually stupid when it comes to understanding testing. They see it as a way to decrease their budget for testing by replacing human minds with machines.

How do you decide what tests to automate?
I can’t automate testing. I can use tools; it depends on context and capability.
I always ask myself: what’s the value of automating X? How critical is the issue that I might find with it? How much time and how many technical resources do I have? Is there another way?
Example: creating a script that automatically crawls APIs to get IDs from one interface, then uses them in another to query all offers and find missing information. A colleague of mine used something similar to create a pricing-history graph.
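That cross-API check could be sketched like this. The two fetch functions below are stand-ins for real HTTP calls, and the field names (`price`, `title`, `currency`) are invented for illustration; in practice they'd be replaced by whatever the real interfaces return.

```python
# Hypothetical required fields every offer record should carry.
REQUIRED_FIELDS = {"price", "title", "currency"}

def fetch_ids(catalogue):
    """Stand-in for the first interface: list all offer IDs."""
    return [item["id"] for item in catalogue]

def fetch_offer(offers_by_id, offer_id):
    """Stand-in for the second interface: one offer's details, or {}."""
    return offers_by_id.get(offer_id, {})

def find_missing_information(catalogue, offers_by_id):
    """Crawl every ID from one interface, query the other, and report
    offers that are absent or lack required fields."""
    problems = {}
    for offer_id in fetch_ids(catalogue):
        offer = fetch_offer(offers_by_id, offer_id)
        missing = REQUIRED_FIELDS - offer.keys()
        if missing:
            problems[offer_id] = sorted(missing)
    return problems
```

The output is a map of offer ID to its missing fields, which is exactly the kind of “useful information” a script like this can hand back to a tester.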

Do you ever have difficulty proving that the investment in test automation was worth it? How do you demonstrate that value?
I’ve seen different methods being used here:

  • bull****ing the managers, or people fooling themselves by reading the internet without considering their context;
  • presenting the actual, sincere outcome - telling the truth about the costs, the chances of it being useful, and how the information gathered could help stakeholders.

On the note of justifying the cost -
We have a service desk ticket system, and any bug goes through the process of Business Analyst to Developer to Test Analyst until the fix is released.

I based my data on the cost of one less ticket a week thanks to automation, took estimated hourly values, and did a cost analysis. There are lots of cost samples for a bug found now versus later.

See - https://azevedorafaela.com/2018/04/27/what-is-the-cost-of-a-bug/

Another option I used was the time it took to complete a previous regression run, multiplied by the cost of the test team:
3 weeks = 120 hours
3 testers = $150 an hour
= $18,000 per regression run.
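Making that arithmetic explicit (assuming a 40-hour week, and reading the $150/hour figure as the combined rate for the three testers, which is what makes the numbers line up):

```python
# Rough cost of one manual regression run, per the figures above.
hours_per_week = 40
weeks = 3
combined_hourly_rate = 150  # assumed: all three testers together, in dollars

total_hours = weeks * hours_per_week            # 120 hours
cost_per_run = total_hours * combined_hourly_rate

print(cost_per_run)  # 18000
```

Multiply that by the number of regression runs per year and you have a figure senior management can weigh directly against the cost of building the automation.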

This was a clear, proven sample for the senior management team.

Regardless of cost, though, I feel it’s about quality - if it improves quality and saves time, it should be done.

We use our automation suites to supplement regression testing and to ensure we find failures caused by the impact of changes on existing functionality early in the build cycle. We don’t automate everything and still perform a visual check prior to release (not all of our software lends itself to automation). It does pay off - bug rates in areas of our software that are subjected to our automated test cycles are far lower than in the areas covered only by manual testing. Unfortunately, we don’t have the resources to fully automate regression for our product range.

I’d pick and choose - we started one of our products with the aim of automating everything (even the smallest bug fix), but that just creates a test suite that takes days to run. Confirming key functionality works far better for us.

Most of what my team is testing is data in and data out. Very easy to have a definitive pass case and very easy to automate. So we automate the crap out of it. That investment gives us time for exploratory testing.

Other reasons: Gives developers fast and early feedback. Creates a more concrete definition of done. Allows us to explore any ambiguity in the requirements early.
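The “data in, data out” checking described above can be sketched minimally like this. `normalise_record` is a hypothetical stand-in for whatever transformation the real system performs; the point is that with known inputs and expected outputs, the pass/fail verdict is definitive.

```python
def normalise_record(record):
    """Hypothetical system under test: trim whitespace, upper-case the
    code, and round the amount to two decimal places."""
    return {"code": record["code"].strip().upper(),
            "amount": round(record["amount"], 2)}

# Each case pairs an input record with the exact output we expect.
CASES = [
    ({"code": " abc ", "amount": 1.239}, {"code": "ABC", "amount": 1.24}),
    ({"code": "xyz", "amount": 2.5},     {"code": "XYZ", "amount": 2.5}),
]

def run_checks():
    """Return a list of (input, expected, actual) for every failing case;
    an empty list means the whole data-driven suite passed."""
    return [(inp, expected, normalise_record(inp))
            for inp, expected in CASES
            if normalise_record(inp) != expected]
```

Because every case is just data, adding coverage is a matter of appending a row, which is what makes this style so cheap to automate.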


I think the myths of automated tests (or checks) have been slain in the past few years. I don’t think anyone sees them as a replacement for testing anymore, but that’s just my experience, which is not massive.

I do, however, see automated tests as an “integrated unit test”, meaning they’re just there to give me something to start with.

For example, having automated tests run before a build or release helps you ensure the quality of the code that gets to testing before you’ve even touched it. Nothing is more annoying than deploying something to test just to find out it 404s when you navigate to the page…

Another benefit is helping you with regression testing. As @julio said above, regression can take days! Having a little robotic help can only help. It also allows you to skip “checking” obvious paths and focus on the riskier, meatier stuff (which will most likely involve doing some of the paths already passed by the automation, allowing you to “double check” what has been tested).

The only problem with automation is that it requires quite a bit of maintenance, which takes time. I still haven’t gone through the exercise of measuring time saved vs. time spent automating, but I’m certain it’s worth the effort. :slight_smile:

Time saved vs automation & maintenance time depends a lot on what you’re testing. If it’s done right, the automation can give you assurance that your core feature set hasn’t been broken for a relatively small investment of time.

The more you run it, the more it will move itself towards being worth the effort. The easier it is to add new tests to it, the more it will move itself towards being worth the effort.

At my last workplace, adding new tests to the regression suites was usually a matter of updating several CSV files which held the test information, plus between 10 and 20 lines of code. Then on the next run, we’d update the baselines and all was done.
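A minimal sketch of that CSV-driven approach might look like the following. The column names and the function under test (`calculate_tax`) are invented for illustration; the real suites would point at the actual system and hold whatever fields the tests need.

```python
import csv
import io

def calculate_tax(amount, rate):
    """Hypothetical stand-in for the system under test."""
    return round(amount * rate, 2)

def load_cases(csv_text):
    """Parse rows of amount,rate,expected into regression cases."""
    return [(float(r["amount"]), float(r["rate"]), float(r["expected"]))
            for r in csv.DictReader(io.StringIO(csv_text))]

def run_regression(csv_text):
    """Run every case from the CSV and return (passed, failed) counts."""
    passed = failed = 0
    for amount, rate, expected in load_cases(csv_text):
        if calculate_tax(amount, rate) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed
```

The appeal is that growing the suite means editing a data file, not writing new test code, so each new feature area can pick up cases cheaply once the harness exists.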

Currently I’m aiming to build something similar, but it’s a long slow process, and once I have it set up, it will be years before it moves into the “costs less than it took to set up” side of the ledger. That said, each new part of the system I build for can easily acquire new tests because of the way I’m designing things.

My first major goal is to replace the manual regression I do each release with its automated equivalent, and have the automated version running every workday. After that, I’ll probably aim to fill out the impacts of the different configuration options.


That’s exactly what I’m trying to do! :slight_smile:
My issue is with the “depends on the way you design it”. What I feel is that since automation is very particular to the system you are testing, unless you have a team of experienced testers to back you, it’s a lot of guesswork.

Ok, not “guess”, but a lot of trial and error to get the perfect balance between functionality and maintainability which everyone strives for.

Maybe that’s why there are so many testers who specialise in automation; once you get that mix right you can make a living just applying it! :thinking:

Wow, thank you for pointing me to this absolutely great talk!
I’ll think about how this relates to unit testing (coming from a developer perspective) and might include a reference in an upcoming talk.


That or you’re deeply familiar with the system yourself. In my case I’m the only tester in the building, so I have to balance between trying to build enough automation to make the regression testing less of a grind, and actually doing all the testing that needs to happen.


The reasons for using test automation vary among people and businesses. Some choose to invest because they don’t want to spend on human resources for testing their products, some choose automated testing because they have very little time for testing, and some opt for it because they want to reduce human error in testing. Whatever the case, the value added by automated testing can be huge: it saves both time and money for people and businesses.

Performing a SWOT analysis for one’s businesses and goals can help one understand if they should go for automation testing or not. However, with deepening focus on new technologies and digital revolution, test automation has become the new normal for the QA community.

It seems that it is not possible to thrive without automating one’s testing needs.


I think we still need, at times, to think of automation as just a tool. Even manual testers use tools, and fully automated test systems are merely high-end versions of those tools that deliver a kind of verdict.

Just as a highly effective manual tester will look for better tools to work faster and dig deeper - doing things like pre-fabricating fully populated data on a system, for example - automation becomes a time saver. It’s not a goal; goals are metrics, and we know that metrics are often poorly chosen. I like to think of automation as a way to buy me more time to do exploratory test “runs”.


Just to reiterate, Angie Jones’ video on determining what to automate is very helpful. Much appreciated.