How do you create reliable automated tests?

What do stakeholders want? Reliable automated tests (aka checks)

When do they want it? Now!

Part of our mission at Ministry of Testing is to create discussions, change, and learning opportunities around the software testing things that matter. This one highlights how software testers think about reliable automated tests (or checks).

We put the question out on LinkedIn and Twitter and gathered responses below.

We hope you find it a useful source of inspiration and look forward to more input below from the wider community too!

How do you create reliable automated tests?

By skipping the UI, if you can. - Amitsingh Kshatriya

Use structured text to drive your tests. Move repeated actions and initialising code into a function library for that test suite. Move generic test code that’s repeated into the test framework. Refactor your test framework as often as needed. Be careful when testing asynchronous code; avoid hard sleeps. Try to test one thing at a time if you have the luxury of time. Remove tests that add no value. UI tests are sometimes best done manually. - Robert Sims
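The advice above about moving repeated actions into a function library can be sketched as follows. This is a minimal illustration, not any particular framework: the `Session` class and `log_in` helper are hypothetical names standing in for whatever your suite actually drives.

```python
# A minimal sketch of pulling a repeated action into a shared helper, so
# every test reuses one login routine instead of duplicating the steps.
# `Session` and its API are illustrative only, not a real framework.

class Session:
    """Hypothetical session object the suite drives."""
    def __init__(self):
        self.user = None

    def submit_login(self, user, password):
        # Stand-in for the real login steps (fill form, click, wait...).
        self.user = user if password == "secret" else None

def log_in(session, user="tester", password="secret"):
    """Suite-level helper: the single place the login steps live."""
    session.submit_login(user, password)
    assert session.user == user, "login precondition failed"
    return session

def test_dashboard_requires_login():
    session = log_in(Session())
    assert session.user == "tester"
```

If the login flow changes, only `log_in` needs updating, not every test in the suite.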

Prioritize important test scenarios to be included in the automation suite. Reuse components, and create structured, single-purpose test scripts; for example, a test should consist of:

  1. Setup
  2. Actions
  3. Validations

Compose complex tests from simple steps.
Avoid using Thread.sleep. Most importantly, write independent and isolated tests to minimize failures. - Sonali Burghate

Make sure the tests are consistently repeatable without odd failures, i.e. not flaky (we are testing, not making milktart). Check the code coverage in terms of what the UI and integration tests cover in order to reliably ensure (at the very least at a functional level) that the requirements for features are met.

Then expand on them. - Rosco Adams

The key to reliable tests is finding out which scenarios are business critical. That will require getting business, test, and dev to sit down and objectively assess what should be tested based on business needs. These tests need to be simple and save time in comparison to doing them manually. - Jacqueline George


  1. Arrange all necessary preconditions and inputs.
  2. Act on the object or method under test.
  3. Assert that the expected results have occurred.
    - Martin Krangove
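The three steps above are the classic Arrange-Act-Assert pattern, and can be sketched in a few lines. The `apply_discount` function here is a hypothetical object under test, invented purely for illustration.

```python
# A minimal sketch of the Arrange-Act-Assert pattern. The function
# under test, apply_discount, is hypothetical and exists only to give
# the three steps something concrete to act on.

def apply_discount(price, percent):
    """Hypothetical method under test."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # Arrange: all necessary preconditions and inputs.
    price, percent = 200.0, 10
    # Act: on the object or method under test.
    result = apply_discount(price, percent)
    # Assert: that the expected results have occurred.
    assert result == 180.0
```

Keeping each section visually separate makes it obvious when a test starts acting on more than one thing at a time.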

Defining critical and common customer workflows first and automating through those workflows.
Focusing on areas not likely to change (for regression).
Unit Tests are Automated Tests!
- Robert Rankin

Minimising variables; controlling the test environment; ensuring clean builds and stage resets/teardowns. - Shannon McCullough

Mock out your data, and manually test your journeys before automating them. - Oliver Martin-Hirsch
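Mocking out data can be as simple as handing the journey code a stub instead of a real gateway. This sketch uses Python's standard `unittest.mock`; the `fetch_user` dependency and `greeting_for` step are hypothetical names for illustration.

```python
# A hedged sketch of mocking out data: the journey step depends on a
# data gateway, and the test substitutes a Mock so it never touches a
# real network or database. greeting_for and fetch_user are invented
# names for this example.
from unittest.mock import Mock

def greeting_for(user_id, fetch_user):
    """Hypothetical journey step that depends on external data."""
    user = fetch_user(user_id)
    return f"Hello, {user['name']}!"

def test_greeting_uses_mocked_data():
    # The mock supplies known data, so the check is deterministic.
    fake_gateway = Mock(return_value={"name": "Ada"})
    assert greeting_for(42, fake_gateway) == "Hello, Ada!"
    # The mock also records how it was called.
    fake_gateway.assert_called_once_with(42)
```

Passing the dependency in as a parameter (rather than importing it directly) is what makes the substitution trivial.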

Know the inputs and the functionality being tested, and be able to repeat it over and over. Only test the highest-risk and highest-expense areas for the company; otherwise you are wasting time. - Charlene Smith

Make sure we know what we want to automate and why, keep the tests up to date and not broken as part of daily routines, do code reviews, and handle broken-test investigations as a team (even if we have automation engineers on the team).

As well as not being afraid of getting rid of an old test if we have a better version, in mind and in code, that covers more of the aspects we were interested in with the old one. - Guna Petrova

Explore, Capture, Review, Update. - Ben Dowen

Check the intent, not the implementation, of the system. - George Dinwiddie

Define the test pyramid, split into API and UI tests, discuss how to improve the testability of the AUT with developers, define a locator strategy in the case of UI tests, and make sure to automate only those test cases which are critical and have a high ROI. - Sunny Sachdeva

Treating automation code exactly the same as production code, and holding it to the same metrics. You wouldn’t ship a website that was only 98% reliable, so why would you ship automation that way? - Melissa Benua

I always follow these steps:

  1. Write a test that passes
  2. Change the assertion and watch it fail
  3. Modify and watch it pass again
    - Francisco Moreno

OIA - Observation, Impact, Action - Benjamin Bischoff

I will start by educating the developers on how important tests are, so that they can build a proper DOM. - Anas Fitiani

  1. Roll your own page sync code.
  2. Write in a modular design to fight maintenance costs.
  3. Follow all the other stuff in this post…
    - Paul Grossman

Having a good synchronisation strategy. Ensuring that the elements that we want to interact with are actually visible and available to test/check before we execute the test. - Coops
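A synchronisation strategy like the one described above usually boils down to polling a condition with a timeout rather than hard-sleeping. This is a generic sketch; the `wait_until` helper is an invented name, not an API from any particular tool.

```python
# A minimal sketch of a synchronisation helper: poll a condition until
# it becomes truthy or a timeout expires, instead of using a hard
# sleep. wait_until is an illustrative name, not a real framework API.
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Return True once condition() is truthy, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

Before interacting with an element you might then write something like `wait_until(lambda: element.is_displayed())`, failing the test cleanly if it never appears. Most UI frameworks ship an equivalent (e.g. explicit waits), which is usually preferable to rolling your own.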

Create the minimum (and VERY SIMPLE) set of tests needed to keep the application tested; that way the team can easily keep the automated tests up to date and reliable. - Thiago Grespi

Automate the heck out of your API stack. Only automate the UI when you absolutely have to! - Maryanne Sweat

Primarily unit tests. Other automated tests use mocks for dependencies. - Jeff Morgan

Without UI tests - Nicolas Canseco

My short and simple answer: state management. The most important part of automated testing IMO, and the part little is written about. But also the part that is the most contextual.
If you can’t control the state of your application at the start (think things like data, feature flags, environments), then no matter how reliable the execution of the test is (UI clicks, API calls, etc.), it will always be unreliable. - Richard Bradshaw
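One way to put that state management advice into practice is to build the application state explicitly at the start of every test. In this sketch, the `App` class, its `users` and `flags` attributes, and the `known_state` builder are all hypothetical, standing in for whatever mechanism (fixtures, seed scripts, API calls) your context provides.

```python
# A hedged sketch of controlling state up front: every test constructs
# the application in a fully specified baseline state (data, feature
# flags) before acting on it. App and known_state are invented names.

class App:
    """Hypothetical application whose state tests must control."""
    def __init__(self):
        self.users = {}
        self.flags = {"new_checkout": False}

def known_state(users=None, flags=None):
    """Build the app in an explicit, fully specified starting state."""
    app = App()
    app.users.update(users or {})
    app.flags.update(flags or {})
    return app

def test_new_checkout_flag():
    # The test states its own preconditions instead of inheriting
    # whatever state the previous test left behind.
    app = known_state(users={"u1": "Ada"}, flags={"new_checkout": True})
    assert app.flags["new_checkout"] is True
    assert app.users == {"u1": "Ada"}
```

Because every precondition is named in the test itself, a reader can see exactly what state the check depends on.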

Observation. In the vast majority of cases, we don’t translate what the human did into what we ask the computer to do, then moan when the computer “isn’t behaving correctly”. Tool bugs aside, these tools will only ever do what you tell them to do. If they aren’t told to check something, or wait for something, they won’t. So truly understanding what you intend to automate, by observing your own behaviour, the systems, and the tools, will help with reliability. - Richard Bradshaw

Understand your stack: tools, application under test, test environment, network, etc.
All of these will have “quirks”; you need to understand them to predict and adapt to issues. - Duncan Sangster

Understand the best level at which automation could be run. E.g., regarding stateful stuff, unit testing saga implementation details doesn’t give any particularly valuable results compared to integrating a few things, then starting with a specific state and observing/asserting on the end state after all your actions have resolved (<pedantry>for reliable automated checks, anyway :wink:</pedantry>). - Cian McCormack

Especially when testing via the FE, have a good understanding of what should and shouldn’t be automated. I’ve seen so many teams struggling to automate next-to-impossible scenarios when there was low-hanging fruit right there.
Bonus point: How do you fail at automated testing? The most common pattern I’ve seen is manual testers throwing scripted tests over the wall to the test automation team. This community is normally pretty passionate, but even many QA managers don’t care about their craft. Of the roughly 50 clients I work with a year, mostly very large companies, I’d say about 70% follow this pattern, and I’ve yet to see it work well. - Jeff Poulin

I think the biggest thing people mistake is that a failing test isn’t necessarily unreliable. It’s often failing because someone changed a locator, in which case it SHOULD fail. - James Farrier

Make sure your tests are isolated from one another. If they set up data, tear it down afterwards. If they change the application state (did you log in as a user? did you navigate to a specific page?), change it back afterwards. - Jeri Levine
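The isolation advice above maps directly onto setup/teardown hooks. This sketch uses Python's standard `unittest`; the shared `store` dict is a stand-in for whatever shared application state (a database, a logged-in session) your tests actually touch.

```python
# A minimal sketch of test isolation: each test arranges its own state
# in setUp and changes it back in tearDown, so nothing leaks into the
# next test. The `store` dict stands in for shared application state.
import unittest

store = {}  # stand-in for shared state (database, session, page)

class IsolatedTest(unittest.TestCase):
    def setUp(self):
        # Set up the data/state this test needs.
        store["session_user"] = "tester"

    def tearDown(self):
        # Change it back afterwards, even if the test failed.
        store.pop("session_user", None)

    def test_logged_in_user_is_visible(self):
        self.assertEqual(store["session_user"], "tester")
```

pytest fixtures achieve the same thing with `yield`-style teardown; the key point is that cleanup runs whether or not the test passes.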


Stakeholders don’t always understand what they’re asking for.
Try to open a series of conversations with them so you’re preparing the ground; make them be specific about that ‘reliability’ and what they are prepared to invest to get it.
To help them you could map something basic like:

  • possible automation options, e.g. log parsing, environment setup, predefined DB queries, predefined samples of different API requests, regression checks, investigation/experimentation scripts, API schema verifiers, etc.;
  • scope for each: what exactly you have in mind;
  • benefits: how each can help you find a problem, or make you more efficient in looking for problems;
  • costs in each case.