How do you “Investigate Failed Automated Tests”? - Automation in Testing Curriculum

Hi all,

We’re continuing our work to create an automation-focused curriculum shaped by feedback from the testing community. We’ve already run a series of activities that helped us identify a list of key tasks, which we used to create this Job Profile.

We’re now in the process of going through each task and analysing it to identify the core steps we take to achieve it. For this post we’re considering the task:

Investigating Failed Automated Tests

We’ve run a series of community activities, including social questions and our curriculum review sessions, to identify the steps we need to take to successfully achieve this task, and we’ve listed them below:

  • Regularly check automated test runs through whatever reporting is in place
  • Discover which test has failed
  • Analyze failure information including assertion message and additional captured information
  • Review any recent code changes in the system under test
  • Determine if the issue lies with the automated check or with the system
  • Determine if the issue is caused by a difference in the test runner – tests that are very sensitive to timing may fail due to the speed of the test runner
  • If it is a problem with the system:
    • Carry out additional testing around the error
      • Verify APIs are working
      • Verify the test conditions
        • App version?
        • App install / setup?
        • OS Updates?
    • Capture details about the issue and report it to the team
  • If it is a problem with the automated check:
    • Analyse further to see what the issue is and if it’s a repeated issue
    • Determine if the automated check should be fixed, disabled, or removed
    • Fix the automated check by either correcting errors or making it less flaky
    • Rerun failing check multiple times to see if it now passes
    • Report to the team that you have fixed the issue
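The decision points in the steps above could be sketched roughly like this. This is a hypothetical helper of my own, not part of any framework; the function name, inputs, and outcome strings are all illustrative assumptions:

```python
# Rough sketch of the triage logic above. All names and outcome
# strings are illustrative assumptions, not part of any real tool.

def triage_failure(passes_on_rerun: int, reruns: int,
                   system_changed_recently: bool) -> str:
    """Classify a failed automated check from rerun results and
    recent-change information."""
    if 0 < passes_on_rerun < reruns:
        # Passes sometimes, fails sometimes: likely a flaky check
        # (e.g. timing sensitivity or a slow test runner).
        return "flaky check: fix, disable, or remove"
    if passes_on_rerun == 0 and system_changed_recently:
        # Fails consistently after a product change: test further
        # around the error and report a bug if confirmed.
        return "possible system issue: test and report"
    if passes_on_rerun == 0:
        # Fails consistently with no product change: suspect the
        # check itself or its environment (app version, setup, OS).
        return "check or environment issue: analyse further"
    return "passed every rerun: monitor for recurrence"
```

The point isn’t the code itself but making the decision explicit: intermittent failures tend to point at the check, while consistent failures after a product change point at the system.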

What we would like to know is: what do you think of these steps?
Have we missed anything?
Is there anything in this list that doesn’t make sense?

What do you do when an automated test fails?


One thing I would do is build and run the failing version locally and then, if the error was still there, debug it on my own machine. Clearly this wouldn’t be possible for every kind of system.
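Once you can run the failing check locally, rerunning it several times is a quick way to gauge whether it’s flaky. A minimal, framework-agnostic sketch — the helper and its names are my own, not from any library:

```python
def rerun_check(check, attempts=5):
    """Run a check callable repeatedly; return (passes, failures).
    A mix of passes and failures suggests a flaky check rather
    than a genuine product defect."""
    passes = failures = 0
    for _ in range(attempts):
        try:
            check()  # assume the check raises AssertionError on failure
            passes += 1
        except AssertionError:
            failures += 1
    return passes, failures
```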


If it’s automated tests written in a compiled language such as C# - have the tests even compiled? It might be that the interface to the code under test has changed in a way that stops the tests from compiling.

Has there been a change to the build pipeline that builds and runs the tests?

Has there been a change to the environment, e.g. a database connection string for a database needed by the automated tests has changed, and this change hasn’t made its way to the tests?
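One way to catch this class of problem early is a pre-flight check that verifies the environment before any tests run. A sketch, assuming the tests read their configuration from environment variables — `TEST_DB_CONNECTION` is an invented name for illustration:

```python
import os

# Hypothetical pre-flight check; "TEST_DB_CONNECTION" is an assumed
# variable name. Failing fast with a clear message beats a confusing
# mid-run failure when a config change never reached the tests.

def missing_test_config(required=("TEST_DB_CONNECTION",)):
    """Return the names of required environment variables that are
    missing or empty, so a run can abort with a clear report."""
    return [name for name in required if not os.environ.get(name)]
```

A failing connection string would still pass this check, of course, but it at least separates “the environment was never set up” from “the check found a defect”.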
