Hi all,
We’re continuing our work to create an automation curriculum shaped by feedback from the testing community. We’ve already run a series of activities that helped us identify the key tasks behind this Job Profile.
We’re now going through each task and analysing it to identify the core steps we take to achieve it. For this post we’re considering the task:
Investigating Failed Automated Tests
We’ve run a series of community activities, including social questions and our curriculum review sessions, to identify the steps we need to take to successfully achieve this task. They are listed below:
- Regularly check automated test runs through whatever reporting is in place
- Discover which test has failed
- Analyse the failure output, including the assertion message and any additional captured information
- Review any recent code changes in the system under test
- Determine if the issue lies with the automated check or with the system
- Determine if the issue is caused by a difference in the test runner – tests that are very sensitive to timing may fail due to the speed of the test runner (see the explicit-wait sketch after this list)
- If it is a problem with the system:
  - Carry out additional testing around the error
  - Verify APIs are working (see the API check sketch after this list)
  - Verify the test conditions:
    - App version?
    - App install / setup?
    - OS updates?
  - Capture details about the issue and report it to the team
- If it is a problem with the automated check:
  - Analyse further to see what the issue is and whether it is a recurring one
  - Determine if the automated check should be fixed, disabled or removed
  - Fix the automated check by either correcting errors or making it less flaky
  - Rerun the failing check multiple times to see if it now passes (see the rerun sketch after this list)
  - Report to the team that you have fixed the issue
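To make a couple of these steps more concrete, here are some hedged sketches. A frequent cause of timing-sensitive failures is a fixed sleep that assumes a fast test runner. This is a minimal sketch in Python with Selenium that waits explicitly for the condition the assertion depends on instead; the URL and element ID are placeholders, not part of any real system:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.test/login")  # placeholder URL

# Instead of a fixed time.sleep(5), wait explicitly for the element
# the assertion depends on, up to a 10-second ceiling. The check then
# tolerates a slow runner without padding every run on a fast one.
banner = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "welcome-banner"))
)
assert "Welcome" in banner.text

driver.quit()
```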
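Next, a minimal sketch of rerunning a failing check several times to gauge whether it is flaky or consistently broken. `check_login` is a hypothetical stand-in for whatever check failed, simulated here so the sketch runs on its own:

```python
import random

def check_login() -> None:
    """Hypothetical stand-in for the failing automated check."""
    # Simulate a timing-sensitive check that fails roughly 30% of the time.
    if random.random() < 0.3:
        raise AssertionError("Login banner not visible")

def probe_flakiness(check, runs: int = 10) -> None:
    """Rerun a check several times and report how often it passes."""
    passes = 0
    for attempt in range(1, runs + 1):
        try:
            check()
            passes += 1
        except AssertionError as error:
            print(f"Run {attempt}: FAILED - {error}")
    print(f"{passes}/{runs} runs passed")

if __name__ == "__main__":
    probe_flakiness(check_login)
```

A check that passes on some reruns but not others points towards flakiness in the check itself; one that fails every time points back at the system or a genuine code change.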
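Finally, a quick way to verify an API is responding before digging deeper, assuming the `requests` library is available; the health endpoint is a placeholder for whatever the system under test exposes:

```python
import requests

# Placeholder endpoint - swap in whatever health or status endpoint
# the system under test actually exposes.
HEALTH_URL = "https://example.test/api/health"

def api_is_healthy(url: str = HEALTH_URL, timeout: float = 5.0) -> bool:
    """Return True if the API answers with a 2xx status within the timeout."""
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException as error:
        print(f"API unreachable: {error}")
        return False
    print(f"API responded with {response.status_code}")
    return response.ok
```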
What we’d like to know is: what do you think of these steps?
Have we missed anything?
Is there anything in this list that doesn’t make sense?
What do you do when an automated test fails?