We’re continuing our work to create an automation-focused curriculum shaped by feedback from the testing community. We’ve already run a series of activities that helped us identify a list of key tasks, which in turn helped us create this Job Profile.
We’re now in the process of going through each task and analysing it to identify the core steps we take to achieve it. For this post we’re considering the task:
Report Automation Results
We’ve run a series of community activities, including social questions and our curriculum review sessions, to identify the steps we need to take to successfully achieve this task. They are listed below:
- Collect and store useful information as the automation is running (screenshots, HAR files, log files, etc.)
- Collate results from automation and send them to the team via tooling (email, dashboard, Slack, test case tools)
- Check regularly to confirm that green actually means passed and isn’t hiding issues
- Analyse trends across reports (such as test execution time) to discover issues across iterations
If there are failures:
- Confirm the failures are real
- Write up details about each failure for the team to review
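To make the “collate results and send them to the team” step more concrete, here’s a minimal sketch in Python. The result-dictionary shape and the function names are our own assumptions for illustration, not the format of any particular test tool; the formatted string could be posted to Slack, emailed, or pushed to a dashboard.

```python
from collections import Counter

def collate_results(results):
    """Summarise raw test results into counts ready for reporting.

    `results` is assumed to be a list of dicts like
    {"name": ..., "status": ...} — a hypothetical shape, not a
    real framework's output format.
    """
    counts = Counter(r["status"] for r in results)
    failures = [r["name"] for r in results if r["status"] == "failed"]
    return {"total": len(results), "counts": dict(counts), "failures": failures}

def format_summary(summary):
    """Render a short plain-text message suitable for Slack or email."""
    line = f"{summary['total']} tests: " + ", ".join(
        f"{status} {n}" for status, n in sorted(summary["counts"].items())
    )
    if summary["failures"]:
        line += "\nFailures: " + ", ".join(summary["failures"])
    return line

results = [
    {"name": "test_login", "status": "passed"},
    {"name": "test_checkout", "status": "failed"},
    {"name": "test_search", "status": "passed"},
]
print(format_summary(collate_results(results)))
```

Keeping collation separate from formatting means the same summary can feed several of the channels listed above without duplicating the counting logic.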
What we’d like to know is: what do you think of these steps?
Have we missed anything?
Is there anything in this list that doesn’t make sense?
What do you do when an automated test fails?