How do you "Maintain Automated Tests?" - Automation in Testing Curriculum

Hi all,

We’re continuing our work to create an automation-focused curriculum built with feedback from the testing community. We’ve already run a series of activities that helped us identify a list of key tasks, which we used to create this Job Profile.

We’re now going through each task and analysing it to identify the core steps we take to achieve it. For this post we’re considering the task:

Maintain automated tests

We’ve run a series of community activities, including social questions and our curriculum review sessions, to identify the steps we need to take to successfully achieve this task, and we’ve listed them below.

  • Listen and react to situations that indicate maintenance is required. Sources include:
    • Conversations
    • Other testing activities
    • Failing automation
  • Create clear code standards, e.g. add comments explaining the test data (see the first sketch after this list)
  • Carry out maintenance
    • If the issue is test data:
      • Use existing methods, or implement new ones, to create the required data (see the data factory sketch after this list)
    • If product changes are detected:
      • Identify the cause of the change. If the product has changed correctly, update the steps and assertions of the test / add any new test cases
      • If the change is a bug in the product, manage the bug as per your team’s approach
    • If the framework has problems
      • Investigate the framework failure and change/update libraries as required
    • If due to test bleed
      • Identify the test that has left the environment in a bad state
      • Fix the issue with the test, e.g. by adding proper teardown (see the cleanup sketch after this list)
    • If due to test environment issues
      • Identify the environment issue
      • Rebuild/fix the environment
      • Maintain the test environment so it stays up to date with the latest application change set
    • If due to test flakiness
      • Identify the root cause of the flakiness:
        • If it’s the system, log an issue
        • If it’s the test, try to fix the root cause (see the waiting sketch after this list)
        • If the test can’t be fixed, weigh its value and consider deleting it
    • Run the suite to make sure no other automated tests are failing due to the changes
  • Re-evaluate all your tests to make sure they add value and delete the ones that don’t
  • Analyse the execution time to see if it’s stable or trending up (see the trend sketch after this list)
    • If trending up, identify the biggest increases and try to improve those tests
    • If trending down, give a shout out to the people who contributed to the trend
  • Address testing debt, e.g. a test written against the UI that we could now move to another layer (see the layer-move sketch after this list)
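
To make a few of these steps concrete, here are some hedged sketches, all in Python with made-up names and values. First, the code-standards point: comments that record *why* the test data looks the way it does.

```python
from dataclasses import dataclass

# Standard: every test explains why its data looks the way it does,
# so the next maintainer knows which values are load-bearing.
DISCOUNT_THRESHOLD = 100.00  # hypothetical business rule: orders over 100 get 10% off


@dataclass
class Order:
    total: float

    @property
    def discount(self) -> float:
        return 0.10 if self.total > DISCOUNT_THRESHOLD else 0.0


def test_order_just_over_threshold_gets_discount():
    # 100.01 is deliberately just above the threshold: we are testing
    # the boundary, not an arbitrary "big" number.
    order = Order(total=100.01)
    assert order.discount == 0.10
```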
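
Next, the test data step: one way to “implement new methods to achieve the required data” is a small factory that owns data creation, so each test only states the fields it cares about. `make_user` is hypothetical:

```python
import uuid


def make_user(role: str = "customer", **overrides) -> dict:
    """Hypothetical factory for the user records our tests need.

    Defaults are always valid; a test overrides only what it cares about,
    which keeps intent visible and the data logic in one place.
    """
    user = {
        "username": f"test-{uuid.uuid4().hex[:8]}",  # unique per run, avoids collisions
        "role": role,
        "active": True,
    }
    user.update(overrides)
    return user


def test_inactive_user_is_flagged():
    user = make_user(active=False)  # only the relevant field is overridden
    assert user["active"] is False  # stand-in for the real behaviour check
```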
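
For test bleed, the usual fix is making every test tidy up after itself; a pytest fixture with teardown is one way. The shared set below stands in for real shared state (database rows, files, feature flags):

```python
import pytest

_environment = set()  # stand-in for shared state the tests touch


@pytest.fixture
def temp_record():
    record = "order-123"
    _environment.add(record)      # setup: the test creates the state it needs
    yield record                  # the test body runs here
    _environment.discard(record)  # teardown runs even if the test fails,
                                  # so no state bleeds into the next test


def test_record_exists_while_in_use(temp_record):
    assert temp_record in _environment
```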
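
For flakiness, a very common test-side root cause in UI suites is a fixed sleep racing the application. Replacing it with an explicit, condition-based wait (shown here with Selenium; the URL and element id are placeholders) usually cures it:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Flaky: time.sleep(5) passes or fails depending on machine load.
# Robust: poll for the condition you actually need, up to a timeout.
element = WebDriverWait(driver, timeout=10).until(
    EC.visibility_of_element_located((By.ID, "result"))  # hypothetical element id
)
assert element.is_displayed()
driver.quit()
```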
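
For execution time, even a rough trend check over recent run durations can flag a creeping suite. This sketch uses only the standard library (`statistics.linear_regression` needs Python 3.10+) and made-up numbers:

```python
from statistics import linear_regression

# Hypothetical suite durations (seconds) for the last eight CI runs, oldest first.
durations = [312.0, 315.0, 318.0, 330.0, 329.0, 341.0, 350.0, 362.0]
runs = list(range(len(durations)))

slope, _intercept = linear_regression(runs, durations)
if slope > 0:
    print(f"Suite is slowing by ~{slope:.1f}s per run - find the biggest offenders.")
else:
    print("Execution time is stable or improving - shout out to the contributors!")
```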
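
Finally, the testing-debt point: moving a check down the stack often means swapping a slow UI journey for a direct API call. A sketch with the `requests` library and a made-up endpoint:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service


def test_new_order_appears_in_order_list():
    # Before: log in through the UI, click through to orders, scrape the page.
    # After: the same behaviour checked at the API layer - faster, less brittle.
    response = requests.get(f"{BASE_URL}/orders", params={"customer": "42"}, timeout=5)
    assert response.status_code == 200
    assert any(order["id"] == "order-123" for order in response.json())
```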

What we would like to know is: what do you think of these steps?
Have we missed anything?
Is there anything in this list that doesn’t make sense?

What do you do when an automated test fails?


Years ago I pair programmed with a developer using TDD. This led me to the following way of using TDD, or Test Driven Development, for test automation.

Basically I wrote an automated test scenario. The Refactor step of TDD urged me to look at the quality of the code. E.g. DRY, or Don’t Repeat Yourself, is a good heuristic, but it can have serious drawbacks. See my blog post series starting with:

While I was writing an automated test scenario, I would execute the other scenarios on a regular basis. Sometimes a change in one test scenario could affect other scenarios. When I completed one test scenario, I was sure that the other scenarios were still useful. Also, I would get an early warning if I accidentally introduced a bug.
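
To show the kind of DRY trade-off I mean: extract too much and a test stops saying what it checks. A hypothetical before and after:

```python
# DRY taken too far: one generic helper, so the test reveals nothing.
def run_standard_checkout_checks(user_type: str) -> None:
    ...  # hides the data, the steps and the assertions


def test_checkout():
    run_standard_checkout_checks("guest")


# DRY with restraint: shared setup extracted, intent left in the test.
def checkout_as_guest(basket_total: float) -> float:
    """Hypothetical helper: returns the total the guest is asked to pay."""
    return basket_total  # stand-in for the real checkout flow


def test_guest_pays_full_price():
    assert checkout_as_guest(basket_total=50.0) == 50.0
```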