30 Days of Automation in Testing Day 25: Share what actions you take to make your automation maintainable


(Quang Le) #1

This was a hard question for my team when we inherited an automation project implemented by the previous team. We did many things to make it stable and maintainable.

  1. Define a Definition of Done (DoD) for scripting new test cases
  2. Define a coding convention and follow it strictly: comment complex methods, naming, method structure, etc.
  3. Refactor code: a good structure is easier to maintain (there are many things in this step that make the framework and the test scripts maintainable)
  4. Weekly code reviews
  5. Cross-review code sections

There may be some more steps; they depend on the project, the framework and, in many cases, the client.


(Trung) #2

Below are some tips to make automation maintainable:

  1. Design tips to minimize maintenance
  • Decide what to test before you decide how to test it.
  • Document your test cases.
  • Keep it simple.
  • Use naming standards.
  2. Coding tips to minimize maintenance
  • Use a modular structure: keeping your test cases independent will not only make your tests easier to maintain, but will also allow you to take advantage of parallel or distributed execution.
  • Create automated tests that are resistant to UI changes.
  • Group tests by functional area.
  • Create reusable code modules.
  • Separate test steps from test data.
  • Use source control.
  3. Execution tips to minimize maintenance
  • Ensure that your test environment is stable.
  • Use setup and teardown processes.
  • Fail fast: if there is a serious issue with the application that should stop testing, identify and report that issue immediately rather than allowing the test run to continue. Set reasonable timeout values to limit the time that your test spends searching for UI elements.
  • Fail only when necessary: allow your entire test run to fail only when necessary. Stopping a test run after a single error potentially wastes time, and leaves you with no way of knowing whether the other test cases in the run would have succeeded.
  • Isolate expected failures.
  • Take screenshots to provide detailed information that will assist in troubleshooting a failed test.
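The setup/teardown and fail-fast tips can be sketched with Python's built-in unittest module. This is only an illustration; the class name and the health-check flag are hypothetical, and a real suite would replace them with a genuine environment check:

```python
import unittest


class CheckoutTests(unittest.TestCase):
    """Hypothetical test class showing setup/teardown and fail-fast."""

    @classmethod
    def setUpClass(cls):
        # One-time environment check: if the application is down,
        # fail fast instead of letting every test time out individually.
        cls.app_available = True  # stand-in for a real health check
        if not cls.app_available:
            raise unittest.SkipTest("Application unreachable - aborting run")

    def setUp(self):
        # Fresh state before each test keeps tests independent.
        self.cart = []

    def tearDown(self):
        # Clean up so a failing test cannot poison the next one.
        self.cart.clear()

    def test_add_item(self):
        self.cart.append("book")
        self.assertEqual(len(self.cart), 1)
```

Run with `python -m unittest`; because each test gets a fresh `setUp`, a failure in one case does not stop the rest of the run.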

(Pablo) #3

For maintenance:

  • Incorporated the use of test data files and a keyword-driven format
  • Declared variables in a separate file and imported them into the tests
  • Made variables descriptive
  • Made page objects reliable by using IDs or well-structured XPaths
  • Applied the Single-Responsibility Principle in tests and made functions less verbose
  • Kept UI checks to a minimum, since they increase test execution time
  • Used wait times sparingly
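The "variables in a separate file" tip might look like this in Python. A sketch only: the file name `locators.py` and the element IDs are invented for illustration:

```python
# In a real suite these constants would live in a separate file,
# e.g. locators.py, and be imported into each test. A UI change then
# means editing one file, not every test. All values are hypothetical.
LOGIN_USERNAME_ID = "username-input"   # prefer stable IDs over brittle paths
LOGIN_PASSWORD_ID = "password-input"
LOGIN_SUBMIT_XPATH = "//form[@id='login']//button[@type='submit']"


# A test module would then do:
#   from locators import LOGIN_USERNAME_ID, LOGIN_PASSWORD_ID, LOGIN_SUBMIT_XPATH
def login_locators():
    """Collect the descriptive, centrally-declared locators for a test."""
    return {
        "username": LOGIN_USERNAME_ID,
        "password": LOGIN_PASSWORD_ID,
        "submit": LOGIN_SUBMIT_XPATH,
    }
```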

(rosiecorn) #4

Hi Quang,

Thanks for the suggested tips, they are really good ones, but I just want to add one point: data files.

As everybody knows, when testing we also face issues related to out-of-date data, especially in API testing.
So how can we make test data maintainable and reduce the effort spent on flaky tests?
I would suggest thinking about an approach to handle test data effectively, so that it is easy to change automatically.

To do it:

Option 1: Inject some dynamic code to generate dynamic data files for each test case.
Option 2: Build a separate solution that handles data files for all of the test cases and runs as a first step before the automated test scripts.
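Option 1 could be sketched in Python like this. The payload fields are made up; the point is that each test case gets freshly generated data, so it never goes stale between runs:

```python
import json
import time
import uuid


def generate_test_data(case_name):
    """Generate a fresh data payload per test case so it never goes stale."""
    return {
        "case": case_name,
        # Unique values avoid collisions between runs and parallel workers.
        "email": f"user-{uuid.uuid4().hex[:8]}@example.com",
        "created_at": int(time.time()),
    }


def write_data_file(case_name, path):
    """Option 2 flavour: materialise the data file before the run starts."""
    data = generate_test_data(case_name)
    with open(path, "w") as fh:
        json.dump(data, fh)
    return data
```

A pre-run step could loop over all test cases and call `write_data_file` for each, so the scripts themselves only ever read fresh files.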


(Darya) #5

Some tips from my Team:

  1. Test code = product code (we use the same Coding Convention, Review, Refactoring)
  2. Plan automation work for each user story (US) that requires it, take labor costs into account, and formulate a DoD for such stories.
  3. Constant feedback from developers: if a method is changed, we should know about it so we can modify our tests.
  4. Rotation of team members involved in automation.
  5. Documentation

(Jaime) #6

Page Object Model for UI Automated tests. It is by far the biggest upgrade you can make to the sustainability of UI Automation testing.
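A minimal sketch of the Page Object Model idea in Python. The driver below is a stub standing in for a real Selenium WebDriver, and the page and locator names are hypothetical; the key point is that tests call one page method and locators live in one place:

```python
class FakeDriver:
    """Stand-in for a real WebDriver - records what the page object did."""

    def __init__(self):
        self.actions = []

    def find_element(self, by, value):
        self.actions.append((by, value))
        return self

    def send_keys(self, text):
        self.actions.append(("type", text))
        return self

    def click(self):
        self.actions.append(("click",))
        return self


class LoginPage:
    # Locators are declared once; a UI change means updating these
    # tuples, not every test that logs in.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return self
```

With a real driver, a test reduces to `LoginPage(driver).log_in("user", "secret")`, which is the maintainability win Jaime describes.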


(David) #7

The following is nothing new to anyone who already does automation, but I just learned about it myself.

In the Selenium course I took, the instructor’s methodology followed a test-driven development approach:

a) Write your tests
b) Test your test - make sure that it fails if some condition is not met (e.g. go in and muck with the web page you are testing during the test)
c) see what you can refactor
d) repeat as you add tests

Through this process anything that violates the Single responsibility principle or the DRY (don’t repeat yourself) principle gets refactored. This naturally fits in with the Page Object Model mentioned above, and the instructor spent a great deal of time making sure we understood object oriented concepts (in this case, in C#) before delving into writing tests.

The tests should be written clearly, so they read like English, e.g. page.fillOutFormAndSubmit(), where page is your page object and fillOutFormAndSubmit() is a method that you’ve created for that object.
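David's course used C#, but the same readable style can be sketched in Python (all page and method names here are invented). Each action returns a page object, so the test chains like a sentence:

```python
class ConfirmationPage:
    def __init__(self, submitted):
        self.submitted = submitted

    def is_submitted(self):
        return self.submitted


class FormPage:
    def __init__(self):
        self.fields = {}

    def fill_out_form(self, **values):
        self.fields.update(values)
        return self  # returning self lets the caller chain the next step

    def submit(self):
        # Navigating to a new page returns that page's object.
        return ConfirmationPage(submitted=bool(self.fields))


# The test then reads almost like English:
def test_form_submission():
    confirmation = FormPage().fill_out_form(name="Dave").submit()
    assert confirmation.is_submitted()
```

Returning page objects from actions is one common way to keep tests both readable and aligned with the Single Responsibility Principle.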

The page factory model is, I believe, another abstraction of this, and something I’d like to get into next. There is an entire book about Design Patterns in Selenium that is on my wish list. :slight_smile:

-Dave K


(Kumar) #8
  1. Every feature that is developed should be tested with a mindset on how we are going to automate it
  2. Automate tests straight after development
  3. Developers should be involved in testing, both manual and automated
  4. Implement specification by example principles.
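Specification by example means the concrete examples themselves act as the spec. A small sketch, using a made-up discount rule, shows the idea: the team agrees on a readable table of examples, and the tests simply replay it:

```python
# The examples ARE the specification. The discount rule below is
# hypothetical, invented only to illustrate the technique.
DISCOUNT_EXAMPLES = [
    # (order_total, expected_discount_percent)
    (50, 0),
    (100, 5),
    (500, 10),
]


def discount_percent(order_total):
    """Toy implementation under test (not a real business rule)."""
    if order_total >= 500:
        return 10
    if order_total >= 100:
        return 5
    return 0


def check_examples():
    """Replay every agreed example against the implementation."""
    for total, expected in DISCOUNT_EXAMPLES:
        assert discount_percent(total) == expected, (total, expected)
    return True
```

Tools like Cucumber or SpecFlow formalise this table-of-examples style, but the principle works even in plain test code.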

(AMIT) #9

The actions below help in keeping tests maintainable:

  1. Build a good Page Object Model.
  2. Keep test functions simple and reusable, and make them accessible from any test by keeping them in the page object.
  3. Use dynamic (explicit) waits rather than implicit waits, waiting on the correct behavior/property of a test object.
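A dynamic (explicit) wait polls for a specific condition instead of sleeping for a fixed time. This generic sketch mirrors what Selenium's WebDriverWait does, without the browser; the function name and defaults are my own:

```python
import time


def wait_until(condition, timeout=10.0, poll=0.2):
    """Poll a zero-argument condition until it returns a truthy value,
    or raise TimeoutError once the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s")
```

Usage with a real driver might be `wait_until(lambda: element.is_displayed())`: the test proceeds as soon as the element appears, instead of always paying a fixed sleep.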

(Heather) #10

From our friends on Twitter: