This was a hard question for my team when we inherited an automation project implemented by the previous team. We did several things to make it stable and maintainable:
Define a Definition of Done (DoD) for scripting new test cases
Define a coding convention and follow it strictly: comment complex methods, agree on naming for methods, variables, etc.
Refactor code: a good structure is easier to maintain (there are many things in this step that keep the framework and the test scripts maintainable)
Weekly code review
Cross-review code sections
There will be more steps; they depend on the project, the framework, and in many cases the client
Below are some tips to make automation maintainable
Design tips to minimize maintenance
Decide what to test before you decide how to test it.
Document your test cases.
Keep it simple.
Use naming standards.
Coding tips to minimize maintenance
Use a modular structure: Keeping your test cases independent will not only make your tests easier to maintain, but will also allow you to take advantage of parallel or distributed execution.
Create automated tests that are resistant to UI changes.
Group tests by functional area.
Create reusable code modules.
Separate test steps from test data.
Use source control.
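The tips above can be sketched in code. The following is a minimal, hypothetical example of a modular structure: a reusable page module that holds the steps, with the test data kept separately so either can change without touching the other. `LoginPage`, the locators, and the driver interface are all assumptions for illustration; a real implementation would wrap Selenium's WebDriver.

```python
# Sketch of a modular, reusable structure with test steps separated
# from test data. All names and locators here are hypothetical.

class LoginPage:
    """Reusable page module: knows *how* to log in, not *which* user."""
    USERNAME_FIELD = "#username"   # assumed CSS locators
    PASSWORD_FIELD = "#password"
    SUBMIT_BUTTON = "#submit"

    def __init__(self, driver):
        self.driver = driver       # any object exposing type_into/click

    def login(self, username, password):
        self.driver.type_into(self.USERNAME_FIELD, username)
        self.driver.type_into(self.PASSWORD_FIELD, password)
        self.driver.click(self.SUBMIT_BUTTON)

# Test data lives apart from the steps, so updating credentials
# never touches the page logic.
TEST_USERS = [
    {"username": "alice", "password": "secret1"},
    {"username": "bob",   "password": "secret2"},
]

class FakeDriver:
    """Stand-in for a real WebDriver so the sketch is self-contained."""
    def __init__(self):
        self.actions = []
    def type_into(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

# Each test case is independent: a fresh driver per user makes
# parallel or distributed execution straightforward.
for user in TEST_USERS:
    driver = FakeDriver()
    LoginPage(driver).login(user["username"], user["password"])
```

Because every iteration builds its own driver and page object, no test depends on state left behind by another, which is exactly what makes parallel execution possible.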
Execution tips to minimize maintenance
Ensure that your test environment is stable.
Use setup and teardown processes.
Fail fast: If there is a serious issue with the application that should stop testing, identify and report that issue immediately rather than allowing the test run to continue. Set reasonable timeout values to limit the time that your test spends searching for UI elements.
Fail only when necessary: Allow your entire test run to fail only when necessary. Stopping a test run after a single error potentially wastes time, and leaves you with no way of knowing whether the other test cases in the run would have succeeded.
Isolate expected failures.
Take screenshots to provide detailed information that will assist in troubleshooting a failed test.
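The execution tips above can be combined in one small sketch. This hypothetical `unittest` suite shows setup and teardown around every test, and a try/except that captures a screenshot only when a test fails; the suite name, the session dict, and `capture_screenshot` are illustrative stand-ins for a real driver and environment.

```python
import io
import unittest

class CheckoutTests(unittest.TestCase):
    """Hypothetical suite illustrating setup/teardown and evidence capture."""

    screenshots = []  # filenames of captured failure screenshots

    def setUp(self):
        # Bring the environment to a known state before every test.
        # With a real Selenium driver, this is also where you would set
        # timeouts to fail fast, e.g. driver.set_page_load_timeout(30).
        self.session = {"cart": []}

    def tearDown(self):
        # Clean up even when the test fails, so later tests start fresh.
        self.session.clear()

    def capture_screenshot(self, name):
        # Placeholder: a real driver would call driver.save_screenshot(...)
        self.screenshots.append(f"{name}.png")

    def test_add_item_to_cart(self):
        try:
            self.session["cart"].append("book")
            self.assertEqual(self.session["cart"], ["book"])
        except AssertionError:
            # Keep evidence for troubleshooting, then let the failure
            # propagate so only this test fails, not the whole run.
            self.capture_screenshot("test_add_item_to_cart")
            raise

# Run the suite programmatically so the sketch is self-contained.
result = unittest.TextTestRunner(stream=io.StringIO()).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests)
)
```

Because the failure is re-raised after the screenshot, the runner records that one test as failed and continues with the rest, which matches the "fail only when necessary" advice.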
Thanks for the suggested tips, they are really good ones, but I just want to add one point: data files.
As everybody knows, when testing we also face issues related to out-of-date data, especially in API testing.
So how can we make test data maintainable and reduce the effort spent on flaky tests?
I would suggest thinking about an approach to handle test data effectively, so that it is easy to change automatically.
To do it:
Option 1: Inject dynamic code that generates a fresh data file for each test case
Option 2: Build a separate solution that handles data files for all of the test cases and runs as the first step, before the automated test scripts
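Option 1 could look something like the sketch below: a small generator that produces a fresh payload per run, so API tests never depend on stale records. The field names and the `user_data.json` path are assumptions for illustration only.

```python
import json
import random
import string
import time

def generate_user_payload():
    """Generate fresh, unique test data per run (hypothetical fields)."""
    suffix = "".join(random.choices(string.ascii_lowercase, k=6))
    return {
        "username": f"user_{suffix}",
        "email": f"user_{suffix}@example.test",
        "created_at": int(time.time()),
    }

def write_data_file(path, payload):
    # Each test case gets its own freshly generated data file,
    # so no test reuses out-of-date records.
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)

payload = generate_user_payload()
write_data_file("user_data.json", payload)
```

For Option 2, the same generator could be called in a loop over all test cases as a standalone pre-run step, writing one file per case before the test scripts start.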
The following is nothing new to anyone who already does automation, but I just learned about it myself.
In the Selenium course I took, the instructor’s methodology used a Test-Driven Development approach:
a) Write your tests
b) Test your test - make sure that it fails if some condition is not met (e.g. go in and muck with the web page you are testing during the test)
c) see what you can refactor
d) repeat as you add tests
Through this process anything that violates the Single responsibility principle or the DRY (don’t repeat yourself) principle gets refactored. This naturally fits in with the Page Object Model mentioned above, and the instructor spent a great deal of time making sure we understood object oriented concepts (in this case, in C#) before delving into writing tests.
The tests should be written clearly, so they read like English, e.g. page.fillOutFormAndSubmit(), where page is your page object and fillOutFormAndSubmit() is a method you’ve created for that object.
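A minimal sketch of that idea, in Python rather than the course's C#: the page object hides the mechanics, so the test body is a single sentence-like call. `RegistrationPage`, the method name, and the list standing in for a driver are all hypothetical.

```python
class RegistrationPage:
    """Hypothetical page object whose methods read like English."""

    def __init__(self, driver):
        self.driver = driver
        self.submitted = False

    def fill_out_form_and_submit(self, name, email):
        # With Selenium this would locate fields and call send_keys/click;
        # here we record the intent in a list so the sketch is runnable.
        self.driver.append(("fill", name, email))
        self.driver.append(("submit",))
        self.submitted = True

# The test itself stays a one-liner that reads like a sentence:
driver_log = []
page = RegistrationPage(driver_log)
page.fill_out_form_and_submit("Alice", "alice@example.test")
```

All the locator details live in the page object, so when the UI changes, only that one class is refactored and every test that calls it keeps reading the same way.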
The page factory model is, I believe, another abstraction of this, and something I’d like to get into next. There is an entire book about Design Patterns in Selenium that is on my wish list.