🔨 What maintenance is required for automation frameworks?

Hello everyone,

I continue on my course writing journey and I was hoping to hear your thoughts and experiences on the following question:

What maintenance is required for automation frameworks?

I remember when I initially started researching the automation curriculum, I chatted with @pmichielsen about maintenance. Our perspectives were quite different. Maintenance for me was all about updates and fixes, whereas Peet expanded that to include enhancements and features.

What are your thoughts? What does maintenance mean to you and what do you normally have to do?


Hello Mark,
In my opinion, maintenance is the biggest cost when creating a new set of automation tests. Maintenance costs for end-to-end tests include updating the framework (often with new versions of libraries), rewriting or refactoring parts that no longer make sense as the suite grows, and analysing tests that fail for no apparent reason (mostly environmental ones).


I’d agree with @thanos_tzois - maintenance, whether purely fixes and updates or including enhancements, is going to be a bigger time-sink than creating the tests in the first place.

In no particular order I’d include:

  • Constant/near-constant refactoring. By this I mean that as the framework grows and expands, the team will identify problems that need to be fixed - everything from “Why do we have a utilities class that’s got just about everything and the kitchen sink in it? I can never find anything in there!” to “That’s the third set of almost-identical boilerplate code I’ve created this week. I think it’s time to extract it into a separate routine and pass it the parameters it needs.”
  • Continual expansion. When you start an automation effort, you’ll have a fairly small number of tests and most likely a relatively simple code base. But you’ll likely want to expand the application features covered by the automation (which could mean the API calls or it could mean the UI areas) which means adding in new classes and objects to handle the new functionality, new locators for UI work, and probably a fair chunk of refactoring as well.
  • Updates to handle changes to the application in test. Code changes. Different, better performing components get switched in to replace older, obsolete ones; requests from users lead to changes in the flow of the software through a module you’ve automated… The only software that doesn’t cause updates to automation is software that’s not being used.
  • Bug fixes. We write just as many bugs as any developer, or possibly more. When we find them in our automation, we need to fix them. It could be that the logic we wrote doesn’t quite match what the application in test is doing, or that we’re not properly accounting for network latency that causes pages to render more slowly than we expected. In the context of an automation framework, those are bugs.
  • Support for new features. In the automation world, that means adding automation coverage to newly coded parts of the application in test, just as soon as the interface we’re testing is stable enough that we wouldn’t be constantly rewriting the automation.
  • Re-architecting the automation. This thankfully doesn’t happen often, but older, continuously used regression automation can accumulate a lot of kludges over the years. The effort and adjustment are a lot larger than a simple refactor, and can involve things like adding an entirely new abstraction layer to the framework (been there, done that, got the scars to prove it), but it can be necessary.
  • Updates to handle changes to the automation tooling. You’ve just upgraded your versions of whatever, and it breaks some of the automation. That means it’s time to find each broken spot and change the automation so that everything works again. It might not even be labeled as a breaking change, because it’s not all that uncommon for someone to depend on a quirk of the tooling which is later corrected (this is why my lead at the time would always say “Undocumented features are bugs. Never depend on them.”)
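The “third set of almost-identical boilerplate” situation from the refactoring point above can be sketched in a few lines. All the names here (`fill_and_submit`, the form dicts, the field names) are hypothetical stand-ins, not anything from a real framework:

```python
# Minimal sketch: extracting near-identical boilerplate into one shared helper.
# Every name below is hypothetical, purely to illustrate the refactor.

def fill_and_submit(form, values):
    """Shared helper: fill any form-like object and mark it submitted."""
    for field, value in values.items():
        form[field] = value
    form["submitted"] = True
    return form

# Before the refactor, each test repeated the fill-and-submit loop inline.
# After, each test just passes the parameters it needs:
login_form = fill_and_submit({}, {"user": "alice", "password": "s3cret"})
order_form = fill_and_submit({}, {"item": "widget", "qty": "2"})
```

The payoff is exactly what the post describes: when the submission behaviour changes, there is one routine to update instead of a dozen copies scattered across tests.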

I guess I look at maintenance a bit like a developer - once version 1.0 of the automation is live, anything else you do with that code is maintenance.


I have been maintaining a code base that I created last year and, in my opinion, it is not easy; thank God I used comments and naming conventions. That helped a lot, but again, what @thanos_tzois said is so true. I have revisited and changed a lot of the tests and their execution strategy over time.


This is what I was able to come up with:

  • Compliance/Security Updates: Ensuring the framework adheres to new security protocols and compliance requirements to prevent vulnerabilities and legal issues.
  • Integration with New Tools and Systems: As the tech stack of the project evolves, the framework must be updated to integrate seamlessly with new tools and systems.
  • Performance: Tweaking the framework to reduce execution time and resource consumption.
  • UX Improvements: Enhancing the UX and accessibility (a11y) of the framework for its users, typically the test engineers, to improve productivity and ease of use.
  • Technical Debt Repayment: Addressing accumulated technical debt, e.g. refactoring legacy code or removing deprecated features.
  • Testing Strategy Changes: Modifying the framework to align with shifts in the testing strategy or focus, such as a move towards more API testing versus UI testing.
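On the performance point above, one common lever is running independent tests concurrently so wall-clock time stops scaling linearly with suite size. Here is a minimal, hypothetical sketch using only the standard library (a real suite would normally lean on a runner plugin such as pytest-xdist instead of hand-rolled threading, and the test functions below are stand-ins):

```python
# Sketch: cutting wall-clock time by running independent, I/O-bound tests
# in parallel. test_login/test_checkout are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor
import time

def test_login():
    time.sleep(0.2)   # stand-in for real test work (mostly I/O waits)
    return ("test_login", "pass")

def test_checkout():
    time.sleep(0.2)
    return ("test_checkout", "pass")

def run_suite(tests, workers=2):
    """Run independent tests concurrently and collect name -> result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda t: t(), tests))

start = time.monotonic()
results = run_suite([test_login, test_checkout])
elapsed = time.monotonic() - start
# Two 0.2 s tests finish in roughly 0.2 s total instead of 0.4 s.
```

This only pays off when the tests really are independent, which is itself a maintenance cost: shared state and ordering assumptions have to be hunted down first.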

  • Refactoring: We might want to improve and reuse code by making it more flexible so that it fulfills our needs.
  • Bugs: Most of them have to do with flakiness in UI automation testing.
  • Integration with third-party tools: Integration with a CI/CD tool, test case management tool, cloud device farm, etc.
  • Libraries: Keeping our libraries up to date to get access to new features and bug fixes.
  • Changes to the system under test: New features might impact existing flows. Existing test scripts might need to be updated based on the new behavior.