Our team is actually starting up with automation this week, and I have been thinking about the same things as you: what maintenance will we need to do, and what will the process look like?
My first thought has been that we need to follow up on all test results each day.
We also need some process for updating the tests during development, but I am not sure what that process will look like.
Bugs - Automation is software, so there will be bugs. Bug fixing is maintenance.
Parity - Changes to the app being tested can mean changes to the automation (changes in flows, new features, deleted features, etc.)
Infrastructure - New testbed OS versions, new devices and device OSs, DB upgrades, network changes, new tool versions, and security changes; any of these could cause maintenance effort.
Expiration dates - Like food, automation has a shelf life. Automation can get stale if not maintained, but sometimes automation just doesn't provide as much value as it used to. Sometimes we have redundant automation. Sometimes we can make our automation more efficient by combining it into fewer discrete entities (that's a slippery slope, though). We need to perform this kind of audit maintenance to ensure our code base is still providing sufficient value.
I agree with the previous responses, and I will add one more thing:
Releases of new versions of libraries, or even new versions of the automation framework we are using. For example, updating our test scripts from Selenium 3 to Selenium 4.
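To make that migration concrete: Selenium 4 deprecated (and later removed) the `find_element_by_*` helpers in favor of `find_element(By..., ...)`. A minimal before/after sketch in Python; the URL and element names below are made up for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page

# Selenium 3 style (removed in Selenium 4):
#     driver.find_element_by_id("username").send_keys("alice")

# Selenium 4 style: pass a By strategy instead.
driver.find_element(By.ID, "username").send_keys("alice")
driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

driver.quit()
```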
By maintenance, I mean most of the work done on this product that is not new code for new automated flows/scenarios. Some things I've been doing lately:
update the framework applications/dependencies;
update the pipeline as the infrastructure changes;
update or create new configurations as the company firewall is updated;
update scenarios as the test data changes, gets destroyed, or is refreshed;
update test-user credentials when they expire;
update/recreate the product states for which the scenarios have been built;
refactor the code for simplification, reusability, and code quality;
updates to the framework due to changes in the libraries it uses;
updates of the locators/selectors in the main product code and the automation framework;
updates to the reporting system - logs, emails, dashboards - so that it stays useful;
debugging new and random failures;
checking and analyzing failures - pinpointing issues;
updating the automation code to attempt to fix failures;
report potential main product code issues and debug them with developers;
review timings and selectors that only sometimes work (see the explicit-wait sketch after this list);
execute, from time to time, long repeated loops of the scenarios so the random failures surface sooner, then debug/fix them;
reporting to management and the team: via tickets, emails, 1-to-1s;
creating or updating technical documentation of the coverage;
documenting links between automated checks and "test cases/business cases/use cases/scenarios";
adding more randomizers/fakers - paths to navigate, actions to take, data to select and edit, data to insert (see the Faker sketch after this list);
delete the code of automated flows that can't be made deterministic enough and do not provide much value;
rewrite the whole automation due to massive product changes, or because the previous owner left the company;
change automation code as product features change, or as old bugs (for which an automation workaround was created) get fixed;
recreate/adapt the code to allow for: multiple user sessions, user rights/permissions, parallel execution, different environments, different platforms or versions (see the pytest sketch after this list);
reduce execution time - find and delete unnecessary code, optimize the use case, reduce overlap;
handover, training, and presentations for other interested people.
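On the "timings and selectors that only sometimes work" item: one common fix is replacing fixed sleeps with explicit waits. A minimal sketch using Selenium's WebDriverWait; the page and locator are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # hypothetical page

# Instead of time.sleep(5) and hoping, poll for up to 10 seconds
# until the element is actually clickable, then fail loudly if not.
submit = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit"))  # hypothetical locator
)
submit.click()
```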
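For the randomizers/fakers item, the `faker` library is one option in Python. A small sketch; the record fields are invented for illustration:

```python
from faker import Faker

fake = Faker()
# Faker.seed(1234)  # uncomment to reproduce a failing run exactly

# Fresh, realistic-looking data on every run, so the checks do not
# silently depend on one hard-coded record.
user = {
    "name": fake.name(),
    "email": fake.email(),
    "address": fake.address(),
}
```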
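And for running against different environments, and in parallel, pytest makes this reasonably cheap: a command-line option picks the environment, and the pytest-xdist plugin (if installed) supplies the workers. The environment names and URLs below are assumptions:

```python
# conftest.py
import pytest

ENVIRONMENTS = {  # hypothetical environment map
    "dev": "https://dev.example.com",
    "staging": "https://staging.example.com",
}

def pytest_addoption(parser):
    parser.addoption("--env", default="dev", choices=list(ENVIRONMENTS))

@pytest.fixture
def base_url(request):
    # Any test that accepts base_url runs against the chosen environment.
    return ENVIRONMENTS[request.config.getoption("--env")]
```

Running `pytest --env staging -n 4` would then execute the suite against staging on four workers (the `-n` flag comes from pytest-xdist).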
Automated checks gain technical debt at a rate inversely proportional to that of the system under test. As such, my efforts have focused on getting the test frameworks right enough that the test cases change (or are added to) more often than the underlying framework. That said, it's still a constant battle, given the speed of development.
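One concrete way to keep the framework "right enough": hide locators and flow details behind a thin page-object layer, so product churn lands in the framework classes while the test cases keep reading at the level of intent. A minimal sketch, with illustrative class and locator names:

```python
from selenium.webdriver.common.by import By

class LoginPage:
    """All knowledge of the login screen lives here; when the product
    changes, this class changes, not the test cases that use it."""

    USERNAME = (By.ID, "username")  # hypothetical locators
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# A test case then reads as:
#     LoginPage(driver).login("alice", "s3cret")
```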