What maintenance do you have to carry out on automated checks?

Hi all,

Iā€™m seeking community thoughts on the following question for an upcoming course Iā€™m working on.

What maintenance do you have to carry out on automated checks?

Iā€™d love to hear about triggers for maintenance. What happens in your context that results in you having to maintain your checks.

I look forward to seeing what everyone shares.

4 Likes

Hi, here are a few points that come to mind for maintaining automated checks:

  1. Keep the automated checks up-to-date with changes in the application or system under test.
  2. Ensure that the testing environment is consistent and stable.
  3. Implement robust logging and reporting mechanisms in your automated checks
  4. When a test fails, promptly investigate the cause of the failure.
  5. Maintain comprehensive documentation for your automated checks. This might include configurations, dependencies, and any specific instructions.
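Points 3 and 4 tend to go together in practice. As a minimal sketch (plain Python; the check name, context keys, and `run_check` helper are illustrative, not from any particular framework), a check runner can log enough context that failure investigation starts from more than a bare red build:

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("checks")

def run_check(name, check, **context):
    """Run one automated check; log context so failures are quick to investigate."""
    try:
        check()
        log.info("PASS %s", name)
        return True
    except AssertionError as exc:
        # Record the inputs alongside the failure, so whoever investigates
        # knows what the check was working with.
        log.error("FAIL %s: %s | context=%r", name, exc, context)
        return False

def price_is_positive():
    price = 12.50  # stand-in for a value fetched from the system under test
    assert price > 0, f"expected positive price, got {price}"

run_check("price_is_positive", price_is_positive, endpoint="/api/price")
```

The logged context is what makes point 4 (prompt investigation) cheap: the report already says which inputs and environment the failing check used.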
2 Likes

Our team is actually starting with automation this week, and I am thinking about the same thing as you: what maintenance will we need to do, and what will the process be?

My first thought has been that we need to follow up on all test results each day.

We also need some process for updating the tests during development, but I am not sure what that process will look like.

2 Likes

  • Bugs - Automation is software so there will be bugs. Bug fixing is maintenance.
  • Parity - Changes to the app being tested can mean changes to the automation (changes in flows, new features, deleted features, etc.)
  • Infrastructure - New testbed OS versions, new devices and device OSs, DB upgrades, network changes, new tool versions, and security changes; any of these could cause maintenance effort.
  • Results audits - If you aren’t looking at your results, is it even worth automating? You also need to look at your “passing” results: should they be passing, should they be failing, and do your logs allow you to trust what the automation is doing?
  • Expiration dates - like food, automation has a shelf life. Automation can get stale if not maintained, but sometimes automation just doesnā€™t provide as much value as it used to. Sometimes, we have redundant automation. Sometimes, we can make our automation more efficient by combining into fewer discrete entities (thatā€™s a slippery slope, though). We need to perform this kind of audit maintenance to ensure our code base is still providing sufficient value.
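The results-audit point can be partly automated. Here is a rough sketch (stdlib only; the zero-duration heuristic is just one possible smell, and the report format is a minimal JUnit-style example) that scans an XML report for failures and for "passes" that may not be asserting anything:

```python
import xml.etree.ElementTree as ET

def audit_results(junit_xml):
    """Summarise a JUnit-style report and flag suspicious 'passes'."""
    root = ET.fromstring(junit_xml)
    failures, suspicious = [], []
    for case in root.iter("testcase"):
        name = case.get("name")
        if case.find("failure") is not None or case.find("error") is not None:
            failures.append(name)
        elif float(case.get("time", "0")) == 0.0:
            # A pass that took no time at all often means the check
            # skipped its real work or asserted nothing.
            suspicious.append(name)
    return {"failures": failures, "suspicious_passes": suspicious}

report = """<testsuite>
  <testcase name="login" time="1.2"/>
  <testcase name="checkout" time="0.0"/>
  <testcase name="search" time="0.8"><failure message="no results"/></testcase>
</testsuite>"""
print(audit_results(report))
# {'failures': ['search'], 'suspicious_passes': ['checkout']}
```

Run against every build, a small audit like this turns "look at your passing results" from a chore into a standing alarm.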
3 Likes

I agree with the previous responses, and I will add one more thing:

  • Releases of new versions of libraries, or even new versions of the automation framework we are using. For example, updating our test scripts from Selenium 3 to Selenium 4.
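The Selenium 3 → 4 move is a concrete example: the `find_element_by_*` helpers were deprecated and later removed in favour of `find_element(By.X, …)`. As a sketch of what such an update involves (a toy regex rewriter, not a complete migration tool — it ignores the plural `find_elements_by_*` forms and multi-line calls, and the migrated scripts still need `from selenium.webdriver.common.by import By`):

```python
import re

# Deprecated Selenium 3 finder -> Selenium 4 By constant.
SELENIUM3_FINDERS = {
    "find_element_by_id": "By.ID",
    "find_element_by_name": "By.NAME",
    "find_element_by_xpath": "By.XPATH",
    "find_element_by_css_selector": "By.CSS_SELECTOR",
    "find_element_by_class_name": "By.CLASS_NAME",
    "find_element_by_link_text": "By.LINK_TEXT",
    "find_element_by_tag_name": "By.TAG_NAME",
}

def migrate_line(line):
    """Rewrite Selenium 3 finder calls on one line into the Selenium 4 form."""
    for old, by in SELENIUM3_FINDERS.items():
        line = re.sub(rf"\.{old}\((.+?)\)", rf".find_element({by}, \1)", line)
    return line

print(migrate_line('driver.find_element_by_id("login")'))
# driver.find_element(By.ID, "login")
```

Even with tooling doing the mechanical part, each rewritten locator still deserves a run to confirm it finds the same element.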
3 Likes

By maintenance I mean most of the work done on this product that is not new code for new automated flows/scenarios. Some things I’ve been doing lately:

  • update the framework applications/dependencies;
  • update the pipeline as the infrastructure changes;
  • update or create new configurations as the company Firewall is updated;
  • update the scenario as the test data changes/gets destroyed/refreshes;
  • update the credentials of the test users which expire;
  • update/recreate the product states for which the scenarios have been built;
  • refactor the code for simplification, reusability, and code quality;
  • updates of the framework due to changes in the used libraries;
  • updates of the locators/selectors in the main product code and the automation framework;
  • update in the reporting system: logs, emails, dashboards - that are useful;
  • debugging new and random failures;
  • checking and analyzing failures - pinpointing issues;
  • updating the automation code to attempt to fix failures;
  • report potential main-product code issues and debug them with developers;
  • review timings and selectors that only sometimes work;
  • executing, from time to time, long repeated loops of the scenarios so that the random stuff fails sooner, then debugging/fixing it;
  • reporting to the management and the team: with tickets, emails, 1-to-1s;
  • creating or updating technical documentation of the coverage;
  • documenting links between automated checks and ā€˜test cases/business cases/use cases/scenariosā€™;
  • adding more randomizers/fakers - path to navigate, actions to do, data to select and edit, inserted data;
  • delete code of automated flows that canā€™t be deterministic enough and do not provide much value;
  • rewrite the whole automation due to massive product changes, or because the previous owner left the company;
  • change automation code as product features change, or as old bugs (for which an automation workaround was created) get fixed;
  • recreate/adapt the code to allow for: multiple user sessions, user rights/permissions, parallel execution, different environments, different platforms or versions;
  • reduce execution time - find unnecessary code and delete, optimize the use-case, reduce overlapping;
  • handover, training, presentation for other interested people;
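The "long loops of the scenarios" bullet above can be sketched in a few lines of plain Python (the scenario, run count, and seed are illustrative):

```python
import random

def soak(scenario, runs=200, seed=42):
    """Run a scenario repeatedly to surface intermittent failures early."""
    rng = random.Random(seed)  # seeded so a failing run can be replayed
    failures = []
    for i in range(runs):
        try:
            scenario(rng)
        except Exception as exc:
            failures.append((i, repr(exc)))
    return failures

def flaky_scenario(rng):
    # Stand-in for a real scenario with randomised inputs and timing.
    if rng.random() < 0.05:
        raise TimeoutError("element not clickable yet")

failures = soak(flaky_scenario, runs=200)
print(f"{len(failures)} failures in 200 runs; first few: {failures[:3]}")
```

Seeding the randomiser matters: a random failure found in the soak loop is only useful if you can replay the exact run while debugging.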
3 Likes

Automated checks gain technical debt at a rate inversely proportional to that of the system under test. As such, my efforts have focused on getting the test frameworks right enough that the test cases change (or are added to) more often than the underlying framework. That said, itā€™s still a constant battle, given the speed of development.

3 Likes

Hereā€™s what I was able to come up with:

  • External Dependency Updates: Update tests for compatibility with changes in external services or APIs.
  • Performance Optimization: Identify and optimize performance bottlenecks in tests to save time and resources.
  • Security Updates: Enhance tests to cover new security vulnerabilities and protect against breaches.
  • Framework or Tool Upgrades: Refactor tests for compatibility with newer testing tools or frameworks.
  • Test Data Management: Update how test data is managed to ensure tests remain relevant.
  • Regulatory Compliance: Update tests to comply with new or changed regulatory requirements.
  • Test Environment Changes: Adjust tests for new test environment configurations or setups.
  • Redundancy Elimination: Remove redundant tests to streamline the test suite.
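For the redundancy-elimination point, a first pass can itself be automated. A sketch using the stdlib `ast` module (it only catches structurally identical test bodies, not semantic overlap, and the `test_` naming convention is an assumption):

```python
import ast
import hashlib

def duplicate_checks(source):
    """Group test functions whose bodies are structurally identical."""
    tree = ast.parse(source)
    groups = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            # Hash the dumped AST of the body: identical structure -> same key.
            body = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            key = hashlib.sha1(body.encode()).hexdigest()
            groups.setdefault(key, []).append(node.name)
    return [names for names in groups.values() if len(names) > 1]

source = """
def test_login_ok():
    assert login("alice", "pw") is True

def test_login_works():
    assert login("alice", "pw") is True

def test_logout():
    assert logout() is None
"""
print(duplicate_checks(source))  # [['test_login_ok', 'test_login_works']]
```

Exact duplicates are the easy case; the harder redundancy (two tests exercising the same behaviour through different steps) still needs a human audit.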
2 Likes