Seeking Insights on Maintaining Tests

I would greatly appreciate your input on the following questions:

  • What sort of testing are you maintaining?
  • What are the challenges in maintaining testing?
  • What testing debt are you carrying?

Looking forward to hearing your experiences and insights!


@mwinteringham If you are talking about automated test maintenance, you should check out testRigor, since this company solely focuses on fixing test maintenance problems. Disclaimer: I am associated with testRigor.


I don’t know if I would call the things I maintain “maintaining testing”, but here is a go:

Small team (1 SDET + 1/2 manager) - maintains API / UI E2E automation
8 developers across 2 teams - maintain unit tests
Devs rely on our E2E automation and are able to deploy to production without going through the Quality team, though we are typically involved in risky changes.

What I do maintain:

  • Documentation on how to test things that aren’t straightforward to test (e.g. how to create test data for a specific test using a tool that we built, or a link to helpful SQL queries that join tables to quickly validate certain data)
  • Automation Framework - ensuring we are keeping libraries and dependencies up to date
  • Automation Checks/Tests - When tests fail, we investigate/explore and either open a bug, determine our automation needs to be updated due to an intended change (maintain it), determine the test is flaky (i.e. it needs to be refactored/maintained), or discover an infrastructure issue.
  • Our team doesn’t rely on test cases so that’s a big thing we don’t have to maintain.

What are the challenges:

  • We don’t have any challenges I can think of with maintaining our testing.
  • When we were first building out our automation and we were doing similar things 3 different ways, maintaining that was a challenge, but we’ve been able to refactor and make things consistent.

Testing Debt:

  • 6 months ago it was different patterns we were following within our automated tests.
  • 3 months ago it was not being able to quickly access a test report (beyond seeing pass/fail). Our old process was to download a zipped artifact (~10 MB) from the test run, unzip it, start an HTTP server from that directory, and then visit the URL. Instead, we implemented a way to upload the artifacts to an S3 bucket and add the link to the Slack notification for our test run.
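For anyone curious what that kind of flow can look like, here is a minimal sketch of the report-link side of it. The bucket name, region, run IDs, and helper names below are all hypothetical placeholders, not the poster’s actual implementation; it only shows the idea of building the hosted-report URL and the Slack message that carries it.

```python
# Minimal sketch (hypothetical names): build the S3 URL for a test run's
# HTML report and compose a Slack webhook payload that links to it.
import json
import urllib.request

BUCKET = "example-test-reports"  # hypothetical bucket name
REGION = "us-east-1"             # hypothetical region


def report_url(run_id: str, filename: str = "index.html") -> str:
    """Build the URL of a test run's report object in the S3 bucket."""
    return f"https://{BUCKET}.s3.{REGION}.amazonaws.com/{run_id}/{filename}"


def slack_payload(run_id: str, passed: bool) -> dict:
    """Compose the Slack message body posted after a test run finishes."""
    status = "passed" if passed else "failed"
    return {
        "text": f"Test run {run_id} {status}: <{report_url(run_id)}|view report>"
    }


def notify(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget notification
```

The artifact upload itself would typically happen in the CI job (for example with `aws s3 cp` or boto3) before the notification is sent; the win is that the Slack message links straight to a hosted report instead of a downloadable zip.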

Reflecting on my team and processes I am proud of where we are.

  • What sort of testing are you maintaining?
    In my last job, related to testing, I used to maintain for myself:
  • some scripts I’d written, which needed updating when the interfaces I was using changed;
  • thousands of accesses, credentials, guides to using dozens of tools, and links to multiple platforms (I was the first one a few teams came to when they needed something);
  • copies of production standard API requests, default basic API requests;
  • a local dev VM with a connection to >20 code repos and products available to run locally at any time;
  • guides and specifications - obtained from emails, meetings, or random docs - about the business logic of hundreds of features;
  • notes of things that I’d like to do and also those that I’ve done the previous day;
  • test data/test beds for dozens of features, services, and products;
  • What are the challenges in maintaining testing?
  • the more I knew and learned, the harder it got to organize the knowledge and sometimes to remember where it was: some was in dev, company, or project wikis, some in local text files, Evernote, Teams Project/Jira, Word or Excel files, Postman, SoapUI, browser extensions, code repos, and so on;
  • What testing debt are you carrying?
  • I used to have a backlog of things to do; some disappeared as priorities changed, some were done by other people, and some were reshuffled as the days passed. I used to discuss it almost weekly with a senior engineer or a product manager to get their opinion on some things, for more support or self-assurance that it was not all that bad.
  • I also carried worry about the higher-risk places in the product that I didn’t get to test in depth at the time of a release; I usually spent the days immediately after a release adding testing on top, helping myself with some production data. Sometimes big bugs were found, other times nothing.

Thanks everyone for sharing. I’ve added your thoughts to our next Task analysis session and I’ll share what we learnt very soon.