How do you ensure your automated regression 'testing' (checking) suite is effective and relevant?

Your regression testing suite has now grown to 5000+ scenarios and runs daily.

What does that give you? Confidence? Peace of mind? Anything beyond knowing that something runs every day?

When was the last time someone reviewed your regression testing suite? Do you know what’s in your regression suite? Do you care? Is it still on track with your original goal? Is the original goal still relevant?

Comments :point_down:


Great question @testerawesome!

It highlights a side of automation that is rarely considered, and it helps open up that conversation.

I’ve seen regression suites that have over a thousand tests, and the team appears to boast about that number. Of course, it was never about the number. Tests are there to help us learn about application behaviors and their changes due to code updates.

When you pursue automation, you pick up both ends of the stick. You create the automation to assist in testing AND, just like any program, you accept responsibility for its maintenance. In my opinion, a regression suite with many, many tests probably has duplicated tests and stale tests. While you might find maintenance opportunities when tests fail, you may not discover the tests that need to be scrutinized for value and relevance.

Some methods of ensuring automated regression relevance and efficacy are frequent execution of all tests, and periodic inspection. The inspection reviews each test for information objective, relevance, and value.



How often the tests in a regression suite should be reviewed varies with the type of software being tested. For example, my last testing job involved a new web product that was being completely revamped, UI and all, from one week to the next. Our regression tests for it were constantly changing and continuously needed updating, because the “existing functionality" was also constantly changing. We also tested an older legacy version of the software that saw few changes, so the tests in its regression suite did not need to be updated nearly as often.
I believe the key is to make sure the Regression Testers are involved in and fully aware of new features being developed as well as how the defects are being fixed. This will save a lot of time because the Regression Testers will be able to spot changes that may be needed to the existing tests ahead of time. Also, anytime functional tests are run, if there were new tests created that should trickle into regression testing, the regression suite should be updated as well.
Also, the entire team, especially the BAs, SMEs, and developers, should review the regression suite regularly to offer feedback.
Having the regression tests up to date gives peace of mind that the coverage of those tests is still valid when they are run, and that the results are accurate and not based on incorrect expectations.


Good question, to which I do not have an answer: in the large regression suites I’ve worked with, this has never been solved.

I do have an area I would like to explore, and that is mutation testing, whose main purpose is to measure the quality of your tests. It mutates your code and thus introduces bugs into it; you then run your tests to see whether they manage to catch these bugs. This lets you spot weak and redundant tests.

Inspired by a colleague of mine, it would also be interesting to create a ranking system where a test gets a point every time it finds a valid bug, and loses a point every 10-20 runs. After a while you would get some information about the value of your tests.


Sounds really interesting, thank you for sharing the mutation testing. I am going to research more about it this week.

It would be great if you could share your story after you explore mutation testing.

Hello @ola.sundin!

I would explore a rating system cautiously. There are many goals in executing a regression suite. Finding a bug is one goal. In my opinion, a quantitative assessment of regression suite efficacy would need to be tempered with qualitative methods such as conversations about information objectives, purpose, changes in the application, or changes in technology.



Ensuring the automated test cases cover the regression suite to a sufficient level and remain effective is important for automation leads. We will consider the points below:

  1. If you have 5000+ automated test cases or scenarios, it’s really difficult to see what is duplicated and what is still pending automation. Quality assurance service providers usually have both manual and automation teams on a project. They can use the following approaches:

    • The functional team updates the test cases for updated features; those test cases should be passed to the automation team for refactoring of the automated scripts.
    • The automation team must remove code that is no longer relevant.
    • The dev, functional, and automation teams must stay in sync; this helps the automation team keep making the required changes to the automation suite.
    • There should be an internal audit of the automation framework to update or remove the technologies and strategies used in it.
  2. Teams must maintain the status of test cases as Manual or Automated in the test management tool so that all teams can see it. Test cases marked as Automated should not be run by the manual testing team, to save regression time.

  3. Functional test teams should create new test cases for newly introduced features and pass them to the automation team to automate.

Hope this information is helpful for you.

Your regression testing suite has now grown to 5000+ scenarios and runs daily.

The number doesn’t mean anything, to me at least.
How many are commented out, or checking that 0=0, or just clicking around without assertions, or asserting irrelevant things, or duplicated among themselves, or already duplicating unit/API checks, or flaky, or outdated, or …

But there are managers who count them as a sense of progress and coverage.
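One of those smells, checks without assertions, can even be flagged mechanically. A toy sketch using the stdlib `ast` module (the test names and source are invented for illustration; a real check would walk your test files):

```python
# Flag test functions that contain no assert statement at all.
import ast

SOURCE = '''
def test_discount():
    assert price(100) == 90

def test_just_clicks():
    open_page()
    click_button()
'''

tree = ast.parse(SOURCE)
results = {}
for node in tree.body:
    if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
        # ast.walk visits every node in the function body.
        results[node.name] = any(isinstance(n, ast.Assert) for n in ast.walk(node))

for name, has_assert in results.items():
    print(name, "OK" if has_assert else "NO ASSERTIONS")
```

This catches only the crudest case (frameworks that assert via helper methods would need extra rules), but it is a cheap first pass over a 5000-scenario suite.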

What does that give you? Confidence? Peace of mind? Anything beyond knowing that something runs every day?
Maybe this should be addressed to the managers paying for it, and to those managers that use it as a guide for releases. I am constantly challenging what’s automated and how: what’s the value of doing that compared to building or updating one new feature each week? Or even building a completely new product that can get the company another source of revenue? It always depends on the risks each application carries, the context, and the motivators behind the actions.

When was the last time someone reviewed your regression testing suite? Do you know what’s in your regression suite? Do you care? Is it still on track with your original goal? Is the original goal still relevant?
I’ve been on about 7 automation projects. Not one of them was ever reviewed.
I do not know. I do not care - as I don’t own this project.
There usually isn’t an original goal/mission.


Code coverage metrics are your goal, not the number of cases or the number of environments, because new environments always exist, but 80% of your customers only run 20% of your code on one environment. Verify that one environment and that 20% of the code.
We do, however, get there using a shotgun initially, and a huge net, because that works as a heuristic in its context.

Thank you for sharing, Conrad. So how do you measure code coverage with non-unit tests?

That depends on your coverage tooling. I’m talking about compiled languages, and I use opencover/Bulldog/ccover, which run a binary with either instrumentation or symbols and generate a log of all lines touched. All one does is take any E2E test and run it with coverage logging enabled, then stop the tool and export the metrics, which normally come as an HTML page as well. This usually means installing the app, copying all the debug binaries with symbols over the installed ones, and fixing any security problems this creates… so yes, it’s never easy to get live coverage data, and it creates many, many problems, but that’s why we do this work on a computer: it’s great at problems. You’ll want to scan Stack Overflow for hints.
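The underlying idea, run a driver and record which lines it touches, can be shown in a language-agnostic toy. This Python sketch uses a stdlib trace hook instead of a real coverage tool, and the "product" functions are invented for illustration:

```python
# Toy line coverage: record which functions/lines an E2E-style run touches.
import sys

# "Product code": two features, only one exercised by the driver below.
def checkout(total):
    if total > 100:
        return total * 0.9   # discount branch
    return total

def refund(total):
    return -total            # never touched by this E2E run

covered = set()

def tracer(frame, event, arg):
    if event == "line":
        covered.add((frame.f_code.co_name, frame.f_lineno))
    return tracer

# "E2E test": drive checkout the way most customers would.
sys.settrace(tracer)
checkout(50)
checkout(200)
sys.settrace(None)

touched = {name for name, _ in covered}
print("functions touched:", sorted(touched))
# refund() never ran, so real coverage tooling would flag it as dead weight.
```

Real tools (coverage.py for Python, or the instrumented-binary tools mentioned above) do the same bookkeeping at scale and render it as the HTML reports described here.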

Yes @testerawesome, I am glossing over the mechanics of code coverage instrumentation. It’s worth a separate topic, but when it comes to worthwhile automation, the most valuable automation is a script that more than one team, or more than one situation, can benefit from. I have even seen test code used to test product deployment find its way into the production release media, because it simplified complicated deployments for real-world customers as well as for us testers. When customers actually bug-fix your test code (obviously they did not know it was originally test code), it totally levels you up.


If you build it as an oracle for the construction of the product itself, then by definition you will know it serves the purpose of covering the product’s behavior; no less, no more.


Thank you for sharing your thoughts


Thank you for sharing your view