Automated Test Run Timing

Hi All,

I was wondering, what's an acceptable run time for an automated regression test suite? Today our automated regression suite takes about 10 hours to run 120 test cases, and we are looking to improve this timing. However, I'm after a benchmark timing we can aim for. BTW, we are using Selenium to automate.

Thank you!


There’s no absolute benchmark or industry standard. The timing threshold should be based on what your organization needs.

If you’re doing (or aiming for) CI/CD and expecting to be able to release/deploy frequently, then you need to get your tests to run in a window that’s not going to gate releases.

If you’re just running tests overnight, and are fine kicking the tests off after the last commit for the day and seeing the results in the morning, then you likely shouldn’t spend a ton of time working on speeding up your regression suite.


I would not trust something that runs for so long. If you get one error in your CI, like a failure in the build, you lose half a day.

Can you make those tests run in parallel? Any wait statements in there that you should remove?

We have a sanity test suite that runs in about an hour, the logic behind that was that you commit your changes, go for a lunch and by the time you’re back you know if you broke anything.
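On the parallelisation point: running independent tests concurrently can cut wall-clock time roughly in proportion to the worker count. A real Selenium suite would typically use a runner such as pytest-xdist or a Selenium Grid for this; below is a minimal stand-alone sketch of the idea using Python's `concurrent.futures`, with sleeps standing in for browser work (the test names and timings are illustrative only).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    # Stand-in for an independent Selenium test: the sleep simulates
    # browser work. For this to be safe, real tests must not share
    # state (accounts, test data, fixtures) with each other.
    time.sleep(0.2)
    return (name, "passed")

tests = [f"test_{i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_test, tests))
elapsed = time.perf_counter() - start

# 8 tests at 0.2s each: ~1.6s sequentially, ~0.4s with 4 workers.
print(f"{len(results)} tests in {elapsed:.1f}s with 4 workers")
```

The catch, as ever, is test independence: two tests that mutate the same account or record will flake when run side by side.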


If I was brought in to help a team who were using this test pack, I'd be very worried by that time frame. Testing is about information, and 10 hours is a long time to wait for it. Generally, the quicker the tests are, the more often they can be run. An ideal is running them on every commit (not just to master).

What stands out for me is not the total time though. Rather, the average time for a single test. An average of 5 minutes is a very long time for one test to run.

Things to look for:

  1. Is the test ‘clean’ enough? Is it only testing one thing or are there multiple asserts throughout the test?
  2. Do you need to use Selenium for the whole test? Can you perform the set up (log in, set up data, get to the right screen etc.) using the APIs rather than mouse clicks?
  3. Can the test be run quicker? Are there a lot of waits in your test that can be removed or made more intelligent?
  4. Is the test needed? Regular pruning of tests is important to keep the test pack relevant. If the test no longer provides value, remove it.
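A practical first step for the pruning and speed questions above is to sort tests by duration and review the slowest ones first. A small sketch, assuming you can pull per-test durations out of your runner's report (the test names and numbers here are made up):

```python
# Hypothetical per-test durations (in seconds) taken from a run report.
durations = {
    "test_full_order_flow": 680,
    "test_login": 45,
    "test_report_export": 510,
    "test_search": 60,
}

# Average time per test: with 120 tests in 10 hours, this is the
# 5-minutes-per-test figure mentioned above.
average = sum(durations.values()) / len(durations)

# Slowest tests first: these are the review candidates.
slowest = sorted(durations.items(), key=lambda kv: kv[1], reverse=True)

print(f"average: {average:.0f}s")
for name, secs in slowest[:2]:
    print(f"review candidate: {name} ({secs}s)")
```

A handful of outliers often accounts for a large share of the total run time, so this tends to pay off quickly.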

For the test pack as a whole, I’d look at whether it’s possible to run them in parallel. Also whether the test server itself would run faster with a better CPU, more memory etc.


The running time depends on the complexity of the test cases and of the app itself, so the regression run time differs from project to project. For example, on my current project it takes a couple of days, as the project is very complex and there is also hardware automation integration.

As @billkav said, you can split them and run them in parallel. This decreases the running time.


I agree, and I'm also worried. We are reviewing our scripts and looking for areas where we can improve this timing.

Hi @fabio, we have a number of suites that run concurrently, and while we don't have any static rules on their timings, we do review any that run over 30 minutes. Most are under 10 minutes, with a couple between 20 and 25. Not sure if this information will help you, but good luck improving yours!


At my company it can take the best part of 10 hours to run the “core” tests.

As said above, it does depend on how complex your system is. Ours has a lot of JavaScript, so we have to use waits, otherwise the whole thing falls over even when there isn't a true bug in the source code. There isn't much I have found we can do about it - I could be wrong though.

The project was originally developed by an outsourcer who used “time.sleep()” in so many places. Maybe you have something similar that could be refactored?
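Those fixed `time.sleep()` calls can usually be replaced by a polling wait that returns as soon as the condition holds, so a test only waits as long as it actually needs to. In Selenium the idiomatic tool for this is `WebDriverWait` with `expected_conditions`; here is a stand-alone sketch of the same idea in plain Python (the 0.3-second "element" is simulated, just to make the example runnable):

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until it returns truthy, instead of sleeping
    for a fixed interval. Returns as soon as the condition holds;
    raises on timeout so a genuine failure is still reported."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulate an element that becomes 'ready' after roughly 0.3 seconds;
# the wait ends then, not after some worst-case fixed sleep.
ready_at = time.monotonic() + 0.3
wait_until(lambda: time.monotonic() >= ready_at)
```

The win is that a fixed sleep has to be sized for the worst case on the slowest machine, while a poll costs only as long as the page actually takes.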


For those with 10-hour-plus test suite times: have you considered a risk-based MoSCoW assessment?

Break up your tests into those that absolutely must be run, those that should be run, and those that could be run. You should be able to identify a set of core risk tests that take a much shorter time, for quicker feedback. If you have lots of time, run them all; if not, make a call on the shoulds.

Just a thought. I appreciate ‘separating’ the existing tests into multiple suites may have complexities all their own.
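As a sketch of what that split could look like in code: assume each test carries a MoSCoW priority tag, and the runner selects by tag (all test names here are hypothetical; a real pytest suite would do this with markers and `-m` filtering rather than a dict).

```python
# Hypothetical suite metadata: each test tagged with a MoSCoW priority.
SUITE = {
    "test_login":        "must",
    "test_checkout":     "must",
    "test_search":       "should",
    "test_profile_edit": "should",
    "test_theme_switch": "could",
}

def select_tests(suite, priorities):
    """Return the tests whose tag is in the accepted set of priorities."""
    return [name for name, tag in suite.items() if tag in priorities]

# Quick-feedback run on every commit vs. the full overnight run.
smoke = select_tests(SUITE, {"must"})
nightly = select_tests(SUITE, {"must", "should", "could"})
print(smoke)
```

The same tagging also makes the pruning conversation easier: anything that never earns more than a "could" is a candidate for removal.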


I’m echoing what others have said already - there’s no benchmark number for how long a regression test suite should take, as long as it provides value to the team. A good place to start is asking yourself a few questions about your GUI test suite:

  • If it was faster, would it be run on a more regular basis?
  • Does it provide confidence in the application?
  • Does it spot any issues?
  • If a check fails, how and when does the team respond to this failure?
  • Do you also have unit and integration tests?

Currently we have two CI pipelines (one for desktop, one for mobile). Each has a platform specific suite of GUI tests (along with a greater number of unit and integration) which run as a stage in the pipeline. These GUI tests currently take around 6 minutes on average to run. Each time someone commits code, these tests run.


Hi @fabio

Whilst I agree there is no specific target to aim for (more on that in a minute), 10 hours for 120 tests is a very long time! Automated regression testing is about fast feedback, enabling the team to react quickly to changes in the application. If you are waiting a day and a half (in working hours) for that feedback, those changes may have already moved on or morphed as dev work continues. So you definitely want to get that run time down.

That said, the focus should always be on good feedback from automation, not on speed. So you have a few options:

  1. You mention you are using Selenium, so I assume your tests are focused on the UI. Go through your tests and ask the question: is this Testing the UI or Testing Through the UI (TuTTu)? If the test is focused on risks and behaviour that the backend is responsible for, like storing data or processing values, then turn that test into an API test. If you aren’t testing code in the UI, then you don’t need it (check out YAGNI). API tests tend to run faster as they don’t have to run browsers, look up elements and wait for pages to load. If it is testing the UI, then go back to my previous point to see if you can cut down steps.

  2. If it is UI-based, look at ways you can improve the speed of your tests by cutting out unnecessary steps or pushing actions down the stack while keeping the assertions in place. For example, create data via API or DB calls rather than through the UI. Once logged in, only log out when it’s relevant to the thing you are testing. Navigate directly to pages under test rather than attempting to mimic user flows (tools aren’t humans, so you aren’t getting the same feedback). I did a talk about this a few years ago with practical examples that might be worth checking out:

  3. Finally, focus on the changes that matter to you and the tests focused on those changes. If an automated test isn’t giving you any feedback value, kill it.
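To make the "push setup down the stack" idea concrete, here's a minimal sketch: instead of a UI signup flow, the test creates its user through an API client and keeps only the assertion in the browser layer. Everything below is hypothetical and stubbed so the example is self-contained; in a real suite the client would make HTTP calls against your app's actual endpoints.

```python
# Hypothetical API client standing in for real HTTP calls; in a real
# suite these methods would POST to your application's API.
class ApiClient:
    def __init__(self):
        self._users = {}

    def create_user(self, name):
        # Equivalent of POST /api/users: milliseconds of setup instead
        # of driving a multi-screen signup flow through the browser.
        token = f"token-{len(self._users)}"
        self._users[name] = token
        return token

def ui_assertion_only(token):
    # The only part left for the browser in this sketch: navigate
    # straight to the page under test with the session token and
    # assert on what is shown.
    return token.startswith("token-")

api = ApiClient()
session = api.create_user("fabio")
assert ui_assertion_only(session)
```

The pattern keeps the UI check that actually carries the risk, while everything before it runs at API speed.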

A bit off topic, but I hope that helps you bring the runtime down a bit.