Automated Test Run Timing


(Fabio) #1

Hi All,

I was wondering what's an acceptable run time for an automated regression test suite? Today our automated regression suite takes about 10 hours to run 120 test cases, and we're looking to improve that, but I'm after a benchmark timing we can aim for. By the way, we're using Selenium for the automation.

Thank you!


(ernie) #2

There’s no absolute benchmark or industry standard. The timing threshold should be based on what your organization needs.

If you’re doing (or aiming for) CI/CD and expecting to be able to release/deploy frequently, then you need to get your tests to run in a window that’s not going to gate releases.

If you’re just running tests overnight, and are fine kicking the tests off after the last commit for the day and seeing the results in the morning, then you likely shouldn’t spend a ton of time working on speeding up your regression suite.


(Bill) #3

I would not trust something that runs for that long. If you get one error on your CI, like a build failure, you've lost half a day.

Can you make those tests run in parallel? Any wait statements in there that you should remove?
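
Hard sleeps are a common culprit. As a rough sketch, assuming the Python Selenium bindings (the URL and locator are just placeholders), swapping a fixed sleep for an explicit wait means the test carries on as soon as the page is ready instead of always paying the full delay:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

# Before: a hard wait that always costs the full 10 seconds.
# time.sleep(10)

# After: poll for up to 10 seconds, but continue the moment the element shows up.
dashboard = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "dashboard"))  # hypothetical locator
)

driver.quit()
```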

We have a sanity test suite that runs in about an hour. The logic behind that was that you commit your changes, go for lunch, and by the time you're back you know if you broke anything.


(Tom) #4

If I were brought in to help a team using this test pack, I'd be very worried by that time frame. Testing is about information, and 10 hours is a long time to wait for it. Generally, the quicker the tests are, the more often they can be run. Ideally they'd run on every commit (not just to master).

What stands out for me, though, is not the total time but the average time for a single test. An average of 5 minutes is a very long time for one test to run.

Things to look for:

  1. Is the test ‘clean’ enough? Is it only testing one thing or are there multiple asserts throughout the test?
  2. Do you need to use Selenium for the whole test? Can you perform the setup (log in, set up data, get to the right screen, etc.) using APIs rather than mouse clicks? (There's a sketch of this after the list.)
  3. Can the test be run quicker? Are there a lot of waits in your test that can be removed or made more intelligent?
  4. Is the test needed? Regular pruning of tests is important to keep the test pack relevant. If the test no longer provides value, remove it.
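
On point 2, here's a rough sketch of API-based setup, assuming the Python bindings and an app that exposes a login endpoint returning a session cookie (the endpoint, cookie name, and pages are all hypothetical):

```python
import requests
from selenium import webdriver

BASE_URL = "https://example.com"  # placeholder

# Log in through the API (fast) instead of driving the login form (slow).
response = requests.post(
    f"{BASE_URL}/api/login",  # hypothetical endpoint
    json={"username": "test_user", "password": "secret"},
)
session_cookie = response.cookies.get("session_id")  # hypothetical cookie name

# Hand the authenticated session to the browser, then jump straight to the
# screen under test.
driver = webdriver.Chrome()
driver.get(BASE_URL)  # the browser must be on the domain before cookies can be set
driver.add_cookie({"name": "session_id", "value": session_cookie})
driver.get(f"{BASE_URL}/reports")  # go directly to the page being tested

# ... the actual checks for this test would go here ...
driver.quit()
```

That way the Selenium part of the test only covers the behaviour you actually want to check, not the journey to get there.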

For the test pack as a whole, I'd look at whether it's possible to run the tests in parallel (sketched below), and also whether the test server itself would run faster with a better CPU, more memory, etc.
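
To illustrate the parallel idea, here's a rough sketch assuming the Python bindings, where each test gets its own browser so nothing is shared. In practice a runner such as pytest-xdist or Selenium Grid handles this more robustly, but the principle is the same:

```python
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

def run_test(url):
    """Each test gets its own driver so tests never share browser state."""
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        return url, driver.title  # stand-in for real assertions
    finally:
        driver.quit()

# Hypothetical pages the suite covers.
urls = [
    "https://example.com/login",
    "https://example.com/search",
    "https://example.com/checkout",
    "https://example.com/profile",
]

# Run up to 4 browsers at once; total wall time heads towards the slowest
# test rather than the sum of all of them.
with ThreadPoolExecutor(max_workers=4) as pool:
    for url, title in pool.map(run_test, urls):
        print(f"{url} -> {title}")
```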


(Belean Alexandru) #5

The running time depends on the complexity of the test cases and of the app itself, so the regression run time is different on each project. For example, on my current project it takes a couple of days, as the project is very complex and there is also hardware automation integration.

As @billkav said, you can split the tests up and run them in parallel. This decreases the running time.


(Fabio) #6

I agree, and I'm also worried… We are reviewing our scripts and looking for areas where we can improve this timing.


(Ady Stokes) #7

Hi @fabio, we have a number of suites that run concurrently, and while we don't have any static rules on their timings, we do review any that take over 30 minutes. Most are under 10 minutes, with a couple between 20 and 25. Not sure if this information will help you, but good luck improving yours!