How long on average should test automation take to execute all checks in a CI/CD pipeline?

Currently we have test automation running every time a deployment is made to our staging and production environments. The test automation takes about 12 minutes on average to run.
These are UI tests that execute different user flows in the web app and assert on confirmations to make sure the common flows are working fine.

Is 12 minutes too much or just fine?
One of our DevOps engineers says that 10 minutes is the limit written somewhere in a book, and that we should make sure the test automation runs no longer than 10 minutes…

2 Likes
12 min is fine as long as your whole product team agrees. In our projects with thousands of automated UI test cases, to make the best use of limited VM resources, we often have the following test suites run by their corresponding pipelines:
  • smoke: runs on every build when a commit is pushed to a branch; duration varies depending on the product team’s preference, usually under 20 minutes
  • nightly: runs a bigger subset of test cases that fits into the overnight non-working hours (so during the day the smoke suite runs frequently, and at night the nightly suite runs, all on the same group of VMs)
  • weekend: runs a full regression
  • monthly: runs a full regression under additional constraints, such as low network bandwidth or low RAM

These test suites are controlled by the SMEs or manual test engineers: they tag the test cases with smoke or weekend in ADO, the automation engineers automate the test cases, and the pipelines then dynamically assemble the automated test suites for execution.
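
For readers not on ADO, here is a minimal sketch of the same tag-driven idea using pytest markers; the suite names, marker expressions, and the tests/ path are assumptions for illustration, not the setup described above.

```python
# suite_runner.py - selects which tagged tests a pipeline runs.
# Suite names and the tests/ folder are illustrative, not our real setup.
import sys
import pytest

# Each pipeline maps to a pytest marker expression. Tagging a test case is
# just decorating it, e.g. @pytest.mark.smoke or @pytest.mark.weekend.
SUITES = {
    "smoke": "smoke",
    "nightly": "smoke or nightly",
    "weekend": "smoke or nightly or weekend",   # full regression
    "monthly": "smoke or nightly or weekend",   # full regression; environment
                                                # constraints (low RAM, throttled
                                                # network) applied by the pipeline
}

if __name__ == "__main__":
    suite = sys.argv[1] if len(sys.argv) > 1 else "smoke"
    # -m keeps only the tests whose markers match the expression, so each
    # pipeline assembles its suite dynamically from the tags.
    sys.exit(pytest.main(["-m", SUITES[suite], "tests/"]))
```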

LogiGear - here to help shape the future of test design

2 Likes

Ours runs for 1.5-2 hours and is also queued, so there is some waiting time on top of that.
But it’s only done on dedicated test systems. We don’t have automated checks on staging or production systems.

2 Likes

I think this heavily depends on your context: what you as a team have agreed on, and what your process looks like.

  • Is someone blocked during the 12 minutes these tests are running? Does it slow down your process, e.g. could you start testing earlier if they ran faster?
  • Do you as a team want them to be faster, because $reasons?
  • How much would it cost (effort, money, time) to make them faster? E.g. see if you can improve the execution time per test script, or parallelize test execution (which may require additional machines and therefore extra cost) - see the sketch below.
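
To make that last bullet concrete, here is a minimal sketch of a “measure first, then parallelize” approach, assuming a pytest suite with the pytest-xdist plugin installed; the paths and worker count are placeholders.

```python
# A minimal sketch: profile the suite, then parallelize it.
# Assumes pytest plus pytest-xdist; "tests/" and "-n 4" are placeholders.
import pytest

# Step 1: find the slow tests. --durations=10 reports the ten slowest test
# phases, which tells you whether tuning individual scripts is worth the effort.
pytest.main(["--durations=10", "tests/"])

# Step 2: if the time is spread across many independent tests rather than a few
# slow ones, run them on several workers (-n 4 = four local workers; more
# workers may mean more machines, i.e. extra cost).
pytest.main(["-n", "4", "tests/"])
```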

Just because someone read something in “a book” doesn’t mean it’s true in every situation (although I assume that book gives some reasoning for why the author arrived at 10 minutes).

2 Likes

An aspect that hasn’t been mentioned yet is what a person is trying to achieve through automation:

  • coverage: how many sensitive, essential functions or flows you want to cover;
  • value and risk: business value and risk of failure for the potential problems you want to catch;
  • product, data, and environment variations: product size, complexity, servers, setup, data generation;

Some examples I’ve seen around:

  • MS Office automation several years ago ran on lots of machines, with 20 thousand tests executing over many hours (sometimes a couple of days)…
  • In an aviation-related business, a suite of 14k automated checks runs for about 10 hours on a single environment, which itself needs about 1 hour to spin up.
  • In an e-commerce application, a manager wanted to execute ~300 checks across combinations of 3 OSes, 3 browsers, 7 devices, and 6 viewports. The initial setup took 3 engineers about 5-6 months, and even parallelized, due to hardware limitations, it would still need about 7 hours to run (see the back-of-the-envelope sketch below).
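
A back-of-the-envelope sketch of why that last matrix grows so quickly; the per-check duration and degree of parallelism are assumptions chosen only to show the shape of the math.

```python
# Rough sizing of the e-commerce example above, using the full cross product
# as an upper bound; per-check duration and parallelism are assumed figures.
checks = 300
configurations = 3 * 3 * 7 * 6         # OSes * browsers * devices * viewports = 378
executions = checks * configurations   # 113,400 executions per full run

seconds_per_check = 30                 # assumed average UI check duration
parallel_sessions = 120                # assumed hardware ceiling

total_hours = executions * seconds_per_check / parallel_sessions / 3600
print(f"{executions} executions, ~{total_hours:.1f} hours of wall-clock time")
# ~7.9 hours with these assumptions, in the same ballpark as the ~7 hours quoted.
```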
1 Like

This is an old thread, so I’m unsure if this is still something you are interested in.

The DevOps person’s argument is easily dismissed. What book says that? Where? Why? What rationale does it provide? When the DevOps person comes back with answers to these questions, there’s room for a discussion and for reaching a mutual agreement. Until then, this is just an arbitrary target taken out of context.

Now, the time between a developer making a change and that change being visible to customers (lead time for changes) is one of the DORA metrics, and it’s generally understood that keeping it low is a good thing. Of course you could speed this up by not running any tests, and speed it up further by having developers change code directly on production. So while we want to keep things fast, there are more important things that we are not going to sacrifice. The key is to understand what those things are, and where the lower bound of their speed lies. Most test suites out there could run faster than they do.

What the DevOps person might be alluding to is how often you deploy, which, coincidentally, is another DORA metric. Assuming you can’t deploy in parallel and each deployment takes 12 minutes, then during a standard 8-hour day you can do no more than about 40 deployments. Is this a lot? I have certainly seen teams that do more than 40 commits a day; for those teams the deployment time could be a bottleneck. But usually you solve this by splitting the whole system into components that can be developed and deployed independently.
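
The arithmetic behind that ceiling, as a tiny sketch; the 8-hour window and the strictly serial deployments are the assumptions stated above.

```python
# Maximum serial deployments in a working day, given a fixed pipeline duration.
pipeline_minutes = 12
working_minutes = 8 * 60                    # standard 8-hour day

print(working_minutes // pipeline_minutes)  # 40 deployments at 12 minutes
print(working_minutes // 10)                # 48 deployments at the 10-minute target
```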

For reference, in my current team we have a few pipelines running for each PR, and it takes about 30-35 minutes to get them all to complete. We might be able to shave off 5-10 minutes with reasonable compromises, and probably another 5-10 minutes with significant engineering effort. I would be happy if someone claimed we could and should get below 10 minutes while we were sitting at 12.

1 Like

That brings me to another point: maybe some of these checks aren’t relevant anymore and could be deleted, which would also reduce execution time.