30 Days of Automation in Testing Day 21: Try and speed up your automated checks' execution time, and share what you tried


(Quang Le) #1

In my project, I faced the problem that our automated regression scripts took a long time to finish. The biggest products took about 1 or 2 days, and even the smallest one took about an hour (there are many cases that compare against the database, call APIs, write to Excel files, etc.). The framework was developed by the old team; it was unstable, frequently failed, and didn't follow any coding convention or best practice. So when we inherited it, we took the following actions:

  1. Define a Definition of Done (DoD) for scripting new test cases
  2. Define a coding convention and follow it strictly
  3. Refactor and clean up the framework and the test scripts: after reviewing and analyzing the source code, many changes were applied in this step:
  • Apply a new structure to the framework to make it stable
  • Apply the Single Responsibility Principle to each class
  • Split up long methods, following the coding convention
  • Apply design patterns to the framework
  • Replace string concatenation with string interpolation
  • Replace implicit waits with explicit waits
  • Apply SoftAssert and GroupAssert
  • Remove unused classes
  • Optimize all classes
  • Avoid XPath locators with index values
  • Refactor the WebDriver core
  • Return an empty list instead of null
  • Move methods from test classes to page classes
    etc…
    There were many more things to refactor in the framework and test scripts; the items above are just the highlights.
  4. Hold weekly code reviews in our team
  5. Cross-review each other's code
  6. Read and review the checks in the test cases to make sure there are no unnecessary steps in the test scripts
  7. Apply dynamic data to reduce time spent investigating out-of-date data: as you know, our automation scripts are all data-driven. The data expires on websites, in the database, or in APIs, and the scripting QA would have to update it manually (searching, querying, updating the data-driven files). My team created a Dynamic Data project to automate this: with one click, all the new data is generated, and we can run the automation scripts again with the newest data
  8. Add an option to turn logging on/off
  9. Remove unused logging code
    etc…
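A couple of the items above (SoftAssert/GroupAssert, returning an empty list instead of null) are language-agnostic ideas. As a minimal sketch, here is what a soft-assertion collector looks like in plain Python — all names are illustrative, this is not the TestNG SoftAssert API:

```python
class SoftAssert:
    """Collects assertion failures instead of stopping at the first one."""

    def __init__(self):
        self._failures = []

    def check(self, condition, message):
        # Record the failure but let the test keep running,
        # so one run reports every broken check at once.
        if not condition:
            self._failures.append(message)

    def assert_all(self):
        # Fail once, at the end, with every collected failure.
        if self._failures:
            raise AssertionError("; ".join(self._failures))


soft = SoftAssert()
soft.check(1 + 1 == 2, "math is broken")
soft.check("a" in "abc", "substring missing")
soft.assert_all()  # passes: no failures were collected
```

The payoff is fewer re-runs: a hard assert hides every failure after the first one, while the collector surfaces them all in a single execution.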

And there were some more steps to increase the performance of our framework and test scripts. After 3 months, performance had improved greatly: about half of the execution time was cut. The biggest product went from 2 days to 5 hours of execution time, and the smallest one now takes just 30 minutes.


(Karishma) #2

#Day 21: Speed up your automated checks' execution time, and share what you tried
General guidelines to improve end-to-end execution speed for GUI automation:

1. Build a solid object repository by using unique attributes/properties to define test objects.
2. Divide test scripts into small, simple test functions (do not re-implement complex business logic).
3. Define the correct points at which to add sync functions (moving to a new page, waiting for an object to appear/disappear, waiting for a property to change, etc.).
4. Use dynamic waits, not implicit ones, keyed to the correct behavior/property of a test object.
5. Run tests in parallel (e.g. using shardTestFiles/maxInstances).
6. Run tests in a headless browser (from version 59, Chrome can run headless natively).
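Point 4 is the core of it: wait for a concrete condition, not a fixed time. The polling loop below shows the shape of such a dynamic wait without any WebDriver dependency — it mirrors the explicit-wait idea, and every name in it is illustrative:

```python
import time


def wait_until(condition, timeout=10.0, poll=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the condition's value on success, raises TimeoutError otherwise.
    We wait exactly as long as the app needs, never a fixed sleep.
    """
    deadline = time.monotonic() + timeout
    while True:
        value = condition()
        if value:
            return value
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)


# Hypothetical usage: wait for a flag another component flips.
state = {"loaded": False}
state["loaded"] = True
assert wait_until(lambda: state["loaded"], timeout=1.0)
```

With a real driver, the condition would be a lambda around an element lookup; the structure is the same as Selenium's WebDriverWait, just spelled out.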


(Trung) #3

Below are some tips to reduce execution time in automated testing:

  1. Create tiny but valuable test suites
    Choose the most important tests and pull them into a smaller suite that runs faster. These are usually broad, coarse-grained tests, but they’re necessary to qualify your system or app for further testing. If these tests don’t pass, it doesn’t make sense to proceed.
    A good starting point would be to create a new entity and perform the most important operation on that entity.

  2. Refactor the test setup
    For example, one team had a suite of UI-driven tests that took a long time to execute and had many false failures due to timing issues and minor UI tweaks. We refactored that suite to perform the test setup via API commands and do the verification through the UI. This updated suite had the same functional coverage, but it executed 70 percent faster and had about half the false failures caused by UI changes.

  3. Be smart with wait times
    Avoid sleep statements and see if you can replace them with a smarter wait that completes when the event happens, rather than after a set period of time.

  4. Trigger tests automatically
    You may have several test suites that are normally initiated by a person during the test phase of a project. Often, it only takes a little shell scripting to be able to include these tests in the continuous integration suite.

  5. Run tests in parallel
    Virtual machines and cloud computing services coupled with tools that help automatically set up environments and deploy your code make it much more affordable to run tests in parallel. Examine the test suites that take some time to execute and look for opportunities to run those tests in parallel.
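The parallelism in tip 5 applies to anything independent, not just browser grids. A minimal sketch with Python's standard library — the "tests" here are stand-in functions, not a real suite, and `run_suite` is an invented helper name:

```python
from concurrent.futures import ThreadPoolExecutor


def run_suite(tests, workers=4):
    """Run independent test callables in parallel and collect their results.

    Each callable returns True/False; the result maps test name -> outcome.
    Only safe when tests share no mutable state (independent data/users).
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {name: pool.submit(fn) for name, fn in tests.items()}
        # .result() blocks until each test finishes.
        return {name: fut.result() for name, fut in futures.items()}


# Stand-in checks; a real suite would drive the app under test instead.
suite = {
    "login_check": lambda: 2 + 2 == 4,
    "search_check": lambda: "q" in "query",
    "broken_check": lambda: False,
}
results = run_suite(suite)
# results == {'login_check': True, 'search_check': True, 'broken_check': False}
```

The key precondition is the comment in the docstring: tests must not share mutable state, which is why dependable, isolated test data (mentioned elsewhere in this thread) matters so much for parallel runs.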


(Kris) #4

A few others have already mentioned it: breaking the tests down into smaller units, especially UI tests. This not only makes the UI tests a bit less fragile and easier to understand, but also speeds up execution. And not using implicit waits at all speeds things up and adds reliability.
Similar to what @vic mentioned, having a dependable set of predefined test data is very helpful.


(Pablo) #5

Similar to what @karishma said, my original problem involved the following:

Problems

  1. Too many functions happening
    A lot of verbose syntax and unnecessary UI waits anticipating lags from the back end.

  2. Too much of the Step -> verifyThis() -> doThis() -> Step
    Another thing that choked the script was adding assertion / verification checks for each step of the test. What ended up happening was an unnecessary delay in test execution often leading to false-negatives or script failures.

  3. Spaghetti code:
    As newbs to automation, the mistake I’ve seen is poorly written tests with a superfluous amount of steps and repetitive code to accomplish the simplest of tasks. I was guilty of this for a while.

To demonstrate, I’ll be using Katalon in a side-by-side comparison:

before
A lot of times, a test would look something like this:

WebUI.openBrowser("http://www.example.website.com");
WebUI.waitForElementPresent(findTestObject('pathToObject'), 10);
WebUI.click(findTestObject('pathToObject'));
WebUI.setText(findTestObject('pathToObject'), someText);
WebUI.click(findTestObject('pathToObject'));
WebUI.waitForElementPresent(findTestObject('pathToObject'), 10);
def pageTitle = WebUI.getText(findTestObject('pathToObject'));
assert pageTitle != null;
assert pageTitle == "Success Page";
WebUI.closeBrowser();

As you can see, the test is a bit hard to read and not very descriptive of what is going on. At least not without the need for comments.

Solution
So what worked for me was changing how I approached test composition with a strong focus on making it legible so that anyone can look at the test and know what’s going on.

Having applied the Single Responsibility Principle I spoke about in a previous post, I rewrote the test to look something like this:

after

WebUI.openBrowser(pageUrl);
WebUI.click(registrationLink);
onRegistrationForm.CompleteAndSubmitForm();
onProfilePage.VerifyInfo();
WebUI.closeBrowser();

Notice the following:

  • pageUrl - I didn’t need to explicitly write out the site URL; I can declare a variable and reference it
  • registrationLink - as stated, I declare my variable elsewhere and call it here
  • onRegistrationForm // onProfilePage - these are classes I create in a separate file and import as part of a package.
  • CompleteAndSubmitForm( ) // VerifyInfo( ) - Each class has a set of methods. A class can have many functions, but there should not be multiple classes in the same file (hence SRP). The exception being helpers … but that’s another topic.
  • I also reduced unnecessary element checks on the test and have them happening as separate actions in the aforementioned classes. The end result makes it easier to maintain the test by fixing only the function that needs it.
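What the bullets above describe is the Page Object pattern. A language-neutral sketch of the same idea in Python — with a fake driver standing in for WebDriver, and every class and locator name invented for illustration:

```python
class FakeDriver:
    """Stand-in for a real WebDriver; stores field values in a dict."""

    def __init__(self):
        self.fields = {}

    def set_text(self, locator, text):
        self.fields[locator] = text

    def get_text(self, locator):
        return self.fields.get(locator, "")


class RegistrationForm:
    """Page class: owns its locators and exposes intent-level actions."""

    NAME_FIELD = "registration.name"

    def __init__(self, driver):
        self.driver = driver

    def complete_and_submit(self, name):
        # One method per user intention, not per low-level click.
        self.driver.set_text(self.NAME_FIELD, name)


class ProfilePage:
    NAME_LABEL = "registration.name"  # shares the fake store for this demo

    def __init__(self, driver):
        self.driver = driver

    def verify_name(self, expected):
        assert self.driver.get_text(self.NAME_LABEL) == expected


# The test reads like the "after" example: intent, not mechanics.
driver = FakeDriver()
RegistrationForm(driver).complete_and_submit("Pablo")
ProfilePage(driver).verify_name("Pablo")
```

When a locator or a flow changes, only the one page class is edited; every test that uses it stays untouched, which is exactly the maintenance win described above.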

Conclusion
While the “Solution” example was written in Groovy, I’ve applied similar concepts when writing tests in JS or Python. I like that the test is readable and that anyone can see what I’m testing in seconds.

It makes it easy to pair the test scenario with the acceptance criteria to ensure the proper workflows are being tested.

It also makes it super-simple for anyone inheriting my project to see what I’m doing and pick up where I left off. An important thing for teams.



(Kumar) #7

Large enterprise automation demands keeping execution time as low as possible. For example, consider this scenario:

A new user makes a transaction, and the admin needs to validate the transaction and take action (approve / reject). The two scenarios are:

Create User1 > Login User1 > Make transaction1 > Logout (User1) > Admin Login > Validate transaction1 > Approve transaction1 :: Execution Time (8 min)
Create User2 > Login User2 > Make transaction2 > Logout (User2) > Admin Login > Validate transaction2 > Reject transaction2 :: Execution Time (8 min)

To reduce the execution time, we can combine the two scenarios into one:

Create User1 > Login User1 > Make two transactions (1 & 2) > Logout (User1) > Admin Login > Validate transaction1 > Approve transaction1 > Validate transaction2 > Reject transaction2 :: Execution Time (12 min)

If the business logic allows transactions like these, it’s always better to combine the similar patterns, so that the test execution time can be dramatically decreased.

In the above example, we saved 4 minutes in total: 16 minutes for the two separate scenarios versus 12 minutes combined.

Credit: https://qakumar.wordpress.com/2018/07/26/day-21-try-and-speed-up-your-automated-checks-execution-time-and-share-what-your-tried/
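The saving in the post above comes from amortizing the shared steps (user creation, logins) across several transactions. As a hedged sketch of that arithmetic: only the 8/12/16-minute totals come from the post, and the per-step minute figures below are invented so the totals work out.

```python
# Illustrative per-step costs in minutes; only the totals match the post.
STEP_MINUTES = {
    "create_user": 1, "login": 1, "make_transaction": 2,
    "logout": 1, "admin_login": 1, "validate": 1, "decide": 1,
}


def separate_scenarios(n):
    """Each transaction repeats every step: cost grows by the full flow."""
    per_flow = sum(STEP_MINUTES.values())  # 8 minutes, as in the post
    return n * per_flow


def combined_scenario(n):
    """Shared steps run once; only transaction + validate + decide repeat."""
    shared = (STEP_MINUTES["create_user"] + STEP_MINUTES["login"]
              + STEP_MINUTES["logout"] + STEP_MINUTES["admin_login"])
    per_txn = (STEP_MINUTES["make_transaction"]
               + STEP_MINUTES["validate"] + STEP_MINUTES["decide"])
    return shared + n * per_txn


print(separate_scenarios(2))  # 16 - the two 8-minute runs
print(combined_scenario(2))   # 12 - the combined run from the post
```

The same structure shows why the saving grows with more transactions per user: only the per-transaction steps scale, the shared setup cost stays fixed.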


(AMIT) #8

Similar to @karishma:

  1. Build a good Page Object Model.
  2. Have test functions that are simple and reusable.
  3. Use dynamic waits, not implicit ones, keyed to the correct behavior/property of a test object.
  4. Run tests in parallel.
  5. Run tests in a headless browser.