Warm-up test for automation

Good morning to you ALL

I need advice about warm-up tests in automation.
I built my automation and now it is starting to fail, even with “Thread.sleep” in my tests.
On the first run the tests fail and on the second run they pass; there is a timing issue until the data is ready to show the right results.
Is it a good idea to write a warm-up test just for the failing tests, or is there another way to make this more efficient?

Greetings Silvia


I have this problem as well. We have a .NET application, and after every deployment the first run or two of certain tests fails.

There are a few ways to solve the problem. Our solution was to build a tool that runs as part of our deployment process (we call it the Primer) and sends an HTTP request to each web page in our application; we use Ruby with the ‘rest-client’ gem. After this we still noticed a few issues on certain pages. Our hacky solution to that was to set up automatic re-runs (up to 2 additional runs, only re-running failures), which was pretty easy with Ruby/RSpec, which is what we use for our UI automation. This was the quickest way we found to give the developers confidence in the automation.
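For anyone wanting to try the same approach, here is a minimal sketch of a Primer-style warm-up script. The base URL and page list are assumptions for illustration; swap in your own application's pages.

```ruby
# Minimal sketch of a "Primer"-style warm-up script.
# BASE_URL and PAGES are hypothetical placeholders.
require 'rest-client'

BASE_URL = 'https://my-app.example.com'               # assumed base URL
PAGES    = ['/', '/login', '/dashboard', '/reports']  # assumed page list

PAGES.each do |path|
  url = "#{BASE_URL}#{path}"
  begin
    response = RestClient.get(url)
    puts "Warmed #{url} -> #{response.code}"
  rescue RestClient::ExceptionWithResponse => e
    # Even a non-2xx response forces the server to compile/cache the page,
    # so just record the status and move on.
    puts "Warmed #{url} -> #{e.response.code}"
  rescue StandardError => e
    puts "Failed to warm #{url}: #{e.message}"
  end
end
```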

Now that we have had it running with re-runs for a while, we want to find ways to make the failing tests better, get a better grasp on what is causing those failures, and come up with solutions for each scenario. To get there we are building a database where we log test runs, pass/fail status, time taken to complete, etc., to identify our bottlenecks and the tests that fail often, in the hope of making the runtime shorter.
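As a rough illustration of that kind of logging, here is a sketch that appends one row per test run. It uses a CSV file as a stand-in for the database mentioned above; the file name and test name are assumptions.

```ruby
# Minimal sketch of per-test result logging to a CSV file
# (standing in for a real database). File name is hypothetical.
require 'csv'
require 'time'

RESULTS_FILE = 'test_runs.csv' # assumed file name

def log_test_run(test_name, status, duration_seconds)
  write_header = !File.exist?(RESULTS_FILE)
  CSV.open(RESULTS_FILE, 'a') do |csv|
    csv << %w[timestamp test_name status duration_seconds] if write_header
    csv << [Time.now.utc.iso8601, test_name, status, duration_seconds.round(2)]
  end
end

# Example usage: wrap a test body and record its outcome and duration.
start  = Time.now
status = 'pass'
begin
  # ... run the test here ...
rescue StandardError
  status = 'fail'
  raise
ensure
  log_test_run('login_page_loads', status, Time.now - start)
end
```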

If anyone else has solved this problem I’d love to hear how. I know our way isn’t the best, but it is working.

Butch


Good morning Butch

Thanks for your advice.
Last Friday we changed the CPU and Elasticsearch settings to make everything faster, and it has worked fine so far: the tests are not failing and the results come back quickly, so there is no need for warm-up tests.

If I have any other ideas in the future I will pass them on to you.

Greetings from the rainy Netherlands
Silvia

I believe this kind of client/server warm-up testing mistake is actually pretty commonplace - and adding a delay or simply re-running are both going to hide problems later on. It happens because automated tests always exercise valuable things that manual testers overlook: timing and stress.

Instead I suggest: add a separate test up front, called warmup, that performs a basic authentication against the application, but write the test so that it retries for a fixed maximum duration. This ensures that every test after it runs as intended… but that’s not where it ends. Your warmup test has two purposes: it also measures the time taken to warm up or compile the server application whenever it gets freshly deployed, and that is a vital “performance” test. So make sure your logging clearly shows how long the warmup test waited. On the production system it’s even a good idea to trim the maximum time for the warmup test to around 200% of the average time it takes; this will help you flag wide variations, and hopefully application startup performance issues, early on.
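A minimal sketch of such a warmup test, assuming a hypothetical login endpoint and using the same ‘rest-client’ gem mentioned earlier, could look like this:

```ruby
# Minimal sketch of a warmup test: retry until the application responds,
# up to a fixed maximum duration, and report how long warm-up took.
# WARMUP_URL is a hypothetical endpoint.
require 'rest-client'

WARMUP_URL = 'https://my-app.example.com/login' # assumed endpoint
MAX_WAIT   = 300  # seconds; the fixed maximum duration
POLL_EVERY = 5    # seconds between attempts

start     = Time.now
warmed_up = false

until warmed_up || (Time.now - start) > MAX_WAIT
  begin
    RestClient.get(WARMUP_URL)
    warmed_up = true
  rescue StandardError
    sleep POLL_EVERY
  end
end

elapsed = (Time.now - start).round(1)
if warmed_up
  # Log the warm-up time prominently; this is the "performance" data point.
  puts "Warm-up succeeded after #{elapsed}s"
else
  raise "Warm-up did not complete within #{MAX_WAIT}s"
end
```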

At this point I am assuming that all test results get logged onto a fileshare and also saved in some kind of database - even Excel/CSV is good enough. Bad application performance does not happen overnight; degradation occurs over a long time, and it will never get noticed by a manual tester. Think of automation as a stress and timing test, and you get two data points out of every test instead of just one.
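Once the warm-up durations are logged, the 200%-of-average rule of thumb above can be checked automatically. Here is a sketch assuming a hypothetical CSV file with one warm-up duration (in seconds) per row:

```ruby
# Minimal sketch: flag when the latest warm-up time exceeds 200% of the
# historical average. WARMUP_LOG and its layout are assumptions.
require 'csv'

WARMUP_LOG = 'warmup_times.csv' # assumed file name

durations = CSV.read(WARMUP_LOG).flatten.map(&:to_f)
abort 'No warm-up times logged yet' if durations.empty?

latest  = durations.last
average = durations.sum / durations.size

if latest > average * 2.0
  puts "WARNING: warm-up took #{latest}s, more than 200% of the average (#{average.round(1)}s)"
else
  puts "Warm-up time #{latest}s is within 200% of the average (#{average.round(1)}s)"
end
```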


Great feedback! We are currently working on a solution to log our test runs’ passes and failures. Plugging in a warmup test like this should be pretty easy once we have that in place. I like the idea of tracking the warmup time as we introduce changes to our systems (system patches, version upgrades, and code changes).
