🧹 How do you prepare an environment to run automation against?

Hi all,

Another question for you all. Let me set the scene…

You have an environment set up to run automated checks against, a large collection of automated checks to run, and you’ve already run those checks on the environment before. My question is…

How do you prepare an environment to run automation against?

Items in my mind are going to be setting up and tearing down data, feature flags, resetting caches, etc.

I’d love to know what steps you and your team need to take to prepare an environment so that your automation runs without issue.
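To make the question concrete, here is a minimal sketch of the prepare/tear-down shape I have in mind — seed data, set feature flags, reset caches before the run, then undo it all afterwards. The `FakeEnvironment` class is entirely hypothetical, an in-memory stand-in for whatever system you actually run against:

```python
import contextlib

class FakeEnvironment:
    """Hypothetical in-memory stand-in for a real test environment."""
    def __init__(self):
        self.data = {}
        self.flags = {}
        self.cache = {"stale-entry": "old"}

    def reset_cache(self):
        self.cache.clear()

@contextlib.contextmanager
def prepared(env):
    """Arrange known data, flags, and a cold cache; tear down on exit."""
    env.data["user-1"] = {"name": "Test User"}   # seed test data
    env.flags["new-checkout"] = True             # set a feature flag
    env.reset_cache()                            # start from a cold cache
    try:
        yield env
    finally:
        env.data.clear()                         # teardown runs even if
        env.flags.clear()                        # a check fails mid-run

env = FakeEnvironment()
with prepared(env) as e:
    assert e.flags["new-checkout"] is True       # checks run here
assert env.data == {} and env.flags == {}        # environment left clean
```

In a real suite the same shape usually lives in your framework's fixture mechanism rather than a hand-rolled context manager, but the lifecycle is the same.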


Hi! I understand you asked a specific question, and I typically prefer not to respond with another question :slightly_smiling_face:, but I’m genuinely curious and would like to delve deeper into this topic. What are your thoughts on whether QA teams should be responsible for preparing test environments, or should that fall under the purview of Cloud Ops?

While I’ve collaborated on such tasks before, I felt somewhat out of place. I recognize that the line between these teams is often blurred, and I did contribute to the process. However, in my view, the responsibility for this task should not rest solely on QA. Specifying the requirements seems reasonable, but the actual preparation of environments seems better suited to Cloud Ops. Some individuals within QA may lack infrastructure knowledge and access to cloud services, making it challenging for them to determine how a test environment should be set up. I’d love to hear your thoughts on this.

> Items in my mind are going to be setting up and tearing down data, feature flags, resetting caches, etc.

The ideal solution, if your cloud or server setup supports it, would be to spin up a clean cloud/server/VM environment for the system under test for the tests to run against. That clean base env would have some default configuration that you want to test against; any other configs afterwards would be generated by the tests themselves.

At the end of the test run, you could just tear down the environment and all the state with it, archiving whatever logs or test data you need from it first. For cases where you can’t tear down the test environment at the end of the run, you could either:

  • have test teardown steps to undo the configuration/data that you generate as part of the tests

  • design the tests to always generate new data on top of the base system state, so that you never need to clean up test data/state; you just leave it in the system, and either it expires over time (TTL), you leave it alone as junk data, or whoever manages the env cleans it up later as needed
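The second bullet — leave data in place but let it age out — can be sketched as below. The record shape is hypothetical; a real system would typically lean on its datastore’s native TTL support rather than hand-rolled timestamps:

```python
import time
import uuid

RUN_ID = uuid.uuid4().hex[:8]   # unique tag for this test run

def make_record(name, ttl_seconds=86400):
    """Tag generated data with the run id (so names never collide
    across runs) and an expiry time (so a sweep can prune it later)."""
    return {
        "name": f"{name}-{RUN_ID}",
        "expires_at": time.time() + ttl_seconds,
    }

def prune_expired(records, now=None):
    """Whoever manages the env can run this sweep periodically."""
    now = time.time() if now is None else now
    return [r for r in records if r["expires_at"] > now]

records = [make_record("order", ttl_seconds=-1),    # already expired
           make_record("order", ttl_seconds=3600)]  # still live
live = prune_expired(records)
assert len(live) == 1
```

The run-id tag also gives you a cheap escape hatch: even without TTLs, anything matching `*-{RUN_ID}` can be bulk-deleted if a run goes wrong.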

Also wanted to mention that in some cases, as I’ve experienced before, you test on actual hardware and local systems, not a cloud environment or a server in some lab you don’t have access to. In the local-hardware case, QA may be managing the equipment, or Ops, or it may be a joint effort.

And in my case, when I was testing on such hardware, we handled state by using a clean base-state image of the hardware (customized server hardware running Windows Server OSes) that we created, using Windows PE tools to clone the installed clean state of the OS into an image file. We could then easily redeploy that image onto the hardware, overwriting the previous image on the hard drive. This step was still manual and physical, so we never automated it; the rest of the test automation was scheduled to run automatically, and only the reimaging was manual. As a result, we never had to worry about test data cleanup.


At the end of the day, the question itself is didactic. What an individual engineer does, however, depends on what they care about, not on what the correct answer is, because the question is worded as directed at a person, not as a general question. Grammar matters. Didactically speaking, you have to prepare every interface boundary and specify its behaviour, which simply does not scale, but that is the correct answer.

I merely uninstall the application; the operating system does in fact always trigger its cleanup, because the target operating system is highly secure and must clean up. There are “literally” no corner cases for this cleanup doing all the work for me. The biggest case I have to worry about is environment problems where the app under test or the test scripts explicitly modified a system setting not connected to the app. So that is how “I” prepare my environments: I uninstall the app. However, many of the environment faults I get are caused by my automation toolstack changing or by our app dependencies changing, not by the system under test changing. Every test case uses a new account; the accounts are really small, so we don’t delete them and only clean them up roughly once or twice a year.
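That “new small account per test case, swept once or twice a year” approach might look like this sketch. The naming scheme is a hypothetical one chosen so the creation time can be parsed back out of the name:

```python
import itertools
import time

_counter = itertools.count(1)

def new_test_account(prefix="autotest", now=None):
    """One throwaway account per test case: the timestamp makes names
    unique across runs, the counter makes them unique within a run."""
    created = int(time.time()) if now is None else int(now)
    return f"{prefix}-{created}-{next(_counter)}"

def is_stale(name, max_age_days=180, now=None):
    """The occasional sweep: parse the creation time back out of the
    name and flag anything older than the cutoff for deletion."""
    now = time.time() if now is None else now
    created = int(name.split("-")[1])
    return (now - created) > max_age_days * 86400

fresh = new_test_account()
old = new_test_account(now=0)     # epoch-era account, clearly stale
assert not is_stale(fresh)
assert is_stale(old)
```

Because each account is independent, tests never contend over shared state, which is what makes skipping per-test cleanup safe in the first place.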