How do you "Maintain Test Environments"?

Hi all,

We’re continuing our work to create an automation-focused curriculum shaped by feedback from the testing community. We’ve already run a series of activities that helped us identify a list of key tasks, which in turn helped us create this Job Profile.

We’re now in the process of going through each task and analysing it to identify the core steps we take to achieve it. For this post we’re considering the task:

Maintaining Test Environments

We’ve run a series of community activities, including social questions and our curriculum review sessions, to identify the steps we need to take to successfully achieve this task, and we’ve listed them below.

  • Identify what requires maintenance:

    • Product version
    • Dependencies
    • Product configuration
    • Environment configuration
    • Data management
    • Rebuilding/resetting a broken environment
    • Access management to product and infrastructure
    • Integration with the pipeline
  • React to smoke tests / monitoring output / live issues

  • Making sure the environment syncs with live

  • Making sure environment data is sanitized and cleaned so it doesn’t contain production data

  • Check that the environment is working properly (no memory leaks or other problems)

  • Connect to the environment and:

    • Use CLI tools to debug issues and manage infrastructure
    • Run scripts to install / update the environment
    • Connect to DBs to manage data
  • Test the environment and make sure it works

  • Establish a pipeline with test environment configuration

  • Test that the environment is connected to the right dependencies (database, application and so on); a rough sketch of this kind of check follows this list
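
To make some of the later steps a little more concrete (checking the environment is healthy, wired to the right database, and free of production data), here is a minimal sketch of the kind of smoke check we have in mind. It is illustrative only: the /health endpoint, connection strings and table/column names are assumptions, not a real implementation.

```python
"""Minimal test environment smoke check (illustrative sketch only).

Assumes a hypothetical environment exposing a /health endpoint and a
PostgreSQL database; all URLs, credentials and table names are placeholders.
"""
import os
import sys

import psycopg2   # assumed DB driver; swap for whatever your stack uses
import requests

APP_URL = os.environ.get("TEST_ENV_URL", "https://test-env.example.com")
DB_DSN = os.environ.get("TEST_ENV_DB_DSN", "postgresql://tester@test-db.example.com/appdb")


def check_app_is_up() -> None:
    """The application responds and reports the version we expect to be deployed."""
    response = requests.get(f"{APP_URL}/health", timeout=10)
    response.raise_for_status()
    print("App is up, reported version:", response.json().get("version", "unknown"))


def check_db_is_the_test_db() -> None:
    """The environment points at a test database, not production."""
    with psycopg2.connect(DB_DSN) as conn, conn.cursor() as cur:
        cur.execute("SELECT current_database()")
        (db_name,) = cur.fetchone()
        assert "prod" not in db_name.lower(), f"Connected to a production-looking DB: {db_name}"
        print("Connected to database:", db_name)


def spot_check_data_is_sanitised() -> None:
    """A crude spot check that obvious production data hasn't leaked in."""
    with psycopg2.connect(DB_DSN) as conn, conn.cursor() as cur:
        # 'users' / 'email' are placeholder names; point this at whatever holds PII.
        cur.execute("SELECT count(*) FROM users WHERE email NOT LIKE '%@example.com'")
        (suspicious_rows,) = cur.fetchone()
        assert suspicious_rows == 0, f"{suspicious_rows} rows look like real customer emails"
        print("Data spot check passed")


if __name__ == "__main__":
    try:
        check_app_is_up()
        check_db_is_the_test_db()
        spot_check_data_is_sanitised()
    except Exception as exc:  # fail loudly so the pipeline flags the environment
        print("Environment check failed:", exc)
        sys.exit(1)
```

Something like this could sit as an early stage of the pipeline step above, so a broken or mis-wired environment is flagged before any functional tests run.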

What we’d like to know is: what do you think of these steps?
Have we missed anything?
Is there anything in this list that doesn’t make sense?



Good to see people talking about Test Environments.

I’d suggest:

  • Establish a Test Environment Booking System (for larger companies at least); see the sketch after this list for the core idea.
  • Standardise/Runsheet your Operations to ensure Consistency & Knowledge Share.
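
To show what I mean by a booking system, here is a tiny illustrative sketch (the environment names, teams and dates are made up) of the core data it tracks and the clash check it exists to perform:

```python
"""Illustrative sketch of a minimal test environment booking check.

Not a real tool, just the shape of the data a booking system needs:
who has which environment, and when, so clashes are visible in advance.
"""
from dataclasses import dataclass
from datetime import date


@dataclass
class Booking:
    environment: str   # e.g. "UAT-2"
    team: str          # e.g. "payments"
    start: date
    end: date


def find_clashes(bookings: list[Booking]) -> list[tuple[Booking, Booking]]:
    """Return pairs of bookings for the same environment with overlapping dates."""
    clashes = []
    for i, a in enumerate(bookings):
        for b in bookings[i + 1:]:
            if a.environment == b.environment and a.start <= b.end and b.start <= a.end:
                clashes.append((a, b))
    return clashes


if __name__ == "__main__":
    bookings = [
        Booking("UAT-2", "payments", date(2023, 5, 1), date(2023, 5, 10)),
        Booking("UAT-2", "onboarding", date(2023, 5, 8), date(2023, 5, 12)),
    ]
    for a, b in find_clashes(bookings):
        print(f"Clash on {a.environment}: {a.team} and {b.team} overlap")
```

A spreadsheet or a proper tool can play the same role; the point is that the booking data lives somewhere and overlaps are checked before teams collide.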

Coming from a slightly different angle, we built a similar thing to help our clients understand & measure the broader aspect of Test Environment Management. We called it the Environment Management Maturity Index (EMMi).

The 8 Areas of Focus:
(1) Model / Understand your Environments
(2) Manage Demand i.e. Test Environment Booking Management
(3) Planning & Coordination e.g. Calendars
(4) Non-Prod Service Management e.g. Incident Mgmt
(5) Application Operations e.g. Release & Shakedown using RunSheets and/or Automation
(6) Data Operations
(7) Infra Operations
(8) Status Accounting & Reporting e.g. Insights to Value Stream your Environments & Releases.

If you’re interested, there are more details on our thoughts here: The EMMi.

Thx


Probably the biggest cause of environment failures is 3rd party stacks.

What I mean is, as anyone who has used a cloud provider to build environments will tell you, you have to:

  • Spend an hour a week just learning about the APIs and other things that changed in your cloud provider since last week, and updating accordingly
  • Spend an hour a week looking at ways to reduce costs, or speed up instances

Context varies, but maintaining environments requires communication skills. This week I was looking at a bug that only happens on one MDM tenant, but does not reproduce in the one set up by the test team on the same MDM provider. It turns out you have to know a whole load of things about MDM policies, how the OS interprets them and how future versions of the OS will work with those policies. Lately I try not to use the word “maintain”, but rather “evolve”. In my experience you need to cover the task list above that Mark created first, and once it works for your teams, keep reviewing it often.