What data/state do you have to manage for your automated checks?


Today I’m looking to learn more about data management in automation. My question is:

What data/state do you have to manage for your automated checks?

Those new to automation sometimes don’t realise that managing test data is one of the biggest challenges in automation. That’s why we’d like to hear what data you manage so we can demonstrate to learners the wide range of data/state we have to consider for automation.

Looking forward to what will be shared for this. Thank you again for getting involved :robot:


I’ve seen two very different approaches.

In my former workplace, there was a setup automation project that ran once a week. This project generated all the data used by the test automation that was run every weeknight - customer records, event records, product records… The setup took about 4 hours to run, but meant that instead of each automation suite running the same setup every night, the suite simply had to restore the pre-built database and run with that.
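That pre-built-then-restore approach can be sketched in miniature. Here's an illustrative Python sketch using SQLite as a stand-in (the real setup described above was a full database and a 4-hour job; `build_template` and `restore_from_template` are hypothetical helper names):

```python
import os
import shutil
import sqlite3
import tempfile

def build_template(path):
    # Stands in for the expensive weekly setup job: create the
    # customer/event/product records once, up front.
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE customers (name TEXT)")
    con.execute("INSERT INTO customers VALUES ('ACME')")
    con.commit()
    con.close()

def restore_from_template(template_path, working_path):
    # The cheap nightly step: each suite starts from a fresh copy
    # of the pre-built database instead of rebuilding the data.
    shutil.copyfile(template_path, working_path)

tmp = tempfile.mkdtemp()
template = os.path.join(tmp, "template.db")
working = os.path.join(tmp, "tonight.db")
build_template(template)                   # run weekly
restore_from_template(template, working)   # run per suite
```

The trade-off is the same at any scale: the setup cost is paid once, and every run after that starts from known data in seconds.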

In my current workplace, we operate in a SaaS environment, with all customers using the same database, and a series of impossibly complex processes that handle refreshing and resetting any given customer’s data (there’s a mainframe and virtual 80-character cards involved, of course it’s complex). For that I built in a series of state checks and aborted each test if the company wasn’t in the correct state. It made things more fragile, but it also meant that I knew why those tests failed and could manually adjust the state of the company if needed.
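The "check the state, abort with a clear reason" pattern might look something like this sketch, using the stdlib `unittest` framework (`get_company_state` is a hypothetical helper standing in for whatever API or database call reports the customer's actual state; it's stubbed here so the sketch runs standalone):

```python
import unittest

def get_company_state(company_id):
    # Hypothetical: in a real suite this would query the application.
    return "READY"

def require_state(company_id, expected):
    actual = get_company_state(company_id)
    if actual != expected:
        # Abort with an explicit reason rather than letting the test
        # fail obscurely further in - you know *why* it didn't run.
        raise unittest.SkipTest(
            f"company {company_id} is {actual!r}, expected {expected!r}")

class InvoiceTests(unittest.TestCase):
    def test_invoice_generation(self):
        require_state("ACME-001", "READY")
        # ... the actual checks would go here ...
```

The skip message is what makes the fragility tolerable: a skipped test with "company is RESETTING, expected READY" points straight at the state problem instead of producing a misleading failure.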

Managing data is a huge challenge, and precisely how data gets managed depends hugely on the context of your AUT. Personally, I’d prefer being able to refresh a database of known standard data, but it’s not always possible or desirable.


It’s a great question, and something you get better at with experience: working on multiple projects, writing tests, and coming across challenges while scaling up your automated checks.

I would say the ideal situation is that each test manages its own requirements: the data it needs to run, and the state the application needs to be in before the test runs.

For example, the data it needs as a prerequisite might be an order or a booking, and the required state might be that the user is logged in. This might require a mechanism to reliably (and repeatably) create these via API or database calls, so that data can be created on the fly when needed. The benefit of that approach is that you can be certain the data is isolated between tests, allowing you to run tests in parallel in the future.
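The create-on-the-fly idea above might be sketched like this, assuming the API lets you create an order programmatically (the endpoint, payload, and `FakeApi` stand-in are all assumptions for illustration, not a real API):

```python
import uuid

class FakeApi:
    """Stand-in for a real HTTP client, so the sketch runs standalone."""
    def __init__(self):
        self.orders = {}

    def post(self, path, json):
        # Pretend the application stored the record.
        self.orders[json["ref"]] = json

def create_order(api, customer_id):
    # A unique reference per call keeps each test's data isolated,
    # which is what makes parallel runs safe.
    ref = f"order-{uuid.uuid4().hex[:8]}"
    api.post("/orders", json={"customer": customer_id, "ref": ref})
    return ref

api = FakeApi()
ref = create_order(api, "cust-42")
```

A test would call `create_order` in its setup, then start from that order rather than clicking through the whole purchase flow.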

It should also make each test quicker to execute, as you don’t need to run through the entire process end-to-end to get to the starting state.

A disadvantage of this approach may be the complexity of understanding the data structures and APIs well enough to create the data in exactly the same way the application would. Also consider how the clean-up or tear-down step after the test will work: do you need to delete any data you’ve created once the test finishes? What happens if it fails halfway through? Will it affect the running of other tests, or the next run of the same test? These are worth considering, as you don’t want to have to intervene manually every time a test fails.
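One way to handle the "what if it fails halfway through" concern is to register a cleanup callback the moment each piece of data is created, then run them all in a `finally` block. A minimal sketch (the `DataTracker` name and the lambdas standing in for real delete calls are assumptions):

```python
import contextlib

class DataTracker:
    """Collects cleanup callbacks as test data is created."""
    def __init__(self):
        self._cleanups = []

    def register(self, cleanup_fn):
        self._cleanups.append(cleanup_fn)

    def cleanup(self):
        # Tear down in reverse creation order; suppress individual
        # failures so one broken delete doesn't leave the rest behind.
        while self._cleanups:
            fn = self._cleanups.pop()
            with contextlib.suppress(Exception):
                fn()

deleted = []
tracker = DataTracker()
tracker.register(lambda: deleted.append("order-1"))     # created first
tracker.register(lambda: deleted.append("customer-7"))  # created second
try:
    raise RuntimeError("test failed halfway through")
except RuntimeError:
    pass
finally:
    tracker.cleanup()  # runs even though the test blew up
```

Test frameworks usually have this built in (e.g. `unittest`'s `addCleanup`), so in practice you'd lean on that rather than rolling your own.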

Sometimes it’s not possible to have fully isolated data that you can create on the fly for each test. You may have only a certain subset of data available for testing, and it must be reused each time the tests are run. In this case it’s even more important to ensure that the starting state and data are in good shape before the test starts, and to make sure they’re returned to a good state at the end of the test (or, potentially, at the beginning of the test) so everything is ready for the next run.
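For that shared-data case, resetting to a known baseline at the start of each test is often safer than trusting the previous run's teardown, since a crashed run may never have reached its own cleanup. A small sketch (the `BASELINE` values and the in-memory `store` dict are stand-ins for a real database record):

```python
# Known-good baseline for the shared test customer (illustrative values).
BASELINE = {"status": "active", "balance": 0}

def reset_shared_customer(store, customer_id):
    # Copy the baseline so later mutations can't corrupt it.
    store[customer_id] = dict(BASELINE)

# A previous run left the shared record in a bad state...
store = {"cust-1": {"status": "suspended", "balance": 250}}
# ...so the next test resets it before doing anything else.
reset_shared_customer(store, "cust-1")
```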

Here’s what I was able to come up with:

  • UI State: visual elements on the screen, such as modal windows, notifications, and dynamically loaded content
  • Application State: the state of services the application depends on, such as background jobs running or paused
  • Network State: connectivity, including simulated network conditions for testing offline scenarios, slow connections, or packet loss
  • Device and Browser State: for mobile or web applications, the state of the device or browser itself, including screen size/resolution, orientation, and permission settings
  • External Dependencies State: the state of external services that the application integrates with
  • Hardware State: for applications that interact directly with hardware (IoT devices, printers, scanners), the state of those hardware devices
  • Time and Date State: applications that are sensitive to time or date might require manipulating the clock to test time-based features or expiry scenarios