What issues are your automated tests finding?

I have a few questions for folks who are quality / testing specialists and do a lot of automated testing.

  1. What level of tests are you writing? UI E2E focused? API level? etc.
  2. What issues are you catching with these tests?
  3. What issues do you need to catch with hands-on testing? (And are you spending the time to catch them?)
  4. How much of your time is devoted to writing these tests vs maintaining these tests vs hands-on testing?

Thanks!


@oxygenaddict,

In my experience, automated testing has surfaced hundreds of regression bugs, broken workflows, API contract changes, UI rendering glitches, and small validation errors that would otherwise sneak past each release.

We try to maintain a balance across the test levels:

API tests come first, as they are faster and give us confidence in core functionalities.

There are fewer UI E2E tests, but those are tightly targeted at critical user flows like login, checkout, and data submission.

There are some integration tests where several services talk to each other, to catch data mismatches or sequencing issues.
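To make that balance concrete, here is a minimal sketch of the kind of API-level check I mean, assuming pytest and requests; the URL, payload, and response fields are all hypothetical:

```python
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_create_order_returns_contracted_fields():
    """API-level check: fast to run, and guards the contract the UI depends on."""
    payload = {"sku": "ABC-123", "quantity": 2}  # hypothetical payload
    resp = requests.post(f"{BASE_URL}/api/orders", json=payload, timeout=10)

    assert resp.status_code == 201
    body = resp.json()
    # The fields downstream consumers and the UI rely on.
    assert {"id", "status", "total"}.issubset(body)
    assert body["status"] == "pending"
```

Because checks like this already guard the contract underneath, the handful of E2E tests only has to prove that the critical journeys stitch together.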

But as good as automation is, it doesn’t catch everything. Humans are excellent at usability testing and exploratory checks, as well as at visual consistency and the edge cases where human judgement matters. For instance, automation wouldn’t notice that the UI “feels off” or that a particular workflow is confusingly repetitive.

If I had to put a rough split on where my time goes, it would be something like this:

  • 40% writing tests for the new stuff that comes along,
  • 30% keeping flaky and unreliable tests alive (because products keep changing),
  • 30% hands-on testing (exploratory, ad hoc, and pre-release checks).

The ratio shifts with each release cycle; when automation takes over the mundane checks, I get more time for exploratory ones.


Those are cracking questions, and being a stats nut they’ve motivated me to look deeper into this in my organisation :grin:. I often feel I have to explain to those outside QA, who often give the mantra “Why don’t you automate everything?”, the difference in motivation between automated tests and exploratory testing.

In my opinion, automated tests are written to prove stuff works. Exploratory testing is searching for stuff that doesn’t work. So on point 2, I’m quite comfortable that automated tests find substantially fewer issues, because they’re not created explicitly to find bugs.

For point 1, we write UI/API and even DB tests (because one of our products is a Data Warehouse). We didn’t necessarily set out to live by the testing pyramid, but when you look at the number of test assets, DB and API are by far the most numerous, so it’s kind of happened organically.
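For anyone curious what a DB-level test can look like on a warehouse, here is a hedged sketch using sqlite3 purely as a runnable stand-in; the table names (staging_orders, fact_orders, dim_customer) and the reconciliation rules are invented for illustration:

```python
import sqlite3


def check_warehouse_load(conn: sqlite3.Connection) -> None:
    """DB-level checks: row counts reconcile and no orphaned keys after a load."""
    src = conn.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
    assert src == tgt, f"row count mismatch: staging={src}, warehouse={tgt}"

    orphans = conn.execute(
        """SELECT COUNT(*) FROM fact_orders f
           LEFT JOIN dim_customer d ON f.customer_id = d.customer_id
           WHERE d.customer_id IS NULL"""
    ).fetchone()[0]
    assert orphans == 0, f"{orphans} fact rows have no matching customer"


if __name__ == "__main__":
    # Tiny in-memory stand-in for the real warehouse, just to make the sketch runnable.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE staging_orders (order_id INTEGER, customer_id INTEGER);
        CREATE TABLE fact_orders    (order_id INTEGER, customer_id INTEGER);
        CREATE TABLE dim_customer   (customer_id INTEGER);
        INSERT INTO staging_orders VALUES (1, 10), (2, 11);
        INSERT INTO fact_orders    VALUES (1, 10), (2, 11);
        INSERT INTO dim_customer   VALUES (10), (11);
        """
    )
    check_warehouse_load(conn)
    print("warehouse checks passed")
```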

For point 3, we don’t have a specific plan to capture particular types of issues. But the weaknesses we go after are usually around feature definition (are the benefits of this feature really there?), impact on other components and interfaces beyond the devs’ remit, and common areas of defects.

For point 4? …not a clue :laughing: Unless you do timesheets and can 100% trust that data, it’s all going to be subjective.

So far our automation is pretty low maintenance. Time spent writing is not as high as I’d like, but enough that there is clear benefit on key products, and hands-on testing vs automated execution is about 50/50. That’s a very rough estimate.

There is usually a lifecycle aspect to what your automation will find.

Developers can start with automation before the code exists; they are going to find a lot of things this way.

Then there is automation used to experiment or to chase a specific risk, i.e. exploratory automation: a race condition, big data usage, or a security risk investigation, for example. These still tend to be fairly hands-on, but they will try to catch the risk they are designed to catch.
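As a hedged illustration of that kind of exploratory automation, here is a throwaway probe that fires a single-use action concurrently to see whether a race lets it succeed more than once; the endpoint, payload, and status codes are hypothetical:

```python
import concurrent.futures

import requests

ENDPOINT = "https://staging.example.com/api/vouchers/redeem"  # hypothetical endpoint
VOUCHER = {"code": "WELCOME10"}                               # hypothetical payload


def redeem() -> int:
    """Attempt one redemption and return the HTTP status code."""
    return requests.post(ENDPOINT, json=VOUCHER, timeout=10).status_code


def probe_double_redemption(attempts: int = 20) -> None:
    """Fire many redemptions at once; a single-use voucher should succeed exactly once."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=attempts) as pool:
        statuses = list(pool.map(lambda _: redeem(), range(attempts)))

    successes = statuses.count(200)
    print(f"statuses: {statuses}")
    assert successes <= 1, f"race suspected: voucher redeemed {successes} times"


if __name__ == "__main__":
    probe_double_redemption()
```

Probes like this tend to be disposable: once the question about the risk is answered, the script has done its job.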

Automated system monitoring for a specific risk can also fall into this category of focused discovery: monitoring for the system getting overloaded or for a DDoS attack, for example. You usually have these because you know the risk still exists and you have accepted it rather than removed it, which may not have been possible.
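Sketched very roughly, that kind of focused monitoring check might look like the loop below; the health URL, latency budget, and print-as-alert are placeholders, and in practice this would live in a proper monitoring tool rather than a hand-rolled script:

```python
import time

import requests

HEALTH_URL = "https://app.example.com/health"  # hypothetical endpoint
LATENCY_BUDGET_S = 2.0                         # hypothetical accepted threshold


def check_once() -> None:
    """One probe: raise an alert (here, just print) if the accepted risk starts to bite."""
    start = time.monotonic()
    try:
        resp = requests.get(HEALTH_URL, timeout=10)
        elapsed = time.monotonic() - start
        if resp.status_code != 200 or elapsed > LATENCY_BUDGET_S:
            print(f"ALERT: status={resp.status_code}, latency={elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"ALERT: health check failed: {exc}")


if __name__ == "__main__":
    while True:  # in practice a scheduler or monitoring platform drives this
        check_once()
        time.sleep(60)
```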

Then there is the common regression suite. When you are building a suite here, it’s very likely you will find a lot of issues during its creation, covering many different types of risk depending on the tests themselves. Once it’s up and running, though, that coverage switches to primarily regression risk.

That singular focus on regression risk tends to create significant debate, for the following reason: someone who says they are 100% automated may in effect be saying they focused only on regression risk, at the cost of all other risks being ignored. Now extend that to the time question: even 50% spent on regression risk compared with other risks could be seen as absurd by many, yet this is common.

The time question then extends again. You have regression vs all other risks, and on top of that you have very well-known risks, which you automate, versus things not so well known, which benefit from hands-on learning and experimentation. The known vs unknown effort will change during a product’s lifecycle: mostly unknown as the product starts, but as more things become known the balance will often shift, sometimes to that 70% regression-risk focus.