Automation of critical tests only

One of the talks at TestBash Thursday (the one by Areti in the afternoon) got me thinking:

Does anyone automate critical tests only, i.e. only the things that would prevent a release to production? We potentially automate lots of tests, but then if they fail we have to spend time figuring out what the impact actually is on the release (i.e. will it block it or not). Does anyone organise their automated tests into different packs to make that impact easier to identify (e.g. critical tests in one pack, medium priority and lower priority in others)?

By automating tests, do you mean tests at all levels or only end-to-end? If you’re only referring to e2e, I would definitely focus first on automating checks of the “critical” paths. If after that you still have time, I would rather spend the rest of it improving test coverage at the lower levels (API, integration, unit) than adding more coverage at the e2e level.

2 Likes

Absolutely! Automating everything is not only incredibly expensive to implement and maintain, but the amount of redundancy it implies is tremendously wasteful.

What to automate should be as active a decision as what to test - both require a sense of risk in our systems, and other rationalisation skills testers need to develop. Great that you’re considering this.

5 Likes

We use test automation. We develop a healthcare app, and it works well.

@pasku_lh @wildtests Thanks for the replies. I’ll give some basic examples.

For an ecommerce site, we have user registration. Some of the requirements involved in registration include: the chosen password and confirm password must match, the password must conform to security criteria, and the user must input an email address in a valid format, [Something]@[something].[something].

The critical test would be: a user can complete the registration process.
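To make that concrete, here is a minimal sketch of what that single release-gating check could look like, written with Playwright (the /register route, the selectors, and the success indicator are all hypothetical placeholders for whatever your site actually uses):

```ts
import { test, expect } from '@playwright/test';

// Critical path only: can a user get through registration end to end?
// Field selectors, route, and success indicator are assumptions - adapt to your app.
test('user can complete registration @critical', async ({ page }) => {
  await page.goto('https://example.com/register');

  await page.fill('#email', 'new.user@example.com');
  await page.fill('#password', 'S3cure-Passw0rd!');
  await page.fill('#confirm-password', 'S3cure-Passw0rd!');
  await page.click('button[type="submit"]');

  // The release-gating assertion: registration actually completed.
  await expect(page.locator('.welcome-message')).toBeVisible();
});
```

Note that the mismatched-password and invalid-email variations deliberately stay out of this check; on this approach they would live in a lower-priority pack, if they get automated at all.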

It is very easy to start automating the other examples I gave, but are they critical? (The exception is maybe the password strength check.) They have already been tested during the system testing phase and work.

For example, if there’s a regression in this area and someone doesn’t enter the same password in the password/confirm password boxes, is this critical? No. The same goes if someone doesn’t enter a valid email (if they don’t enter a valid email, that’s on them). We have options later: either our periodic session-based regression testing covers it, or, if someone reports it in production, we can correct it (it’s not the end of the world and can be fixed if need be).

I’m looking at potentially creating builds which run critical tests only (if any of these fail then we can’t release) and then having some supplementary builds for the other tests we automate, run intermittently.
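If you use a runner that supports it, one way to express that split is in the test configuration itself. A sketch using Playwright projects (the project names and the @critical tag convention are my assumptions, not an established standard):

```ts
import { defineConfig } from '@playwright/test';

// Sketch: assumes tests carry tags in their titles, e.g. test('... @critical', ...).
export default defineConfig({
  projects: [
    {
      // Release-gating pack: if anything here fails, we can't release.
      name: 'critical',
      grep: /@critical/,
    },
    {
      // Supplementary pack: medium/low priority tests, run intermittently.
      name: 'supplementary',
      grepInvert: /@critical/,
    },
  ],
});

// The release build would then run:  npx playwright test --project=critical
// and a scheduled job would run:     npx playwright test --project=supplementary
```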

Thinking about your automation strategy is essential.
You may find several resources about it, for example this talk from Jack Taylor at TestBash Germany 2019: https://www.ministryoftesting.com/dojo/series/testbash-germany-2019/lessons/when-to-say-no-to-automation-jack-taylor
If you want to start with very basic automated checks (e.g. for smoke testing) to give you some peace of mind, then great. Otherwise you may find yourself running those checks manually plenty of times, which is not very productive.

Before automated tests, we prioritized tests so we knew which ones needed to be run first. Essentially, an application would get written, and we would run the tests which made sense for the application thus far. At first this might be manually testing 20 things. As more features got added, we’d have more things to test.

At some point, it might be that tests are now taking weeks to execute. The Project Manager would ask my opinion on whether we were ready to ship. If I just ran through the tests in the order they were developed, I might only have executed 60% of the tests. If I organized the tests by priority, most important/critical first, then I might find 30% are critical, 45% are important but can be fixed after release, and 25% are nice to have.

Essentially, it was a sliding scale. I won’t say all critical tests were executed, or all critical and important ones. It would be more that the top 60% of the tests had been executed.

Now that I automate tests I can actually get more testing done. A feature comes out, I test it and automate it. When feature 2 comes out, I might do a little maintenance on feature 1 automation and implement feature 2 automation. In the final iteration I might be able to say 90% of the tests have been run and, for the most part, they are all passing.

Bottom line, regardless of automated or manual, if you run out of time to test everything, it is important to know you have tested the most critical aspects of the application.

I’d never say I’m just going to automate critical tests. I’m more inclined to plan on automating everything BUT I might end up automating just the critical tests and my remaining effort goes somewhere else (or I just don’t have time for anything else).

Additionally, I have tagged tests for easier identification. Ideally, I want to run tests when a developer checks in code. I do not want the developer watching the build for 10 minutes (or longer) waiting for all the tests to pass. If it takes too long, a developer just won’t wait. If you can, keep tests under 5 minutes; under 3 minutes is even better.

But if you have different types of tests and the entire test suite is taking longer and longer, I’d use tags to determine what is critical and gets run on every check-in, and what is longer-running and tested nightly. I’ve had test suites which took 2 days to complete, so a lot of that would run on weekends. So really, I try to automate everything, but I only run the less critical automation on a nightly or weekly basis. We want to test often (fail fast) but sometimes that just isn’t realistic.
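As a sketch of how that tag-based slicing can look in practice (the tag names and the check-in/nightly/weekend split are illustrative, and the commands assume Playwright's CLI):

```ts
import { test } from '@playwright/test';

// Tags in the test title make it cheap to slice one suite into packs.
// The tag names and the scheduling split below are illustrative, not prescriptive.

test('login succeeds with valid credentials @critical', async ({ page }) => {
  await page.goto('https://example.com/login'); // fast, release-gating check
  // ...assertions elided in this sketch
});

test('order history paginates across many pages @nightly', async ({ page }) => {
  await page.goto('https://example.com/orders'); // longer-running; not worth blocking a check-in
  // ...assertions elided in this sketch
});

// On check-in:  npx playwright test --grep "@critical"        (keep this pack under ~5 minutes)
// Nightly:      npx playwright test --grep "@nightly"
// Weekend:      npx playwright test --grep-invert "@critical"  (everything else)
```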

Finally, by automation I’m talking about unit level, API, contract, service, integration, UI, system, e2e, etc. If a defect would get caught by, say, a unit-level test, then I’d NEVER automate it at a higher level.
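For instance, the email-format rule from the registration example earlier in the thread is a pure function, so it can be pinned down at the unit level and never needs e2e coverage. A sketch, where isValidEmail is a hypothetical helper and the regex is deliberately simplistic:

```ts
import assert from 'node:assert';

// Hypothetical validator for the [something]@[something].[something] requirement.
// The regex is deliberately simplistic; real email validation is messier.
function isValidEmail(input: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
}

// Unit-level checks: milliseconds to run, no browser, no test environment.
// A defect here gets caught long before any e2e suite would execute.
assert.strictEqual(isValidEmail('user@example.com'), true);
assert.strictEqual(isValidEmail('no-at-sign.example.com'), false);
assert.strictEqual(isValidEmail('user@no-dot'), false);
console.log('email validator checks passed');
```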

1 Like

I did that as well, using Protractor. Works smoothly.