Before automated tests, we prioritized tests to know which ones needed to be run first. Essentially, an application would get written, and we would run whichever tests made sense for the application thus far. At first this might mean manually testing 20 things. As more features got added, we'd have more things to test.
At some point, the tests might be taking weeks to execute. The Project Manager would ask my opinion on whether we were ready to ship. If I just ran through the tests in the order they were developed, I might only have executed 60% of the tests, with no guarantee they were the right 60%. If I organized the tests by priority, most important/critical tests first, the suite might break down as 30% critical, 45% important but fixable after release, and 25% nice to have.
Essentially, it was a sliding scale. I couldn't say that all the critical tests had been executed, or all the critical and important ones; it was more that the top 60% of the tests, by priority, had been executed.
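To make that sliding scale concrete, here is a minimal sketch of executing tests in priority order until a time budget runs out. The test names, priorities, and durations are invented for illustration:

```python
# Run tests most-critical-first so that whatever fraction completes
# before time runs out is always the highest-priority slice.
CRITICAL, IMPORTANT, NICE_TO_HAVE = 1, 2, 3  # lower number = higher priority

tests = [
    ("checkout flow",    CRITICAL,     30),  # (name, priority, minutes)
    ("login",            CRITICAL,     15),
    ("report export",    IMPORTANT,    45),
    ("profile settings", IMPORTANT,    20),
    ("theme switcher",   NICE_TO_HAVE, 25),
]

def plan(tests, time_budget):
    """Return the tests that fit in the budget, most critical first."""
    executed = []
    for name, priority, minutes in sorted(tests, key=lambda t: t[1]):
        if time_budget < minutes:
            break  # out of time: everything run so far outranks what's left
        time_budget -= minutes
        executed.append(name)
    return executed

print(plan(tests, 70))  # ['checkout flow', 'login']
```

With a 70-minute budget only the two critical tests fit, which is exactly the point: the 40% that ran is the 40% that mattered most.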
Now that I automate tests, I can actually get more testing done. A feature comes out; I test it and automate it. When feature 2 comes out, I might do a little maintenance on the feature 1 automation and implement the feature 2 automation. By the final iteration, I might be able to say 90% of the tests have been run and, for the most part, they are all passing.
Bottom line: whether your testing is automated or manual, if you run out of time to test everything, it is important to know you have tested the most critical aspects of the application.
I'd never say I'm just going to automate the critical tests. I'm more inclined to plan on automating everything, BUT I might end up automating just the critical tests while my remaining effort goes elsewhere (or I simply run out of time for anything else).
Additionally, I tag tests for easier identification. Ideally, I want tests to run when a developer checks in code, but I do not want the developer watching the build for 10 minutes (or longer) waiting for all the tests to pass. If the build takes too long, a developer just won't wait. If you can, keep the check-in tests under 5 minutes; under 3 minutes is even better.
But if you have different types of tests and the entire suite is taking longer and longer, I'd use tags to determine what is critical and runs on every check-in versus what is longer running and tested nightly. I've had test suites that took 2 days to complete, so much of that ran on weekends. So really, I try to automate everything, but I run the less critical automation only on a nightly or weekly basis. We want to test often (fail fast), but sometimes that just isn't realistic.
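As one way to implement that tagging, here is a sketch using pytest's marker feature. The marker names (`critical`, `nightly`) are my own convention, not a standard, and `authenticate` is a hypothetical stand-in for a real login call:

```python
import pytest

def authenticate(user, password):
    # Hypothetical stand-in for the application's real login call.
    return bool(user and password)

@pytest.mark.critical      # fast test: run on every check-in
def test_login_succeeds():
    assert authenticate("user", "s3cret")

@pytest.mark.nightly       # slow test: run in the nightly build only
def test_full_report_generation():
    assert authenticate("admin", "s3cret")  # placeholder for a long scenario
```

The check-in build would run `pytest -m critical` while the nightly job runs `pytest -m nightly` (or `-m "not critical"`). Custom markers should be registered in `pytest.ini` so pytest doesn't warn about unknown marks.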
Finally, by automation I'm talking about every level: unit, API, contract, service, integration, UI, system, e2e, etc. If a defect would get caught by, say, a unit-level test, then I'd NEVER automate it at a higher level.
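To illustrate that last principle: if a defect in, say, an input validator can be caught by a unit test, test the function directly rather than driving the whole UI to reach it. The validator and its rules below are invented for illustration:

```python
# Hypothetical validator: a bug here is caught in milliseconds by a unit
# test, with a precise failure location, instead of by a slow end-to-end
# test that has to navigate the UI just to exercise this one function.
def valid_username(name):
    return 3 <= len(name) <= 20 and name.isalnum()

assert valid_username("alice")
assert not valid_username("ab")          # too short
assert not valid_username("bad name!")   # illegal characters
```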