When setting up the framework or automating a new feature, one of the most important questions is what to automate and what not to automate.
So what parameters are considered when deciding this, e.g. the number of hours needed to test it manually is high, etc.?
Regardless of the type of framework, I am looking for one with reusable components that are easy for new joiners to pick up, and clear logs that show whether a failure is an application issue or an automation issue; otherwise the team is stuck in a loop trying to identify why the test case failed.
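As a rough illustration of that last point, here is a minimal Python sketch (standard library only, all names invented, not tied to any particular framework) of tagging each failure in the logs as an application issue or an automation issue:

```python
# Minimal sketch of making the logs say whether a failure is an application
# issue or an automation/framework issue. Names here are illustrative only.
import logging
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("suite")

class ApplicationIssue(AssertionError):
    """The product behaved incorrectly (a real defect candidate)."""

class AutomationIssue(RuntimeError):
    """The test code or environment broke (locator, timeout, test data...)."""

@contextmanager
def step(name: str):
    """Wrap a test step so the log states which kind of failure occurred."""
    try:
        yield
        log.info("PASS  %s", name)
    except ApplicationIssue as exc:
        log.error("FAIL  %s -- APPLICATION issue: %s", name, exc)
        raise
    except Exception as exc:
        log.error("FAIL  %s -- AUTOMATION issue: %s", name, exc)
        raise AutomationIssue(str(exc)) from exc

# Usage: assertions about product behaviour raise ApplicationIssue; anything
# else (element not found, connection refused) is reported as an automation
# issue, so triage doesn't start from scratch.
with step("order total is recalculated after discount"):
    total = 90  # imagine this came from the UI or an API call
    if total != 90:
        raise ApplicationIssue(f"expected 90, got {total}")
```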
For a new feature:
- A business-critical feature that adds value & saves time
- Critical paths and the most-used paths, over paths that are rarely used and less impactful
It also depends on how easy it is to automate: if something is hard to automate and the implementation will be flaky, don’t do it.
If you do NOT trust your automated tests and you’ll still manually test the process, then don’t automate it. What’s the point?
Those are the things I keep in the back of my head when deciding what to automate.
I assume we are talking about full stack test automation, the kind that goes from the UI down to the DB through the whole application.
My approach is to only automate Critical User Journeys. What are they? I do a little research, look at data, talk to my teams, and work out which core user journeys of the business must work no matter what.
Also assuming more full stack or system level testing:
Definitely automate anything that’s identified as critical. Past that, it depends a lot on your project setup and the risk tolerance your project has for different things.
Maybe you are in a regulated environment, and need to be able to show which tests link to which requirements.
Maybe you’re building a web portal that doesn’t really have a lot on the front end, so integration and API tests cover almost everything you care about, and you can get away with very little on the system and UI side.
I do tend to err on the side of “more stable tests are better than fewer”.
The degree to which I want to run it unattended.
One extreme is automated checks running on a CI server. Those have to be quite stable; “flakiness” is a problem here, as they run completely unattended by a human.
The computer needs to know (i.e. it needs to be developed in code) the solution to every problem that comes up. By its very nature as code, the number of such solutions is limited.
If you go for an attended approach, you can make the computer pause at situations it has trouble with, let a human execute some action, and then have the computer continue.
That way humans can improvise solutions dynamically, which the computer cannot do (or only at a much higher cost).
There is also the question of whether your automation uses checks (fixed assumptions about the output) or whether you use automation as a tool to get information from the test object and let humans judge it openly. I call the latter semi-automation.
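A very rough sketch of what that attended / semi-automated flow could look like, assuming plain Python with invented step names; the point is only that the script pauses for a human action and asks for a human judgement instead of asserting a fixed expectation:

```python
# Rough sketch of the "attended" / semi-automation idea: the script does the
# mechanical work, then pauses so a human can act or judge the output openly.
# The steps and prompts below are made up for illustration.

def fetch_report() -> str:
    # Imagine this drives the app / API and collects some output.
    return "42 rows exported, 0 errors, duration 3.1s"

def attended_run() -> None:
    output = fetch_report()
    print(f"Automation gathered: {output}")

    # Pause for a human when the script cannot handle the situation itself,
    # e.g. a CAPTCHA, a one-time token, or an unexpected dialog.
    input("If any manual action is needed, do it now, then press Enter... ")

    # Instead of a hard-coded check, ask the human to judge the information.
    verdict = input("Does the output above look right? [y/n] ").strip().lower()
    if verdict != "y":
        print("Recorded as a potential problem for follow-up.")
    else:
        print("Recorded as looking fine.")

if __name__ == "__main__":
    attended_run()
```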
I stole some bits from a talk a colleague once did, all about how you write up a list of test cases and then size the amount of work. Then you size the likely flakiness of each test (based partly on how long it might take to run, or just on how bad your test tooling is in that specific domain). Next you add a column for how often the test will actually detect defects: automating a check for something that will never fail isn’t entirely pointless, but it’s low value. Add a column for how critical the test is to the business, another for how hard manual testing of this test case is, and so on; you get the gist. Add up all the scores and you get a priority of sorts.
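A tiny sketch of how that column-summing might look if you moved it from a spreadsheet into code, assuming each factor is rated 1-5 with equal weight; the test names and scores below are invented:

```python
# Each candidate test gets a 1-5 score per factor; higher is more favourable
# to automation. The total gives a rough automation priority.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    effort_to_automate: int      # 5 = cheap to automate
    expected_stability: int      # 5 = unlikely to be flaky
    defect_detection: int        # 5 = likely to catch real defects
    business_criticality: int    # 5 = critical user journey
    manual_test_pain: int        # 5 = very tedious to test manually

    @property
    def priority(self) -> int:
        return (self.effort_to_automate + self.expected_stability
                + self.defect_detection + self.business_criticality
                + self.manual_test_pain)

candidates = [
    Candidate("checkout with saved card", 3, 4, 4, 5, 5),
    Candidate("footer copyright year",    5, 5, 1, 1, 2),
    Candidate("bulk CSV import",          2, 2, 4, 4, 5),
]

# Highest total score first: a priority of sorts, not a verdict.
for c in sorted(candidates, key=lambda c: c.priority, reverse=True):
    print(f"{c.priority:>2}  {c.name}")
```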
However, when you actually implement some of these tests, they will unlock other tests and make them easier to automate, so the priorities will shift. I wrote a bit about this in a blog post years ago: Which Test Cases to Automate.