In my company (Web application), teams vary in their testing ratios; some lack unit or end-to-end tests altogether. What would be a healthy default ratio if I were to suggest one for all the teams to follow? From your personal experience, which approach do you recommend to minimize critical bugs in the live environment?
EDIT - Most of the teams don't have dedicated testers. We have over ten product teams supported by two QAs.
Do good testing no matter the automation.
Try the damn thing in different variations. Automation often lacks variation from a user perspective (while some automation can help with data variation).
About the ratio of automation:
There are sadly some lazy developers who have to be pushed to do unit checks. If they can get away with not doing something like this, I wonder about your company culture. That a tester is the one caring about this speaks for itself. This should be basic for developers and not be your problem.
At my current company we do the following two things (it is a data-heavy product and most things happen on the server):
an "E2E" automation via APIs. This executes reasonable "business logic" scenarios on our server and checks some outcomes (see the sketch after this list);
dedicated UI automation which is just intended to check that the client still works: different views and windows can be opened and core elements are available. We do not trigger data-heavy business logic here (maybe we could click related buttons, but I would not rely heavily on the outcome).
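Purely for illustration, here is a minimal sketch of what such an API-level "E2E" check could look like in Python with pytest and requests. The base URL, endpoints, and payload are hypothetical placeholders, not the actual system described above.

```python
import os

import requests

# Hypothetical base URL of the system under test; in reality this would
# point at a dedicated test environment.
BASE_URL = os.environ.get("SUT_BASE_URL", "https://test-env.example.com")


def test_order_flow_via_api():
    """Drive a 'business logic' scenario through the API and check outcomes."""
    session = requests.Session()

    # Create an order (hypothetical endpoint and payload).
    create = session.post(
        f"{BASE_URL}/api/orders",
        json={"customer_id": 42, "items": [{"sku": "ABC-1", "qty": 2}]},
        timeout=30,
    )
    assert create.status_code == 201
    order_id = create.json()["id"]

    # Read the order back and check that the server-side logic produced
    # the expected outcome (here: a status and a calculated total).
    read = session.get(f"{BASE_URL}/api/orders/{order_id}", timeout=30)
    assert read.status_code == 200
    assert read.json()["status"] == "confirmed"
    assert read.json()["total"] > 0
```

Checks like this stay comparatively fast and stable because they never touch the UI; the UI suite only has to verify that views open and core elements render.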
I don't like referring to it as "the testing pyramid"; it is not "the", it is "a" pyramid.
I doubt that one single ratio for all teams would be good. Every team needs to find its own.
But "I'm lazy, therefore I don't care" is not good reasoning.
According to the Test Pyramid, it is:
Unit: 70-80%
Integration: 15-20%
E2E: 5-10%
Unit tests are quick to execute, relatively easy to implement and maintain, and find bugs at the unit level. Having more unit tests ensures that each component functions correctly in isolation, catching errors early in the development process. Integration tests are more complex than unit tests, but they are crucial for ensuring that the components/services of the system work together correctly. E2E tests simulate real user scenarios from start to finish; they are the most expensive to write and maintain, so limiting them to critical user flows helps manage team resources.
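As a concrete illustration of why unit tests are the cheapest layer, here is a minimal pytest sketch; discount_price is a hypothetical function standing in for any small piece of business logic.

```python
import pytest


def discount_price(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_applies_percentage():
    assert discount_price(200.0, 25) == 150.0


def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        discount_price(200.0, 120)
```

Tests at this level run in milliseconds, need no environment, and point at exactly the component that broke, which is what makes a 70-80% share realistic.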
Based on my experience, achieving 80% code coverage through unit testing is feasible, and this ratio is quite feasible too. It's worth noting that E2E and integration testing are often combined or swapped in place, so in my opinion it's acceptable to have 15-20% E2E tests in your testing pyramid. But it's not ideal for integration + E2E tests to make up roughly the same percentage as unit tests. I have seen such cases, and while they have their drawbacks, sometimes QA has more resources than the developers and prefers to cover as many E2E flows as possible, not just the critical ones. As I previously mentioned, there is a best-practice ratio, but real-life situations may be different, with entirely different proportions. Sometimes, instead of integration testing, you can use contract testing, and you may have a certain number of functional tests that are not unit tests but cover really simple and isolated cases.
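On the contract-testing point, dedicated tools such as Pact are the usual choice; purely as a simplified illustration, here is a sketch of a contract-style check in plain Python that verifies a hypothetical response against the fields and types a consumer relies on. The endpoint and contract are made up for the example.

```python
import requests

# The "contract": the fields and types this consumer depends on.
# In a real setup this would live in a shared, versioned contract file.
ORDER_CONTRACT = {
    "id": int,
    "status": str,
    "total": (int, float),
}


def test_order_response_matches_consumer_contract():
    # Hypothetical endpoint; a real contract test would typically run
    # against the provider in isolation, with stubbed data.
    response = requests.get("https://test-env.example.com/api/orders/1", timeout=30)
    assert response.status_code == 200

    body = response.json()
    for field, expected_type in ORDER_CONTRACT.items():
        assert field in body, f"missing field: {field}"
        assert isinstance(body[field], expected_type), f"wrong type for: {field}"
```

The idea is to pin down only what the consumer actually depends on, which is usually far cheaper to maintain than full integration flows.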
Concentrate not on the ideal ratio or best practices but on the aspects of your teams, products, and resources, and on the things that really work for your situation, help build quality software, and are effective for your business goals.
Most of the teams don't have dedicated testers. We have over ten product teams supported by two QAs.
Then developers will write all the automated tests, because obviously you won't have time for this; you can only give them priorities and guidance according to your situation.
Don't add new code, or change as little code as possible;
Don't release product code live;
Release features that clients don't use;
Don't use a strict process that fragments work, restricts a human thinking flow, and consistently interrupts random parts of development (like some strictly forced Agile);
Have people who are happy and involved in their work, who will look for the best in themselves and believe in the product they develop;
Don't give clients a way to report bugs (so you won't see any);
Fix the critical bugs, don't just add them to the work backlog, then release;
Test less, so you'll never be aware of any critical bugs you might release;
Automation is code (except it's work done not on the product that sells, but on a separate one).
I usually consider in this equation: the time required to add the automation, the costs of this, the opportunity costs, the expertise of the developers doing it, the risks, the maintenance costs, the company image, and client contracts/business value.
I'd start small and evaluate continuously. Sometimes none is the answer; other times the automation cost is 10 times more than the actual product code change.
The level at which one automates depends on extra technical factors after the above evaluation (some layers might be more costly, others more useful, others easier, others unreliable).
I've gathered information on the types of tests used by different teams and tried to see if there's any impact on bugs or incidents. However, I couldn't find any correlation in the data. Interestingly, teams with extensive testing still have high incident rates. Should I focus on test coverage instead?
Any thoughts on this?