How do you determine the ratio between unit, integration and end-to-end tests?

In my company (a web application), teams vary in their testing ratios; some lack unit or end-to-end tests altogether. What would be a healthy default ratio if I were to suggest one for all the teams to follow? From your personal experience, which approach do you recommend to minimize critical bugs in the live environment?

EDIT - Most of the teams don’t have dedicated testers. We have over ten product teams supported by two QAs.

5 Likes

Do good testing no matter the automation.
Try the damn thing in different variations. Automation often lacks variation from a user perspective (though some automation can help with data variation).
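
On the data-variation point, here is a minimal sketch of what automation can cover cheaply, assuming a Jest/TypeScript setup; the `applyDiscount` function and its cases are invented for illustration. `test.each` drives one check through many inputs, which is where automation earns its keep; the user-perspective variation still has to come from a human:

```typescript
// Hypothetical pure function under test -- stands in for any business rule.
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError('percent out of range');
  return price * (1 - percent / 100);
}

// test.each runs the same check across many data variations.
test.each([
  [100, 0, 100],       // no discount
  [100, 50, 50],       // half price
  [100, 100, 0],       // free
  [19.99, 10, 17.991], // non-round inputs
])('applyDiscount(%f, %f) === %f', (price, percent, expected) => {
  expect(applyDiscount(price, percent)).toBeCloseTo(expected);
});
```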

About the ratio of automation:

  • There are sadly some lazy developers who have to be pushed to write unit checks. If they can get away with skipping something like this, I wonder about your company culture. That a tester is the one caring about this speaks for itself; it should be basic for developers and not be your problem.
  • Instead of the pyramid, I suggest this sphere model: Round Earth Test Strategy - Satisfice, Inc.
  • At my current company we do the following two things (it is a data-heavy product and most things happen on the server); sketches of both follow after this list:
    • ā€œE2Eā€ automation via APIs. This executes business logic on our server in reasonable scenarios and checks some of the outcomes.
    • Dedicated UI automation which is only intended to check that the client still works: different views and windows can be opened and core elements are available. We do not trigger data-heavy business logic here (maybe we could click the related buttons, but I would not rely heavily on the outcome).
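
To make those two layers concrete, here is a minimal sketch of the API-level ā€œE2Eā€ idea, assuming a Jest/TypeScript setup on Node 18+ (global fetch); the `/api/orders` endpoint, payload, and fields are invented for illustration:

```typescript
// API-level "E2E": drive business logic through the server's public API
// and assert on outcomes. Endpoint and payload shape are hypothetical.
const BASE = process.env.API_BASE_URL ?? 'http://localhost:3000';

test('creating an order computes the total on the server', async () => {
  const res = await fetch(`${BASE}/api/orders`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: [{ sku: 'A1', qty: 2, unitPrice: 10 }] }),
  });
  expect(res.status).toBe(201);

  const order = await res.json();
  expect(order.total).toBe(20); // 2 x 10 -- the server-side business outcome
});
```

And a sketch of the ā€œdoes the client still workā€ UI layer, assuming Playwright (`@playwright/test`) with a `baseURL` configured; the routes and element names are placeholders:

```typescript
import { test, expect } from '@playwright/test';

// UI smoke check: core views open and key elements render.
// No assertions on computed data -- that belongs to the API-level suite.
test('main views open and core elements are present', async ({ page }) => {
  await page.goto('/');
  await expect(page.getByRole('navigation')).toBeVisible();

  await page.goto('/reports');
  await expect(page.getByRole('heading', { name: 'Reports' })).toBeVisible();
});
```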

One more perspective: https://mrslavchev.com/2018/05/31/hindsight-lessons-about-automation-models-of-automation/

I don’t like referring to it as ā€œThe testing pyramidā€; it is not ā€œtheā€, it is ā€œaā€ pyramid.

I doubt that one single ratio for all teams would be good. Every team needs to find its own.
But ā€œI’m lazy, therefore I don’t careā€ is not good reasoning.

2 Likes

According to the Test Pyramid, it is:
Unit 70-80%
Integration 15-20%
E2E 5-10%

Unit tests are quick to execute and relatively easy to implement and maintain, and they find bugs at the unit level. Having more unit tests ensures that each component functions correctly in isolation, catching errors early in the dev process.
Integration tests are more complex than unit tests, but they are crucial for ensuring that the components/services of the system work correctly together.
E2E tests simulate real user scenarios from start to finish; they are the most expensive to write and maintain. Limiting them to critical user flows can help manage team resources.
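
For context, here is a minimal sketch of the kind of cheap, isolated check the unit layer is made of, assuming Jest/TypeScript (the `slugify` helper is invented for the example):

```typescript
// Hypothetical pure helper -- the kind of isolated component unit tests cover.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}

// Fast, dependency-free, and a failure points at exactly one unit.
test('slugify normalizes titles', () => {
  expect(slugify('  Hello, World!  ')).toBe('hello-world');
  expect(slugify('Already-Slugged')).toBe('already-slugged');
});
```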

Based on my experience, achieving 80% code coverage through unit testing is feasible, and this ratio is quite achievable too. It’s worth noting that e2e and integration testing are often combined or swapped, so in my opinion it’s acceptable to have 15-20% e2e tests in your testing pyramid. But it’s not ideal for integration + e2e tests to make up roughly the same percentage as unit tests. I have seen such cases: they have their drawbacks, but sometimes QA has more resources than the developers and prefers to cover as many e2e flows as possible, not just the critical ones.

As I mentioned, there is a best-practice ratio, but real-life situations may call for entirely different proportions. Sometimes, instead of integration testing, you can use contract testing, and you may have a certain number of functional tests that are not unit tests but cover really simple and isolated cases.
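
On the contract-testing option: real setups usually use a tool like Pact, but here is a minimal stand-in for the consumer-side idea, assuming Jest/TypeScript with the `zod` package; the `/api/users/:id` endpoint and its fields are invented:

```typescript
import { z } from 'zod';

// The consumer's expectation of the provider's response -- the "contract".
// Endpoint and fields are hypothetical.
const UserContract = z.object({
  id: z.string(),
  email: z.string().email(),
  createdAt: z.string(), // ISO timestamp as a string
});

test('provider response still satisfies the consumer contract', async () => {
  const res = await fetch('http://localhost:3000/api/users/42');
  expect(res.status).toBe(200);

  // Throws (and fails the test) if the provider drifted from the contract.
  UserContract.parse(await res.json());
});
```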

Concentrate not on the ideal ratio or best practices, but on the specifics of your teams, products, and available resources: what actually works for your situation, helps build quality software, and is effective for your business goals.

Most of the teams don’t have dedicated testers. We have over ten product teams supported by two QAs.

Then the developers will have to write all the automated tests, because obviously you won’t have time for this; you can only give them priorities and guidance according to your situation.

3 Likes
Some sure-fire ways to minimize critical bugs in the live environment (mostly tongue in cheek):
  • Don’t add new code, or change as little code as possible;
  • Don’t release product code live;
  • Release features that clients don’t use;
  • Don’t use a strict process that fragments work, restricts a human’s flow of thinking, and constantly interrupts random parts of development (like some strictly enforced Agile);
  • Have people who are happy and involved in their work, who will look for the best in themselves and believe in the product they develop;
  • Don’t give clients a way to report bugs (so you won’t see any);
  • Fix the critical bugs before you release, instead of just adding them to the work backlog;
  • Test less, so you’ll never be aware of any critical bugs you might release;

Automation is code (except it’s work done not on the product that sells, but on a separate one).
In this equation I usually consider: the time required to add the automation, its cost, the opportunity costs, the expertise of the developers doing it, the risks, the maintenance costs, the company image, and client contracts/business value.
I’d start small and evaluate continuously. Sometimes none is the answer; other times the automation costs 10 times more than the actual product code change.
The level at which one automates depends on additional technical factors after the above evaluation (some layers might be more costly, others more useful, others easier, others unreliable).

1 Like

I’ve gathered information on the types of tests used by different teams and tried to see if there’s any impact on bugs or incidents. However, I couldn’t find any correlation in the data. Interestingly, teams with extensive testing still have high incident rates. Should I focus on test coverage instead?
Any thoughts on this?