Hundreds of fast, atomic autotests vs. a single end-to-end autotest

What would you choose?

Option A:
Hundreds of fast, atomic API and unit-level autotests running in parallel (takes around 20 minutes to complete with parallelization).
Built on mocks, stubs, prepped test data and DB dumps.
Covering both positive and negative scenarios with solid data coverage.
You can start seeing value (and test results) within a few weeks.
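To make Option A concrete, here is a minimal sketch of what one such mock-based test might look like; the `charge_subscription` function and the gateway stub are hypothetical, just to show the positive-plus-negative-scenario style:

```python
# Hypothetical unit-level test: the payment gateway is replaced by a stub.
from unittest.mock import Mock

def charge_subscription(gateway, user_id, plan):
    """Toy function under test: charges a user via an injected gateway."""
    if plan not in ("basic", "premium"):
        raise ValueError(f"unknown plan: {plan}")
    return gateway.charge(user_id=user_id, amount={"basic": 5, "premium": 15}[plan])

# Positive scenario: the gateway stub returns success.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}
assert charge_subscription(gateway, user_id=42, plan="basic") == {"status": "ok"}
gateway.charge.assert_called_once_with(user_id=42, amount=5)

# Negative scenario: an invalid plan never reaches the gateway.
try:
    charge_subscription(gateway, user_id=42, plan="gold")
except ValueError:
    pass  # expected: the function rejects unknown plans up front
```

Each test of this shape runs in milliseconds, which is what makes hundreds of them feasible in parallel.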

Or

Option B:
A single, end-to-end autotest covering the full user journey across multiple features.
For example: registering an account → completing a profile → buying a subscription → interacting with paid content.
Touches frontend, backend, DB, third-party integrations.
Includes creating and verifying data in external systems.
Takes an hour to run, covers mostly the happy path.
You can have it in a few months.
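Structurally, Option B is one long chain where each step feeds the next; a toy sketch of that shape (every step function below is a stand-in, not a real implementation):

```python
# Toy skeleton of a single end-to-end journey: each step depends on the previous one.
def register_account(email):
    return {"email": email, "user_id": 1}            # stand-in for UI + backend + DB

def complete_profile(account):
    return {**account, "profile_complete": True}     # stand-in

def buy_subscription(account):
    assert account["profile_complete"]               # can't buy without a profile
    return {**account, "plan": "premium"}            # stand-in for payment integration

def use_paid_content(account):
    return account["plan"] == "premium"              # stand-in

account = register_account("user@example.com")
account = complete_profile(account)
account = buy_subscription(account)
assert use_paid_content(account)  # the whole happy path, end to end
```

The chain is the point: a failure at any step blocks everything downstream, which several replies below pick up on.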

Yeah, according to the testing pyramid, ideally you’d want both.
But let’s say you had to choose. Or, if you could have both, would you start with the E2E?

4 Likes

I would select Option A, and here’s why:

  1. It will be available soon.
  2. It covers both positive and negative scenarios.
  3. Fast availability means we can share feedback sooner, so if there is scope for improvement, the team can act on it quickly.
  4. Since they are unit-level, I’m assuming each test will be small, so even with a large number of them, they remain easy to manage.

But I’d love to hear what you would choose if you got the same question with the same options, and why.

2 Likes

Option A, basically for the same reasons, but additionally: I think it would be easier to start with such an approach; the tests are easier and cheaper to support and update; if a couple of tests fail, all your other tests keep running and testing things; they’ll probably be more stable; it’s easier to pinpoint the particular bug (in a function, microservice, module, API endpoint, etc.); and it’s easier for others to contribute additional test cases even if they are not experienced.

1 Like

Agreed, that’s also a reason for selecting A. With Option B, things might not be as simple as we assumed, so maintaining and updating the whole suite of E2E tests, which won’t even be available for a couple of months, will be more time-consuming.

1 Like

I would choose a combination of both.

  • While the first one is closer to what devs generally do, in the form of unit tests.
  • Though it also covers APIs, which are a good place to start and get quick results.

But in my honest opinion, the second option is framed with a bias that gives more priority to the first one.

  • Generally people have end-to-end tests, but is that really a single test (1 test) run with UI automation tools?
  • We would have multiple tests for various areas, be it buying a subscription or something else.
  • I would do this because it actually confirms how the user uses the system.

If things fail, the first can tell me where the problem is; the second can tell me which of the possible scenarios got affected.

1 Like

I would base my decision on the question, “What’s the most important thing we need feedback on right now, and what’s the optimal way to achieve that?” Option A is always great to have, as a foundation, but what if there are critical flows which can only be tested at the UI level (assuming that’s what you mean with E2E)?

I actually assumed a different question, based on the title alone. I thought the question was not between API and unit tests vs E2E, but rather lots of small tests via the UI vs one long test of everything.

In that case, my answer would be to have lots of small tests, with the setup done via APIs. So, for example, if you need a specific object to exist before you start the flow that needs to be tested, create that object via API rather than going through the UI. This cuts out any issues in what would otherwise be preluding UI flows, focusing just on the thing you actually want to test. Or, in your example, buying a subscription would be a separate test from registering an account. It wouldn’t reuse or copy anything from account registration; rather, the account would be created via API and used in the subscription UI test. That way, even if the account registration flow is broken, the subscription can still be tested.
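A sketch of that idea, with the backend faked in memory so the shape is visible; the `create_account_via_api` helper and the in-memory store are hypothetical stand-ins for real API calls and UI-driving code:

```python
# Set up preconditions via an API call, keep the UI test focused on one flow.
accounts = {}  # stand-in for the system under test's database

def create_account_via_api(email):
    """Setup step: one POST instead of driving the registration UI."""
    accounts[email] = {"email": email, "subscribed": False}
    return accounts[email]

def ui_buy_subscription(email):
    """The only flow this test exercises through the UI (simulated here)."""
    accounts[email]["subscribed"] = True
    return accounts[email]

# Even if the registration UI were broken, this test could still run:
account = create_account_via_api("sub-test@example.com")
assert ui_buy_subscription(account["email"])["subscribed"] is True
```

The setup call is fast and independent of the registration UI, which is exactly what decouples the subscription test from upstream breakage.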

Not sure whether that answers your question / helps at all. Feel free to follow up.

1 Like

Technically you’re right, and I agree with you, but the problem is that you have to choose because of limited resources and deadlines. And yeah, I know one E2E ‘journey’ sounds crazy, but let’s say this is based on a real-world situation, and it’s a ‘requirement’ to have it exactly like that, not as separate smaller E2E tests.

1 Like

If it’s like 0 or 1,

then don’t choose B, because:

  • if any failure happens mid-journey, you will never know how the downstream system is working, as the journey is broken there and you cannot test further
  • if sign-in is failing, you cannot test other pages
  • with individual API cases, you can.

So I would go with Option A in that case.

1 Like

Good scenario to think about @shad0wpuppet
Key for me in Option B is ‘User Journey’, and you also mention ‘Happy Path’.

As I am sure you are aware, the common entry point for issues is ‘Users’ and show me a user that will always use the Happy Path :laughing:

I tend to look at “Risk” prioritisation on auto tests: think of the ‘critical path/action’ etc. and look at what issue a failure would cause. By this I mean inconvenience to the user, loss of reputation and, of course, loss of money! (add data protection to this as well)

Sounds like your scenario is a journey, so in most cases, an E2E is likely a journey of many parts and not in fact a single one, so you would still need to consider the variation of entry and exit criteria on each part of the journey.

In Option B, your most important aspect has to be ‘Register’, followed by ‘Log In’; without these, the rest is impossible, so this immediately directs you back to Option A.

Ideally, you should cover both, but the focus should be on each independent element within your journey. I’m fairly sure there is no ‘fixed path’, so you need to check the multiple scenarios.

Hope that helps - there is no easy answer and no way to reduce time and effort on this I think

1 Like

I’ve been in situations where only option B has been available in the past and I would absolutely prefer to have sufficient coverage around option A.

Option B could also be covered manually.

2 Likes

If you were to ask me what I would focus on first: Option B, since if your E2E happy path isn’t working, then 99% of your users will be stuck.

BUT for automation coverage I choose Option A, simply because of coverage.
Since you are covering happy and unhappy flows, and most likely building up parts of your E2E along the way, it will have the highest coverage. PLUS you’d probably do one manual test through the UI for the happy path anyway; in order to do a payment, you’d have to interact with all the previous elements regardless.

Option A still :stuck_out_tongue:
Building via iterative process

1 Like

Keep in mind that Option A starts out lean, but as you add more tests, and unless you also optimize them, it can get bloated over time: many, many tests whose aggregate takes quite some time to run, even with parallelization.
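Rough arithmetic on that point: with roughly uniform test durations and balanced workers, wall-clock time grows with ceil(tests / workers), so doubling the suite roughly doubles the runtime unless you also add workers. The numbers below are purely illustrative:

```python
import math

def suite_runtime_minutes(num_tests, avg_minutes_per_test, workers):
    """Idealized wall-clock time: perfectly balanced parallel workers."""
    return math.ceil(num_tests / workers) * avg_minutes_per_test

assert suite_runtime_minutes(400, 0.5, 10) == 20.0   # roughly the 20-minute suite from Option A
assert suite_runtime_minutes(800, 0.5, 10) == 40.0   # double the tests, double the time
assert suite_runtime_minutes(800, 0.5, 20) == 20.0   # unless you also double the workers
```

Real suites are less tidy (uneven test durations, fixture setup, scheduling overhead), but the scaling direction is the same.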

However, despite the bloat, it would still generally be more optimal than the E2E test(s). It just isn’t necessarily as lean as when things started out.

Option B could be improved (for comparison with Option A) with a more modular design, with the complete journey covered by a test suite rather than a single test of everything. As @cassandrahl mentioned, small tests (via UI). UI tests don’t even have to exercise the complete page under test: in theory, if the UI is componentized into widgets, you could design tests that load just part of the UI, such as a single widget, and test it in isolation from the rest, as a form of micro UI tests. You would then test the complete page in a regular UI automation test that covers the integration of all the widgets.

Also wanted to point out that Option B does not have to wait until the “system” is ready to test, e.g. waiting on the UI being ready before starting UI automation. With proper collaboration with the UX and product teams, and maybe the UI team too, you could actually do the test (framework/tooling) preparation during development, then verify and complete the implementation/integration when the system is about ready. You can predesign the test workflow code and put pseudo-code placeholders in place, filling in the implementation details of the end-to-end tests later: UI element locator values, and the exact workflow steps (click this, do that, etc.) that were previously pseudo-code, now translated into actual calls to the UI test tool. Done this way, Option B should not lag far behind Option A in terms of waiting for the system to be ready. Most people start Option A as the components are built out; the trick is to follow the same technique for Option B as well.
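A sketch of that placeholder technique: the journey is coded against a page-object interface first, and the locator/step details are filled in once the UI exists. Everything below is hypothetical, not any real tool’s API:

```python
# Phase 1: write the journey against placeholders while the UI is still in development.
class RegistrationPage:
    def fill_email(self, email):
        raise NotImplementedError("TODO: locator + input step once UI is ready")
    def submit(self):
        raise NotImplementedError("TODO: click the submit button")

def registration_journey(page, email):
    """The workflow is final; only the page-object internals are pending."""
    page.fill_email(email)
    return page.submit()

# Phase 2: once the UI ships, only the page object is implemented (faked here):
class ReadyRegistrationPage(RegistrationPage):
    def __init__(self):
        self.email = None
    def fill_email(self, email):
        self.email = email                  # would become a real UI-tool call
    def submit(self):
        return {"registered": self.email}   # would become a real click + wait

assert registration_journey(ReadyRegistrationPage(), "a@b.c") == {"registered": "a@b.c"}
```

The journey code never changes between phases; only the page object does, which is what lets the E2E suite be written in parallel with development.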

If you want to tackle both at the same time, it’s best to use a framework that supports both, to minimize work and maintenance. In theory, with a skilled enough and/or big enough team, you could tackle both around the same time, though likely with one prioritized over the other. You don’t necessarily need to do one after the other in sequence.

1 Like