Option A:
Hundreds of fast, atomic API and unit-level autotests running in parallel (takes around 20 minutes to complete with parallelization)
Built on mocks, stubs, prepped test data and DB dumps.
Covering both positive and negative scenarios with solid data coverage.
You can start seeing value (and test results) within a few weeks
Or
Option B:
A single, end-to-end autotest covering the full user journey across multiple features.
For example: registering an account → completing a profile → buying a subscription → interacting with paid content.
Touches frontend, backend, DB, third-party integrations.
Includes creating and verifying data in external systems.
Takes an hour to run, covers mostly the happy path.
You can have it… in a few months
Yeah, according to the testing pyramid, ideally you'd want both.
But let's say you had to choose. Or, if you could have both, would you start with the E2E?
Option A, basically for the same reasons, but additionally I think it would be easier to start with such an approach, and easier/cheaper to support and update the tests. If a couple of tests fail, all your other tests are still running and testing stuff. They'll probably be more stable, it'll be easier to pinpoint the particular bugs (in a function, microservice, module, API endpoint, etc.), and it's easier for others to contribute (add additional test cases) even if they are not experienced.
Agree, that's also one reason for selecting A. With option B, things might not be as simple as we assumed… and hence it will be more time-consuming to maintain and update a whole suite of E2E tests that will only become available after a couple of months.
I would base my decision on the question, "What's the most important thing we need feedback on right now, and what's the optimal way to achieve that?" Option A is always great to have as a foundation, but what if there are critical flows which can only be tested at the UI level (assuming that's what you mean by E2E)?
I actually assumed a different question, based on the title alone. I thought the question was not between API and unit tests vs E2E, but rather lots of small tests via the UI vs one long test of everything.
In that case, my answer would be to have lots of small tests, with the setup done via APIs. So, for example, if you need a specific object to be created before you start the flow which needs to be tested, create that object via API rather than going through the UI. This cuts out any issues in the preceding UI flows and focuses on just the thing you actually want to test. Or, in your example, buying a subscription would be a separate test from registering an account. It wouldn't reuse or copy anything from account registration; rather, the account would be created via API and used in the subscription UI test. That way, even if the account registration flow is broken, the subscription can still be tested.
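To make that concrete, here is a minimal sketch of the "set up via API, test via UI" idea, assuming a Playwright Test setup with a configured baseURL; the /api/accounts endpoint, its payload, and all the labels and button names are hypothetical placeholders for whatever your application actually exposes.

```ts
// Minimal sketch: create the account via API, then test only the subscription
// flow via the UI. Assumes a Playwright Test setup with baseURL configured;
// the /api/accounts endpoint, payload shape, labels and button names are
// hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('existing user can buy a subscription', async ({ page, request }) => {
  // Arrange: create the account directly via the API instead of driving the
  // registration UI, so a broken registration flow cannot block this test.
  const email = `sub-test-${Date.now()}@example.com`;
  const res = await request.post('/api/accounts', {
    data: { email, password: 'S3cret!pw' },
  });
  expect(res.ok()).toBeTruthy();

  // Act: exercise only the flow under test (buying a subscription) in the UI.
  await page.goto('/login');
  await page.getByLabel('Email').fill(email);
  await page.getByLabel('Password').fill('S3cret!pw');
  await page.getByRole('button', { name: 'Log in' }).click();
  await page.getByRole('link', { name: 'Subscriptions' }).click();
  await page.getByRole('button', { name: 'Buy monthly plan' }).click();

  // Assert: the purchase is confirmed in the UI.
  await expect(page.getByText('Subscription active')).toBeVisible();
});
```

The point is simply that the registration UI never runs here, so a broken registration flow cannot fail this test.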
Not sure whether that answers your question / helps at all. Feel free to follow up.
Technically, you are right, and I agree with you, but the problem is that you have to choose because of limited resources and deadlines. And yeah, I know one E2E "journey" sounds crazy, but let's say this is based on a real-world situation, and it's a "requirement" to have it exactly like that, not as separate smaller E2E tests.
If any failure happens somewhere in the journey, you'll never know how the downstream parts of the system are working, because the journey is broken at that point and the test cannot go any further.
Good scenario to think about @shad0wpuppet
Key here for me is on Option B: "User Journey", and you also mention "Happy Path".
As I am sure you are aware, the common entry point for issues is "Users", and show me a user who will always use the Happy Path.
I tend to look at "Risk" prioritisation on auto tests. Think of "critical path / action" etc. and look at what issue a failure would cause… By this I mean inconvenience to the user, loss of reputation and, of course, loss of money! (Add data protection to that list as well.)
Sounds like your scenario is a journey, and in most cases an E2E test is really a journey of many parts, not in fact a single one, so you would still need to consider the varying entry and exit criteria for each part of the journey.
In Option B, your most important aspect has to be "Register", followed by "Log In" - without these, the rest is impossible, so this immediately directs you back to Option A.
Ideally, you should cover both, but the focus should be on each independent element within your journey. I'm fairly sure there is no "fixed path", so you need to check the multiple scenarios.
Hope that helps - there is no easy answer and no way to reduce the time and effort on this, I think.
I've been in situations in the past where only option B was available, and I would absolutely prefer to have sufficient coverage around option A.
If you were to ask me what I would focus on first: Option B, since if your E2E happy path isn't working, then 99% of your users will be stuck.
BUT for automation coverage I choose option A. Simply because of coverage.
Since you are doing happy & unhappy flows and most likely have some build-up parts of your E2E, it will have the highest coverage. PLUS, you'd probably do one manual test through the UI covering the happy path, so in order to make a payment you'd have to interact with the previous elements anyway.
Keep in mind that option A is lean in the beginning, but as you add more tests to it, and unless you optimize the tests as well, over time it can get bloated too: many, many tests whose aggregate takes quite some time to run, even with parallelization.
However, despite the bloat, it would still be more optimal than the E2E test(s) in general. It just isn't necessarily as lean as when things started out.
Option B could be improved (for comparison with option A) with a more modular design, with the complete testing covered by a test suite rather than a single test of everything. As @cassandrahl mentioned, small tests (via the UI). UI tests don't even have to be complete in the sense of covering the complete UI page under test. In theory, if the UI is componentized into widgets, you could also design the tests so that you load just part of the UI, like a single widget, and test it in isolation from the rest of the UI as a form of micro UI tests. You then test the complete UI in a regular UI automation test that checks the full integration of the page's widgets.
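As a sketch of what such a micro UI test could look like, assuming a React-based UI and Playwright's experimental component-testing mode (any component-testing tool would do); SubscriptionWidget, its props, and the expected texts are hypothetical.

```tsx
// Mounts a single widget in isolation: no full page, no backend, no routing.
// Assumes a React UI and Playwright's experimental component-testing mode;
// SubscriptionWidget, its props, and the expected texts are hypothetical.
import { test, expect } from '@playwright/experimental-ct-react';
import SubscriptionWidget from './SubscriptionWidget';

test('subscription widget renders the selected plan', async ({ mount }) => {
  const widget = await mount(<SubscriptionWidget plan="monthly" price="9.99" />);

  // Assert on the widget alone; the rest of the page is never involved.
  await expect(widget).toContainText('Monthly');
  await expect(widget).toContainText('9.99');
});
```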
Also wanted to point out that option B does not have to wait until the "system" is ready to test - for example, waiting for the UI to be ready before starting UI automation. With proper collaboration with the UX and product teams, and maybe the UI team too, you could actually do the test (framework/tooling) preparation during development and then do/verify/complete the implementation/integration when the system is about ready. You can predesign the test workflow code and put pseudo-code placeholders in place, filling in the implementation details of the end-to-end tests later: for example, UI element locator values and the exact workflow steps (click this, do that, etc.) that were previously pseudo-code and now translate to actual function calls to the UI test tool. Done this way, your option B should not lag far behind option A in the sense of having to wait for the system to be ready to start. Since most people start option A as the components are built out, the trick is to follow that same technique for option B as well.
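For illustration, a journey skeleton along those lines might look like this, written before the UI exists; Playwright is only an assumed tool here, and every locator, route, and step is a placeholder to be filled in later.

```ts
// Skeleton of the end-to-end journey, written before the UI is finished.
// Playwright is just an assumed tool; every locator and route below is a
// placeholder. test.fixme() keeps the test visible in reports but skipped
// until the real implementation details are filled in.
import { test, expect } from '@playwright/test';

// Placeholder locators, to be replaced once the real screens exist.
const locators = {
  registerButton: 'TODO: selector for the "Register" button',
  profileForm: 'TODO: selector for the profile form',
  buySubscriptionButton: 'TODO: selector for the "Buy subscription" button',
};

test.fixme('journey: register -> profile -> subscription -> paid content', async ({ page }) => {
  await page.goto('/'); // TODO: confirm the real entry route

  // Step 1: register an account (pseudo-code until the UI is ready).
  // await page.locator(locators.registerButton).click();
  // ...fill in the registration form...

  // Step 2: complete the profile.
  // await page.locator(locators.profileForm)...

  // Step 3: buy a subscription and verify paid content is reachable.
  // await page.locator(locators.buySubscriptionButton).click();
  // await expect(page.getByText('Premium content')).toBeVisible();
  expect(true).toBeTruthy(); // placeholder assertion until the steps above are real
});
```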
If you want to tackle both at the same time, it is best to use a framework that supports both, to minimize work and maintenance. In theory, with a skilled enough team and/or a big enough team, you could tackle both around the same time, though likely with one prioritized over the other. But you don't necessarily need to do them one after the other in sequence.
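As one hypothetical illustration of a single framework covering both, a Playwright-style config could split the suite into an API project and a UI project; the directory names and baseURL are placeholders.

```ts
// playwright.config.ts - one framework, two "projects": fast API-level tests
// (option A style) and slower browser journeys (option B style), sharing the
// same fixtures, reporting and CI wiring. Directory names and baseURL are
// hypothetical placeholders.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,
  use: { baseURL: 'https://staging.example.com' },
  projects: [
    {
      name: 'api',              // option A: no browser, just the request fixture
      testDir: './tests/api',
    },
    {
      name: 'e2e',              // option B: full browser journeys
      testDir: './tests/e2e',
      use: { ...devices['Desktop Chrome'] },
    },
  ],
});
```

Sharing fixtures, reporting, and CI wiring across both suites is where most of the maintenance savings would come from.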