User Journey Test Assertion Granularity

How granular should assertions be in automated user journey tests?

If a journey includes navigating through 10 pages, for example, should we assert that everything on each page is in the right place? Or should it be closer to merely asserting that the correct page displays?

That’s always an interesting question. The amount of checking really depends on the goal of the test, what level of test it is, and your risks. And many more factors, to be fair.

Is the goal to check all the page content? If so, having 10 smaller tests, each checking one page, may be better (see the sketch after this post).

Do you need to get to page 10 to perform an action and then check the result? If so, maybe the bare minimum of checks on the 9 pages in the flow is good enough.

In general you want a test to check one thing. So in short, I wouldn’t check everything on every page in one test. That’s going to take a long time to run and be really brittle.
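A minimal sketch of the one-test-per-page option, assuming Playwright, a configured baseURL, and that each step of the journey is directly addressable by URL; the route, heading, and field labels are hypothetical:

```ts
// page-content.spec.ts: a sketch assuming Playwright, a configured
// baseURL, and that each step is directly addressable; the route,
// heading, and labels below are hypothetical.
import { test, expect } from '@playwright/test';

test('step 3 shows its form fields', async ({ page }) => {
  // Deep-link straight to the page instead of replaying the journey.
  await page.goto('/step-3');

  await expect(page.getByRole('heading', { name: 'Shipping details' })).toBeVisible();
  await expect(page.getByLabel('Address')).toBeVisible();
  await expect(page.getByLabel('Postcode')).toBeVisible();
});
```

Each of these stays fast and independent: if step 3 has a typo, only the step 3 test fails.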

@flynnbops, that resonates with me. But my mind is spinning now that we are experimenting with what people call “E2E tests” or “user journey” tests. They each take so long that breaking the test-one-thing heuristic is tempting. It’s like, “geez, it takes 40 seconds to get this far in the journey, we better assert as much as we can since we finally got here.”

But that comes with baggage. If a long user journey is one test, and we assert things along the journey, it starts to feel the same as building a suite of distinct tests that are NOT independent of each other. And we have the age-old antipattern: if test #1 out of 100 fails, so do the remaining 99 tests, whether the things they were checking worked or not. Ick!

Make sure you don’t test anything twice.
For example: you can check messages and logic at the API level, so you don’t have to do it in the UI.
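A minimal sketch of that idea using Playwright’s request fixture, assuming a configured baseURL; the /api/checkout endpoint and the error payload shape are hypothetical:

```ts
// validation.api.spec.ts: a sketch assuming Playwright's request
// fixture and a configured baseURL; the endpoint and error payload
// shape are hypothetical.
import { test, expect } from '@playwright/test';

test('missing email returns a validation message', async ({ request }) => {
  // Exercise the validation logic directly: no browser involved.
  const response = await request.post('/api/checkout', {
    data: { email: '' }, // deliberately invalid
  });

  expect(response.status()).toBe(400);

  const body = await response.json();
  // Hypothetical error payload shape.
  expect(body.errors).toContainEqual(
    expect.objectContaining({ field: 'email' }),
  );
});
```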

Those checks you can then skip in your UI tests. I would also try to keep the number of checks in one test as small as possible. So if your goal is to reach page 10 and you start at page 1, go through all the pages without asserting, and assert only on page 10, once it has been reached.
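As a sketch, assuming Playwright, a configured baseURL, and a wizard where every step has a “Next” button; the URLs and the final heading are hypothetical:

```ts
// journey.spec.ts: a sketch assuming Playwright, a configured
// baseURL, and a wizard where every step has a "Next" button.
// URLs and the final heading are hypothetical.
import { test, expect } from '@playwright/test';

test('user can reach the confirmation page', async ({ page }) => {
  await page.goto('/step-1');

  // Walk steps 1 to 9 without content assertions; a broken button
  // or failed navigation will still fail the test on its own.
  for (let step = 1; step < 10; step++) {
    await page.getByRole('button', { name: 'Next' }).click();
  }

  // The only assertions live on page 10: did we actually get there?
  await expect(page).toHaveURL(/step-10/);
  await expect(page.getByRole('heading', { name: 'Confirmation' })).toBeVisible();
});
```

Note that navigation failures along the way still fail the test, so you lose nothing by skipping intermediate assertions.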

Based on the requirements you can decide where to assert; the thing to note is where to use a soft assertion and where to use a hard assertion.

With a soft assertion, the automated test case continues to execute until the end, and at the END reports all the soft assertions that failed, so you have data for all failures in one shot.

With a hard assertion, if we know that mandatory fields must be filled before you can go to the next page, or that something required on a page blocks you from moving forward, then use a hard assertion, because nothing after that point can meaningfully pass.
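Playwright, for example, has this distinction built in: a plain expect() stops the test (hard), while expect.soft() records the failure and carries on. A sketch with hypothetical selectors, route, and copy:

```ts
// soft-vs-hard.spec.ts: a sketch using Playwright's built-in soft
// assertions; selectors, route, and copy are hypothetical.
import { test, expect } from '@playwright/test';

test('review page content', async ({ page }) => {
  await page.goto('/review');

  // Hard assertion: if we are not on the review page at all,
  // nothing below can pass, so stop immediately.
  await expect(page.getByRole('heading', { name: 'Review order' })).toBeVisible();

  // Soft assertions: each failure is recorded but the test keeps
  // running, so one run reports every broken value at once.
  await expect.soft(page.getByTestId('subtotal')).toHaveText('$40.00');
  await expect.soft(page.getByTestId('shipping')).toHaveText('$5.00');
  await expect.soft(page.getByTestId('total')).toHaveText('$45.00');
});
```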

Advice: automation is there to perform repetitive tasks, and the beauty is that we can encode all the assertions that are overhead for a functional tester. At the end we have to report the health of the release, and if we miss assertions there is no point in automating, as we will get those bugs back from production and have to add all those missing assertions anyway :slight_smile:


@ejacobson it really is tricky to find a sweet spot. For me it comes down to checking things at different levels. If you are doing some e2e / journey tests, then you (probably) only really care about completing that journey successfully. All those other assertions you mention are probably better tested elsewhere.

If that e2e test fails, it should fail because something is wrong with that key journey, not because of a typo on page 2.

I’ve had good success with breaking suites into different categories and running them in order of importance, e.g. e2e flows, “pages look OK” tests, etc.

If the e2e tests pass, then you can run the next batch, and so on. You get quicker feedback on the stuff you care about most, and you don’t lose anything in terms of coverage.
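One way to wire that up, if you happen to be on Playwright, is project dependencies: a project only runs after the projects it depends on have passed. The project names and directories below are hypothetical:

```ts
// playwright.config.ts: a sketch of ordering suites by importance
// with project dependencies; names and directories are hypothetical.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    // The key journeys run first.
    { name: 'e2e-flows', testDir: './tests/e2e' },
    // Page-content checks are skipped unless the journeys passed.
    { name: 'page-checks', testDir: './tests/pages', dependencies: ['e2e-flows'] },
  ],
});
```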

Other ways of slicing the suites are worth considering too; the right split depends on which feedback your team needs fastest.
