What does a good automated E2E report look like?

A good automated end-to-end (E2E) test report for a software application can contain a lot of information.

These reports are usually generated automatically by the test run and delivered to you in a dashboard or something similar.

What are the important aspects of a test report? I’ll go first, from my own experience:

  • Status: Did each test pass or fail, with a breakdown of the passed and failed tests? If a test failed, what kind of failure was it?
  • Test environment specs: Where was the test run and under what circumstances, with evidence of the environment captured in the report?
  • Details: Video, screenshots, and detailed logs for failed tests (some failures might be false positives)
  • Team collab area: A way to take action immediately when a test fails. Can I @ a team-mate to ask them to resolve it? Maybe an integration that creates a ticket from a failed test? (I’ve sketched roughly what I mean below.)
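Pulling those together, the rough shape I have in mind for a single run’s report is something like the sketch below (all names and fields are made up, not any particular tool’s format):

```ts
// Hypothetical shape of one E2E run report -- field names are illustrative only.
interface E2ERunReport {
  runId: string;                 // links the report back to its CI artifact
  status: "passed" | "failed";   // overall verdict for the run
  environment: {                 // where the test ran and under what circumstances
    browser: string;             // e.g. "chromium 120"
    os: string;                  // e.g. "ubuntu-22.04"
    baseUrl: string;             // which deployment was exercised
    commit: string;              // the code under test
  };
  tests: TestResult[];
}

interface TestResult {
  name: string;
  status: "passed" | "failed" | "flaky" | "skipped";
  failureKind?: "assertion" | "timeout" | "infrastructure"; // what kind of failure it was
  artifacts?: {                  // evidence for failed tests
    video?: string;
    screenshots?: string[];
    logUrl?: string;
  };
  assignee?: string;             // the @team-mate asked to look at it
  ticketUrl?: string;            // ticket created from the failure, if any
}
```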

What else do you expect from a test report generated automatically by your E2E runs?

Thank you,

3 Likes

You have a great list!!! Really love the team collab area; for me this is just Slack (using emojis and threads to collaborate)

  • Inspectability: going deeper than just a screenshot or video, being able to view the network requests from a previous run to further inspect what happened (much like Chrome DevTools) is a must-have for me going forward!
  • Seeing previous runs to spot trending flaky or failed tests is helpful to review once a month or so
  • The ability to quickly get a copy/paste-able list of the failures to run locally for debugging (rather than re-running each failed test individually) is a big time-saver that can be built out from a report (see the sketch below)
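For that last point, assuming the results can be exported as JSON in a shape like the one sketched earlier in the thread, pulling out the failures and turning them into a single local re-run command is only a few lines (the report file name and the Playwright --grep flag are my assumptions, adjust for your own runner):

```ts
// Sketch: read a JSON report, collect the failed test names, and print one
// command that re-runs just those tests locally.
import { readFileSync } from "node:fs";

interface ReportTest { name: string; status: string; }
interface Report { tests: ReportTest[]; }

const report: Report = JSON.parse(readFileSync("e2e-report.json", "utf8"));

const failed = report.tests
  .filter(t => t.status === "failed")
  .map(t => t.name);

if (failed.length === 0) {
  console.log("No failures to re-run.");
} else {
  // Escape regex metacharacters, then join the names into one alternation
  // that can be pasted straight into a terminal.
  const pattern = failed
    .map(name => name.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"))
    .join("|");
  console.log(`npx playwright test --grep "${pattern}"`);
}
```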
1 Like

Yep, that’s the biggest thing for me that most dashboards miss: reports are useless without comparisons. Videos and network packet inspections are pointless if you cannot see the packets from a week ago side by side.
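Even a tiny diff of two runs would go a long way here. Assuming you can get the raw results out as JSON (file names and shape below are made up), listing the tests whose status changed between last week’s run and today’s is straightforward:

```ts
// Sketch: compare two run reports and list the tests whose status changed.
import { readFileSync } from "node:fs";

interface ReportTest { name: string; status: string; }
interface Report { tests: ReportTest[]; }

// Load a report and index its results by test name.
const load = (path: string): Map<string, string> => {
  const report: Report = JSON.parse(readFileSync(path, "utf8"));
  return new Map(report.tests.map(t => [t.name, t.status]));
};

const previous = load("previous-run.json"); // hypothetical file names
const current = load("current-run.json");

for (const [name, status] of current) {
  const before = previous.get(name) ?? "missing";
  if (before !== status) {
    console.log(`${name}: ${before} -> ${status}`); // e.g. "checkout flow: passed -> failed"
  }
}
```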

2 Likes

How do you compare two or more runs, though? Curious what you are currently using to compare test runs today. Is there a tool, or is it a manual process?

Annoyingly, it’s very manual. When you are comparing two runs, for example the release and dev branches, you end up with two reports open in two browser tabs, and the reports no longer tell you which build they came from because they are just dumb HTML pages without the metadata that links them to the artefact in plain sight. Reports done well are hard, very hard, and I’m dreading having to explain why our performance testing dashboard is totally pointless, for one reason: no more than one or two people have even looked at the board for the entire year.

Holding them side by side feels old-fashioned, and it does not scale when you really want to compare more than two reports. Maybe we just don’t get forced to do it often enough to come up with a better way?
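One small thing that would already help with the “which build is this?” problem is stamping the report with its build metadata when it is generated. A rough sketch (the environment variable names are assumptions, use whatever your CI exposes):

```ts
// Sketch: inject a build-metadata banner into a generated HTML report and keep
// a machine-readable copy alongside it, so two open tabs can always be told apart.
import { readFileSync, writeFileSync } from "node:fs";

const metadata = {
  branch: process.env.BRANCH_NAME ?? "unknown",
  commit: process.env.COMMIT_SHA ?? "unknown",
  buildUrl: process.env.BUILD_URL ?? "unknown",
  generatedAt: new Date().toISOString(),
};

const banner = `<div class="build-banner">
  Branch: ${metadata.branch} | Commit: ${metadata.commit} |
  <a href="${metadata.buildUrl}">CI build</a> | Generated: ${metadata.generatedAt}
</div>`;

// Put the banner right after <body> so it is the first thing you see,
// and write the metadata next to the report for later comparisons.
const html = readFileSync("report.html", "utf8");
writeFileSync("report.html", html.replace("<body>", `<body>${banner}`));
writeFileSync("report.metadata.json", JSON.stringify(metadata, null, 2));
```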

1 Like

I really like this style of report (unrelated to testing): UserBenchmark: Nvidia RTX 3060-Ti vs 4060

That’s how I would build a report in the future 🙂

1 Like