What do you share in your automation reports?

Hi lovely people,

We’re now focusing on reporting automation runs and, specifically, what to share. So our question for today is:

What do you share in your automation reports?

I imagine we all have slightly different reports to share: some similarities, some differences. We’d like to learn more about what details you like to capture and share in your reports.

As always, thank you for your contribution :robot:


For GUIs I always want screenshots of the error situations that occur, because:

  • When it is a bug in the product: most exceptions and stack traces give only a vague impression of deeper problems in the GUI (aside from simple checks). A screenshot usually gives you a good impression of how badly things are really broken (e.g. just a single label missing vs. a whole section gone).
    • Note: checks of values often work for me only as bug indicators. They do not show you the full picture when multiple things, or more fundamental things, are broken.
  • When it is a bug in your automation code (perhaps because it is unknowingly outdated): you directly see the changed GUI and can draw conclusions from it for further testing.
    • Note: I’m not always informed about changes in the GUI, and this way I catch some that would otherwise slip through.

A few inputs that I use in my automation reports:

  1. Bird’s-eye view, i.e. 100 TCs executed: number passed, failed, skipped, un-executed
  2. The same data in the form of a pie chart
  3. Clicking on #1 lets the user see detailed reports filtered by pass/fail etc.
    3.a. These contain a snapshot and any logs that help you understand whether the test case failed because of the application or because it is a flaky automation test case
    3.b. If 10 test cases failed with the same exception, club all 10 test cases under one bucket, so it is easy to identify which bucket to focus on and fix first
  4. It’s good to maintain the history of past runs of the same test suite, in case you need to compare it with the current run.
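Point 3.b. above can be sketched in a few lines. A hedged example, where the result dict keys (`name`, `status`, `exception`) are illustrative assumptions rather than the schema of any particular framework:

```python
from collections import defaultdict

def bucket_failures(results):
    """Group failed test cases by their exception message, so that one
    bucket roughly corresponds to one probable root cause.

    `results` is a list of dicts with hypothetical keys
    'name', 'status', 'exception'.
    """
    buckets = defaultdict(list)
    for r in results:
        if r["status"] == "failed":
            buckets[r["exception"]].append(r["name"])
    # Biggest bucket first: fixing it clears the most failures at once.
    return sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)
```

Sorting by bucket size gives exactly the “which bucket to pick first” ordering the post describes.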

In general in all automation reports I include:

  • Suite

  • Number of total tests which ran

  • Number of total tests passed / failed / skipped / na

  • Environment on which the tests ran (versions, devices)

  • Product name

  • Link to the execution (can be a cloud provider or pipeline job URL)

  • Type of build/branch


What’s an automation report?


Thanks for sharing some examples to add into the upcoming learning journey. There’s a nice mix in here :slight_smile:


What about a video? Would you include that in a report to show everything that has occurred before a bug was “caught”?


That would be cool to have! Great idea!
Why not!?
Not only for failed checks/bugs, but in general to take a look at how the execution went. Not always, but it would be good to have the possibility.
It might reveal other problems in the product which the automation does not catch.


Here’s the list of things I want to see in an automation report:

  • Test suite & test case name
  • Test case description (or at least a link to it, eg in a test management tool)
  • Execution result
  • Execution history (passed/ failed/ etc.), to figure out: is this check failing for the first time? Is it a flaky test?
  • Duration + history (to be able to recognize performance degradation)
  • Logs
  • Data being used (should be visible from the logs)
  • Screenshot/ screen recording (especially in case of failure)
  • Traces (even better than ‘standalone’ Logs & Screenshots, eg Playwright provides them: Trace viewer | Playwright Java)
  • Environment the check has been executed in (which test environment, but also eg what browser & version? what mobile device & os & version?)
  • Version/ build number of the AUT
  • I want to quickly get to a (high-level) overview of all (similar/ related) automated checks: are those failing as well? If yes, it could be an issue with the environment/ test data/ etc., and the cause of the failure doesn’t lie within that specific check
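The “execution history” bullet can feed a simple flakiness heuristic in the report. A rough sketch; the window size and threshold are arbitrary assumptions you would tune for your own suite:

```python
def classify_failure(history, window=10, flaky_threshold=0.2):
    """Give a hint about the latest failure from past results.

    `history` is a list of booleans (True = passed, False = failed),
    newest last. Thresholds are arbitrary; tune them per suite.
    """
    recent = history[-window:]
    failure_rate = recent.count(False) / len(recent)
    if all(recent[:-1]) and recent[-1] is False:
        return "first-time failure"   # likely a new bug: look closely
    if 0 < failure_rate <= flaky_threshold:
        return "possibly flaky"       # intermittent: check traces/history
    return "persistently failing"     # broken check or long-standing bug
```

Surfacing this label next to each failed check answers the “is this failing for the first time, or is it flaky?” question directly in the report instead of making the reader dig through history.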