What do you share in your automation reports?

Hi lovely people,

We’re now focusing on reporting automation runs and, specifically, what to share. So our question for today is:

What do you share in your automation reports?

I imagine we all have slightly different reports to share: some similarities, some differences. We’d like to learn more about what details you like to capture and share in your reports.

As always, thank you for your contribution :robot:

1 Like

For GUIs I always want screenshots of the error situations that occur (see the capture sketch after this list), because:

  • When it is a bug in the product: most exceptions and stack traces only give a vague impression of deeper problems in the GUI (aside from simple checks). A screenshot usually gives you a good impression of how badly it is really broken (e.g. just a simple label missing versus a full section gone).
    • Note: value checks often work for me only as bug indicators. They do not show you the full picture if multiple things, or more fundamental things, are broken.
  • When it is a bug in your automation code (maybe because it has become outdated without you noticing): you see the changed GUI directly and can draw conclusions from it for further testing.
    • Note: I’m not always informed about changes in the GUI, and this way I catch some changes that slip through.
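
A minimal sketch of that kind of capture, assuming Playwright’s Python API (the URL, selector and report folder are made-up placeholders; any GUI driver with a screenshot call works the same way):

```python
# Capture a full-page screenshot whenever a GUI check fails, so the report
# shows how badly the page is actually broken, not just the stack trace.
from pathlib import Path
from playwright.sync_api import sync_playwright, expect

REPORT_DIR = Path("report/screenshots")  # hypothetical output folder
REPORT_DIR.mkdir(parents=True, exist_ok=True)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL
    try:
        # Example check: a label that might have disappeared from the GUI.
        expect(page.locator("#price-label")).to_be_visible()
    except AssertionError:
        # Attach this file to the report next to the exception text.
        page.screenshot(path=REPORT_DIR / "price-label-missing.png", full_page=True)
        raise
    finally:
        browser.close()
```
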
2 Likes

In the automation report, a few inputs that I use:

  1. Bird’s-eye view - e.g. 100 test cases executed, with the number passed, failed, skipped, and not executed
  2. The same data in the form of a pie chart
  3. Clicking on #1, the user can see detailed reports broken down by pass/fail etc.
    3.a. These contain a snapshot and any logs that help you understand whether the failure is an application defect or a flaky automated test
    3.b. If 10 test cases failed with the same exception, club all 10 under one bucket so it is easy to identify which bucket to focus on and fix first (see the sketch after this list)
  4. It is also good to maintain a history of past executions of the same test run so it can be compared with the current run.
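
A minimal sketch of the bucketing in point 3.b, assuming each result is a simple dict with a test id, a status and the exception text (the field names are illustrative, not a real report-tool schema):

```python
# Club test cases that failed with the same exception into one bucket,
# largest bucket first, so it is obvious which problem to pick up first.
from collections import defaultdict

def bucket_failures(results):
    buckets = defaultdict(list)
    for result in results:
        if result["status"] == "failed":
            buckets[result["exception"]].append(result["test_id"])
    return sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)

results = [
    {"test_id": "TC-01", "status": "passed", "exception": None},
    {"test_id": "TC-02", "status": "failed", "exception": "TimeoutError: #login"},
    {"test_id": "TC-03", "status": "failed", "exception": "TimeoutError: #login"},
    {"test_id": "TC-04", "status": "failed", "exception": "AssertionError: price"},
]

for exception, test_ids in bucket_failures(results):
    print(f"{len(test_ids)} failure(s): {exception} -> {', '.join(test_ids)}")
```
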
1 Like

In general, in all automation reports I include (sketched as a single record after the list):

  • Suite

  • Total number of tests that ran

  • Number of tests passed / failed / skipped / not applicable

  • Environment on which the tests ran (versions, devices)

  • Product name

  • Link to the execution (can be a cloud provider or pipeline job URL)

  • Type of build/branch
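
A minimal sketch of those fields gathered into one record, assuming made-up field names and sample values rather than any specific tool’s schema:

```python
# One summary record per run: suite, counts, environment, product,
# a link to the execution and the build/branch it ran against.
from dataclasses import dataclass, asdict
import json

@dataclass
class RunSummary:
    suite: str
    total: int
    passed: int
    failed: int
    skipped: int
    not_applicable: int
    environment: str      # versions, devices
    product: str
    execution_link: str   # cloud provider or pipeline job URL
    build_type: str       # type of build / branch

summary = RunSummary(
    suite="smoke",
    total=120, passed=112, failed=5, skipped=2, not_applicable=1,
    environment="staging / Chrome 126 / Pixel 7",
    product="ExampleShop",
    execution_link="https://ci.example.com/job/1234",
    build_type="release/1.42",
)
print(json.dumps(asdict(summary), indent=2))
```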

2 Likes

What’s an automation report?

2 Likes

Thanks for sharing some examples to add to the upcoming learning journey. There’s a nice mix in here :slight_smile:

1 Like

What about a video? Would you include that in a report to show everything that has occurred before a bug was “caught”?

3 Likes

That would be cool to have! Great idea!
Why not!?
Not only for failed checks/bugs, but overall to take a look at how the execution went. Not always, but to have the possibility.
It might reveal other problems in the product which the automation does not catch.
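
For what it’s worth, a minimal sketch of recording such a video with Playwright’s Python API (the output folder and URL are placeholders); the resulting file can then be linked from the report even for passing runs:

```python
# Record a video of the whole browser session so the report can show
# how the execution went, not only the moment a check failed.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context(record_video_dir="report/videos/")
    page = context.new_page()
    page.goto("https://example.com")  # placeholder; run the checks here
    context.close()                   # the video file is finalised on close
    print("video saved to", page.video.path())
    browser.close()
```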

2 Likes

Here’s the list of things I want to see in an automation report:

  • Test suite & test case name
  • Test case description (or at least a link to it, eg in a test management tool)
  • Execution result
  • Execution history (passed/failed/etc.), to figure out: Is this check failing for the first time? Is it a flaky test?
  • Duration + history (to be able to recognize performance degradation)
  • Logs
  • Data being used (should be visible from the logs)
  • Screenshot/ screenvideo (especially in case of failure)
  • Traces (even better than ‘standalone’ logs & screenshots, eg Playwright provides them: Trace viewer | Playwright Java; a sketch of collecting one follows this list)
  • Environment the check has been executed in (which test environment, but also eg what browser & version? what mobile device & os & version?)
  • Version/ build number of the AUT
  • I want to quickly get to a (high-level) overview of all (similar/related) automated checks: are those failing as well? If yes, it could be an issue with the environment/test data/etc., and the cause of the failure doesn’t lie within that specific check
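
Since traces came up, here is a minimal sketch of collecting one with Playwright’s Python flavour (the post above links the Java docs; URL and paths are placeholders). The resulting trace.zip bundles screenshots, DOM snapshots, network calls and logs into a single artefact for the report:

```python
# Collect a Playwright trace for the run and attach it to the report;
# open it later with: playwright show-trace report/trace.zip
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()
    context.tracing.start(screenshots=True, snapshots=True, sources=True)
    page = context.new_page()
    page.goto("https://example.com")  # placeholder; run the check here
    context.tracing.stop(path="report/trace.zip")
    browser.close()
```
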
3 Likes

Putting my report into context to fit my current role:

I work with some very old Windows Forms UIs, so I cancelled UI automation there. The UI doesn’t change enough to justify the time spent automating tests, so reliance is on manual testing.

The applications are data heavy with a lot of complex calculations being written to SQL Server. There is also a mechanism to pull prices from Exchanges.

The tests are written in .NET/C# and run at night as part of a regression suite.

My reporting is thus two-fold: TeamCity reports and purpose-built logs.
When a failure occurs I look for the following information:

  • What failed? TeamCity will report the test’s namespace, class name, method and iteration.
  • Why did it fail? TeamCity will report any environmental anomalies, or give feedback from the test running a set of ‘expected results’ against the actual results, and TeamCity will detail any discrepancies.
  • To back this all up, I have logging throughout with a level switch. If needed, the test can be re-run with a more granular logging level (a sketch of that idea follows this post).
  • TeamCity does have an automatic ‘flaky’ status it can assign, but I tend to ignore this and make that decision myself.
  • Logs are developed by myself, so of course they report what I see as necessary: date/time, where I am in the test, the state of any variables at that point, etc.

If there is one additional piece of information I would like automated, it would be tying any changes made that day to a failure; at the moment this is a manual process.
The information given is normally enough for me to work with; this is just a wish (or pure laziness :smiley:).
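
The setup above is .NET/C#, but as a hedged illustration of the level-switch idea in Python: the level comes from an environment variable, so a failing test can be re-run with more granular logging without touching the code (the variable name is made up):

```python
# Re-run with TEST_LOG_LEVEL=DEBUG to get the more granular output.
import logging
import os

LOG_LEVEL = os.environ.get("TEST_LOG_LEVEL", "INFO")  # hypothetical variable name

logging.basicConfig(
    level=getattr(logging, LOG_LEVEL.upper(), logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("regression")

log.info("Starting price import check")              # where we are in the test
log.debug("state: exchange=%s, rows=%d", "LSE", 42)  # variable state, DEBUG only
```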

2 Likes

Activity 2.5.2: What Would Be Your Ideal Basic Report

Imagine your build has failed and the automated checks have recorded failures.

Think about what information you would like to be sent when the build fails, to help diagnose the issues.

  • Time of failure - this would be useful to marry up with the application itself; you could go and look in the app’s log files too, for example.
  • New failure - it would be interesting to know whether this was a new failure in this build or it had failed before.
  • Test data used in the test - seeing the test data requirements of the test can give you a hint as to where to start looking.
  • Successful before and afters in other tests - did the other tests run and clean up after themselves, or did they interfere with this test?
  • The last successful step before the failure - with E2E tests, they often say element X cannot be found when there is a network problem. The last successful step tells you where to start from.
  • Environment and run config - it’s on staging, for example, running against unmocked dependencies, in headless mode with this timeout, etc.

Write down the different types of data you would like to have been sent (pulled together into a single payload sketch after the list below).

  • Log files (from the tests and the app itself)
  • Screenshots for UI type tests
  • Network traffic
  • Results of Lighthouse-type tooling (which, when combined with UI tests, can be quite powerful)
  • Link to the build
  • Link to the artefact or commit where the build failed (maybe you could reproduce it locally if you knew)
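
A minimal sketch of a failure notification pulling both lists together into one payload; the keys and sample values are assumptions for illustration, not a real CI schema:

```python
# Everything a person diagnosing the red build might want in one message:
# when and where it failed, what data it used, and links to the artefacts.
import json

failure_report = {
    "time_of_failure": "2024-05-14T02:13:45Z",
    "new_failure": True,                        # first time this check failed?
    "last_successful_step": "login as trader",  # where to start looking
    "test_data": {"user": "trader-01", "market": "LSE"},
    "environment": {"name": "staging", "mocked_dependencies": False,
                    "headless": True, "timeout_s": 30},
    "artifacts": {
        "logs": ["report/test.log", "report/app.log"],
        "screenshots": ["report/screenshots/failure.png"],
        "network_traffic": "report/network.har",
        "lighthouse": "report/lighthouse.json",
    },
    "build_link": "https://ci.example.com/build/1234",
    "commit": "abc1234",                        # to reproduce locally
}

print(json.dumps(failure_report, indent=2))
```
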
1 Like