We’re now focusing on reporting automation runs and, specifically, what to share. So our question for today is:
What do you share in your automation reports?
I imagine we all have slightly different reports to share: some similarities, some differences. We’d like to learn more about what details you like to capture and share in your reports.
For GUIs, I always want screenshots of the error situations that occur, because:
When it is a bug in the product: most exceptions and stack traces only give a vague impression of deeper problems in the GUI (aside from simple checks). Most of the time, a screenshot gives you a good impression of how heavily it is really broken (e.g. just a single label missing versus a full section gone).
Note: value checks often work for me only as bug indicators. They do not show you the full picture when multiple things, or more fundamental things, are broken.
When it is a bug in your automation code (maybe because it is unknowingly outdated): you directly see the changed GUI and can draw conclusions from it for further testing.
Note: I’m not always informed about changes in the GUI, and this way I catch some that slip through.
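A minimal sketch of this screenshot-on-failure idea, kept framework-agnostic: the `take_screenshot` callable is a placeholder (with Selenium it might wrap `driver.save_screenshot`), and the wrapper re-raises the exception so the failure still shows up in the report alongside the captured image.

```python
import datetime
import functools

def screenshot_on_failure(take_screenshot):
    """Decorator factory: on any exception in the wrapped check,
    call take_screenshot(filename) before re-raising."""
    def decorator(check):
        @functools.wraps(check)
        def wrapper(*args, **kwargs):
            try:
                return check(*args, **kwargs)
            except Exception:
                stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
                # Name the file after the failing check so the report
                # links screenshot and check unambiguously.
                take_screenshot(f"{check.__name__}-{stamp}.png")
                raise  # keep the failure visible to the test runner
        return wrapper
    return decorator
```

Injecting `take_screenshot` (rather than hard-coding a driver call) keeps the pattern usable with any GUI driver that can dump an image.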
In the automation report, a few inputs that I use:
1. Bird’s-eye view - i.e. 100 TCs executed, number passed, failed, skipped, un-executed
2. The above data also shown as a pie chart
3. On clicking #1, the user can see detailed reports based on pass/fail etc.
3.a. These contain a snapshot and any logs that help determine whether a test-case failure comes from the application or from a flaky automation test case
3.b. If 10 test cases failed with the same exception, club all 10 test cases into one bucket, so it is easy to identify which bucket to focus on and fix first
4. Good to have: maintaining past history of execution data for the same test run, for comparison with the current run when needed
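The bucketing in 3.b can be sketched as a small grouping step. The input shape here, (test name, exception message) pairs, is an assumption about what the run produces, not a real report schema:

```python
from collections import defaultdict

def bucket_failures(failures):
    """Group failed test cases by exception message and return the
    buckets largest-first, so the biggest bucket is the obvious
    candidate to fix first."""
    buckets = defaultdict(list)
    for test_name, exception in failures:
        buckets[exception].append(test_name)
    return sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)
```

For example, ten timeouts and one locator error would come back as two buckets, with the timeout bucket first.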
That would be cool to have! Great idea!
Why not!?
Not only for failed checks/bugs, but overall, to take a look at how the execution went. Not always, but to have the possibility.
It might reveal other problems in the product which the automation does not catch.
Environment the check has been executed in (which test environment, but also e.g. which browser & version? which mobile device & OS & version?)
Version/ build number of the AUT
I want to quickly get to a (high-level) overview of all similar/related automated checks: are those failing as well? If yes, it could be an issue with the environment/test data/etc., and the cause of the failure doesn’t lie within that specific check.
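One possible sketch of that cross-check, with made-up field names rather than a real reporting schema: tag every result with its environment and AUT build, then flag environments where several checks failed together, since that points at the environment or test data rather than any single check.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    environment: str   # e.g. "staging / Chrome 126"
    aut_build: str     # version/build number of the AUT

def suspicious_environments(results, threshold=2):
    """Return environments in which `threshold` or more checks failed,
    suggesting an environment-level cause rather than one broken check."""
    counts = {}
    for r in results:
        if not r.passed:
            counts[r.environment] = counts.get(r.environment, 0) + 1
    return {env for env, n in counts.items() if n >= threshold}
```

A report could render these flagged environments next to each individual failure, so the reader sees at a glance whether related checks went down together.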