How do you report on your automated tests?

I saw an excellent discussion pop up recently that I thought really needed a home on The Club.

I’m wondering how you all manage reporting on your automated tests. How do you report on which features are tested, or monitor the performance of given areas of code over time? Right now the only metrics we have for our automated tests are code coverage and how often entire builds fail, but nothing more granular.

I suppose at the end of the day what I’m getting at is, I’d love to be able to say that feature X fails its tests 40% of the time and could be considered for refactoring. I’m just not really sure how to get there when we just have a bundle of tests that we presume are testing… something.

What advice would you have for the original poster?

I have used Allure for reporting with the pytest testing framework. With the allure-pytest plugin, you add the @allure.feature decorator to mark a test function (I don’t know if it can be used on a test class as well).
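To make that concrete, here’s a minimal sketch of what the decorators look like (the feature/story names and the toy basket logic are made up for illustration, and it assumes the allure-pytest package is installed):

```python
# Minimal sketch using the allure-pytest plugin (pip install allure-pytest).
# The feature names and the toy "basket" logic are made up for illustration.
import allure
import pytest


@allure.feature("Shopping Basket")
def test_add_item_increases_count():
    basket = []                      # stand-in for the real system under test
    basket.append("toothbrush")
    assert len(basket) == 1


@allure.feature("Shopping Basket")
@allure.story("Removing items")      # optional finer-grained grouping within a feature
def test_remove_missing_item_raises():
    basket = []
    with pytest.raises(ValueError):
        basket.remove("toothbrush")  # list.remove raises ValueError if the item is absent
```

Run it with `pytest --alluredir=allure-results`, then `allure serve allure-results` to open the report. The report groups results by feature, which gets you partway to the “feature X fails its tests 40% of the time” view the original poster was after.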

Allure reporting is language agnostic, so many other languages, like Java, have some type of Allure library that lets you extend your tests to include Allure features.

If you use Jenkins for CI, it has an Allure plugin for reporting as well: https://www.qaautomation.co.in/2018/12/allure-report-integration-with-jenkins.html


I saw a similar discussion on this topic pop up again this week. Thank you, @kamster, for your links so far.

Anyone else got other approaches to reporting on their automation results?

I’ll be looking into Allure now; it seems popular.

I have tests running in Jenkins, and have found the Cucumber Reports plugin really useful. The trend information and the nicely presented reports work really well, in my opinion.


[Screenshot: Cucumber Reports plugin trend information]
