I saw an excellent discussion pop up recently that I thought really needed a home on The Club.
I’m wondering how you all manage reporting on your automated tests. How do you report on which features are tested, or monitor the health of particular areas of the code over time? Right now the only metrics we have for our automated tests are code coverage and how often the entire build fails; nothing more granular.
I suppose, at the end of the day, what I’m getting at is this: I’d love to be able to say that feature X fails its tests 40% of the time and could be a candidate for refactoring. I’m just not sure how to get there when all we have is a bundle of tests that we presume are testing… something.
What advice would you have for the original poster?
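To give the goal a concrete shape before you answer: one common starting point is to tag each test with the feature it covers and aggregate pass/fail outcomes across stored runs. Here is a minimal sketch of that kind of calculation; the `results/` directory, the JSON shape, and the `feature`/`outcome` field names are all illustrative assumptions, not anything the poster described.

```python
# Minimal sketch: per-feature failure rates from stored test-run results.
# Assumed (hypothetical) layout: each CI run writes results/<run>.json shaped like
#   {"tests": [{"feature": "checkout", "outcome": "failed"}, ...]}
import json
from collections import Counter
from pathlib import Path

passed = Counter()
failed = Counter()

for run_file in Path("results").glob("*.json"):
    run = json.loads(run_file.read_text())
    for test in run["tests"]:
        feature = test.get("feature", "untagged")  # untagged tests still get counted
        if test["outcome"] == "failed":
            failed[feature] += 1
        else:
            passed[feature] += 1

for feature in sorted(set(passed) | set(failed)):
    total = passed[feature] + failed[feature]
    rate = failed[feature] / total if total else 0.0
    print(f"{feature}: {rate:.0%} failure rate over {total} test results")
```

The same aggregation can be bucketed by week or by build number instead of summed overall, which is one way to watch a feature’s test health trend over time rather than as a single figure.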