Test Coverage Report: curse or blessing?

Hey guys,

For the past few days I have been working on a report for management that should present the test coverage of our services. We want to know where we can improve our testing/quality and how good we are right now.
I created a report with 3 levels of KPIs (there is a rough sketch of how I pull the numbers for the third one right after the list):

  1. KPI Development: based on the code coverage of a service. (In practice it is mostly either 80% or nothing.)
  2. KPI QA: how many tests we have (manual/automated) and which feature they belong to
  3. KPI Jira: how many tickets were resolved in the last month, how many of them were tested, and how many bugs were created this month
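
To make the third KPI a bit more concrete, here is a minimal sketch of how the numbers could be pulled together from a Jira export. The `Ticket` fields and especially the `tested` flag are assumptions on my side (in our setup it comes from a label / linked test execution), so treat it as an illustration rather than something that will match your data model:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical ticket record; in practice this would come from a Jira export or JQL query.
@dataclass
class Ticket:
    key: str
    issue_type: str            # e.g. "Story", "Task", "Bug"
    created: date
    resolved: Optional[date]   # None if still open
    tested: bool               # assumption: derived from a "tested" label or linked test execution

def kpi_jira(tickets: list[Ticket], year: int, month: int) -> dict:
    """KPI 3: tickets resolved in the given month, how many of them were tested,
    and how many bugs were created in the same month."""
    def in_month(d: Optional[date]) -> bool:
        return d is not None and d.year == year and d.month == month

    resolved = [t for t in tickets if in_month(t.resolved)]
    tested = [t for t in resolved if t.tested]
    bugs_created = [t for t in tickets if t.issue_type == "Bug" and in_month(t.created)]

    return {
        "resolved": len(resolved),
        "resolved_and_tested": len(tested),
        "tested_ratio": round(len(tested) / len(resolved), 2) if resolved else None,
        "bugs_created": len(bugs_created),
    }

# Example with made-up data:
tickets = [
    Ticket("PROJ-1", "Story", date(2023, 4, 28), date(2023, 5, 3), tested=True),
    Ticket("PROJ-2", "Story", date(2023, 5, 2), date(2023, 5, 10), tested=False),
    Ticket("PROJ-3", "Bug", date(2023, 5, 12), None, tested=False),
]
print(kpi_jira(tickets, 2023, 5))
# {'resolved': 2, 'resolved_and_tested': 1, 'tested_ratio': 0.5, 'bugs_created': 1}
```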

I think the third KPI is OK: we can see whether there is a big difference or none at all, and thanks to the created bugs we know which test cases to improve or implement.

However, the other two KPIs are quite disappointing to me.
For the KPI Development it boils down to “Yay, I can see that some services have no code coverage at all” → OK, we can improve those services.
For the KPI QA, I think we all know that we can’t reach 100% test coverage with manual or automated test cases (we try our best, but sometimes the conditions are too bad to know everything). Sorting the tests into sets for different features is really useful, but the result depends on who created the test set: one person likes it detailed, like “Module filter a column”, while another prefers it more generic, like “Module columns”.

So, long story short:

  • do you have test coverage reports?

  • how do you define them? Do you also use different levels for the development and testing teams, or do you only look at the testing team?

  • do you have any tips I can try?

I hope you could follow my description :sweat_smile:
I’m very excited to read your replies :grin:

Best
Melinda


KPIs are always interesting to talk about. To me, the main problem with KPIs in general is that you are often trying to measure something complex in a simple way, and those measurements seldom give the complete picture. This in turn easily drives unwanted behaviours.

When it comes to Test Coverage Reports, the first question to ask is why someone needs that report. Is it to know how much information we have about the quality of the product at a given time?

If it is about how much information we have about the quality of the product at different stages during the development life cycle, then I prefer to look at it as “Quality Information Coverage” instead of “Test Coverage”. I gathered my thoughts in a document:

Of course this is just one way to look at it, but I have found it helpful.

Best regards,

Johan


Hey Johan,

interesting article, it gives a new point of view on quality measurements. I will have a look at how I can set this up in my company/team.
Thanks for the inspiration. :+1:

Best
Melinda


Run away if you have a QA leader who tracks the number of bugs opened by each QA individually.

Instead, I track Feature Coverage for the team as a whole.
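
Roughly like this (a simplified sketch; it assumes you keep a mapping from features to test cases somewhere, which not every team does):

```python
# Rough sketch: feature coverage = features that have at least one test case / all features.
# The feature-to-tests mapping is assumed to come from wherever you manage test cases.
def feature_coverage(features: list[str], tests_by_feature: dict[str, list[str]]) -> float:
    covered = [f for f in features if tests_by_feature.get(f)]
    return len(covered) / len(features) if features else 0.0

features = ["login", "export", "filtering", "billing"]
tests_by_feature = {
    "login": ["TC-1", "TC-2"],
    "export": ["TC-7"],
    "filtering": [],            # no tests yet
}
print(f"{feature_coverage(features, tests_by_feature):.0%}")  # 50%
```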


Code coverage is a curse. IMHO, it’s the star sign of the testing world - it doesn’t exactly hurt to know it, but if you’re using it to make decisions then you’re making bad decisions.

The one KPI I would like to have for automated tests is the % of false positives vs. false negatives. This would require collecting data on every test run, which nobody really wants to do.
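
For what it’s worth, here is a minimal sketch of what that data collection could look like, assuming someone goes back and classifies each run as a real defect or not (which is exactly the part nobody wants to do). The names and the normalisation by all runs are my own assumptions:

```python
from dataclasses import dataclass

# One automated test run, classified after the fact.
# `failed` is what the automation reported; `real_defect` is what a human later confirmed.
@dataclass
class TestRun:
    test_id: str
    failed: bool
    real_defect: bool

def false_result_rates(runs: list[TestRun]) -> dict:
    false_positives = sum(1 for r in runs if r.failed and not r.real_defect)   # test cried wolf
    false_negatives = sum(1 for r in runs if not r.failed and r.real_defect)   # test missed a bug
    total = len(runs)
    # Here the rates are a share of all runs; you could also normalise per test or per defect.
    return {
        "false_positive_rate": false_positives / total if total else 0.0,
        "false_negative_rate": false_negatives / total if total else 0.0,
    }

runs = [
    TestRun("checkout_smoke", failed=True, real_defect=False),    # flaky environment
    TestRun("login_regression", failed=False, real_defect=True),  # bug slipped through
    TestRun("search_api", failed=False, real_defect=False),
    TestRun("export_csv", failed=True, real_defect=True),
]
print(false_result_rates(runs))  # {'false_positive_rate': 0.25, 'false_negative_rate': 0.25}
```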
