Hey guys,
for the past few days I've been working on a report for management that should present the test coverage of our services. We want to know where we can improve our testing/quality and how good we are right now.
I created a report with three levels of KPIs:
- KPI Development: the code coverage of a service. (But mostly it is either set at 80% or missing entirely.)
- KPI QA: how many tests we have (manual/automated) and which feature they belong to
- KPI Jira: how many tickets were resolved in the last month, how many of them are tested, and how many bugs were created this month
I think the third KPI is fine: we can see whether there is a big difference or none, and from the created bugs we know which test cases to improve or implement.
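To make the KPI Jira idea concrete, here is a minimal sketch of how that calculation could look. The ticket data, field names, and project keys are hypothetical; a real report would pull them from a Jira export or the REST API instead of hard-coded lists.

```python
# Hypothetical ticket data; in practice this would come from Jira.
tickets_resolved_last_month = [
    {"key": "PROJ-101", "tested": True},
    {"key": "PROJ-102", "tested": False},
    {"key": "PROJ-103", "tested": True},
]
bugs_created_this_month = ["PROJ-110", "PROJ-111"]

# KPI Jira: resolved tickets, how many of them are tested,
# and how many bugs were created in the same period.
resolved = len(tickets_resolved_last_month)
tested = sum(1 for t in tickets_resolved_last_month if t["tested"])
bugs = len(bugs_created_this_month)

print(f"Resolved: {resolved}, tested: {tested} ({tested / resolved:.0%})")
print(f"Bugs created: {bugs} (diff resolved - bugs: {resolved - bugs})")
```

The "diff" at the end is the gap mentioned above: if far more bugs are created than tickets are resolved, that points at the areas whose test cases need work.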
However, the other two KPIs are quite disappointing to me.
For the KPI Development it boils down to "Yay, I see that some services have no code coverage at all" → OK, we can improve those services.
For the KPI QA, I think we all know that we can't reach 100% test coverage with manual or automated test cases (we try our best, but sometimes the conditions are too bad to know everything). Sorting tests into sets for different features is really useful, but the result depends on who created the test set: one person likes it detailed, like "Module: filter a column", while another prefers something more generic, like "Module: columns".
So, long story short:
- Do you have test coverage reports?
- How do you define them? Do you also have different levels for the development and testing teams, or do you only look at the testing team?
- Do you have any tips I can try?
I hope you could follow my description.
I’m very excited about your replies!
Best
Melinda