Test Coverage Metrics?

Hi everyone,
I’m currently the only QA at my company, and management is asking for metrics on test coverage. They want to know how much of the product is covered by manual and automated tests, i.e. what X% of the product is covered overall and what X% of it has tests. They want us to reach 100% test coverage eventually, so I need to figure out where we are right now.

The trouble is, I’m not sure of the best way to work out what this X% would be. I’m interested to hear whether anyone else has had to provide information like this, how you went about gathering these stats, and how you decided at what point 100% had been reached.


Hi Tiem,

Thanks for asking here on The Club.

A couple of talks came to mind.


The Only Good Quality Metric is Morale with Jenny Bramble

Takeaways:

  • Pitfalls of commonly used metrics: number of bugs found, production defects, time to resolution…
  • Morale as a meaningful metric and increasing morale to increase quality.

How (Not) to Measure Quality with Michael Kutz

Takeaways:

  • Identify different approaches on how (not) to measure quality
  • Assess commonly used quality metrics against different purposes
  • Be aware of possible side effects of measurements
  • Understand how metrics can be combined to even out each other’s weaknesses

Hi Tiem,

The first thing to realize is that you do not need to be accurate to the nearest percent; a question like this from your employers is a benchmark at this stage, and rightly so. If you are uncertain, think about the product and about how confident you are that it is wholly testable at this stage.

Is there a regression test suite that you can use for this, and if so, does it only contain tests that you can actually run? A good practice I use is to add test cases for parts that cannot currently be tested, with the aim of eventually testing them and thus increasing coverage going forward.
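One way to make those placeholder cases visible in practice, as a minimal sketch assuming a pytest suite (the test names and the login helper are invented for illustration):

```python
import pytest

def login(user: str, password: str) -> bool:
    # Stand-in for the real system under test; purely hypothetical.
    return user == "admin" and password == "s3cret"

def test_login_with_valid_credentials():
    # An executable regression test: counts toward coverage today.
    assert login("admin", "s3cret")

@pytest.mark.skip(reason="payment sandbox not available yet; known coverage gap")
def test_refund_reaches_customer_account():
    # Placeholder for a part of the product we cannot test yet. It keeps
    # the gap visible in every run, and measured coverage rises once the
    # blocker is removed and this skip marker is deleted.
    ...
```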

So don’t stress: if you think it’s probably 50%, then tell them it is 50%, but make sure you can measure it going forward.

Hope this helps

Alex


You need to stamp on this immediately. Your management are asking a stupid question to which there is no answer, and frankly they should not be running the business if they are so ill informed. I always refuse to answer such questions, and you should too. Explain your reasons to them so it doesn’t look like you are just being difficult.

People including Cem Kaner, James Bach and Michael Bolton have been writing about this for upwards of 30 years, so there is plenty of material if you look for it. Stay well away from anything written by Capers Jones or ISTQB.

Testing is infinite, so X% is always zero, no matter how much testing you have done, and it’s idiotic to talk in terms of reaching 100%. If your management think that 100% coverage means writing a test case for every documented requirement, they have so much to learn I don’t know where to begin. In any case, the documented requirements are only a tiny fraction of the actual requirements, which can never be fully represented in any form because they too are infinite.

Quality (of which test coverage is only one aspect) is multi-faceted and cannot be reduced to a number. Any number you come up with has no mathematical validity. If you have managers who want to manage by numbers (and I’ve worked for a few, albeit not for long), you need better managers. For your own sanity, do not give in to this nonsense.


This can be challenging as you need to get everyone on the same page first.

The easy way is if you have very clear acceptance tests defined: 100 acceptance tests, all pass, and voilà, 100 percent.

In reality this is just the very basics of what your test coverage is going to be; that 100% may actually be only 10% of what you really want your testing to cover.

However, as soon as someone talks about percentages, this is often what they are looking for, but you will need that narrow definition of what 100 percent means from them; it’s the only way it works. They may even go a step further and suggest the product is bug-free because those tests pass.

Most testers (myself included) will balk at this, but for numbers-based managers it can work.

You should, though, let them know its downsides; the sketch below shows both the calculation and what it hides.
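As a worked example of that narrow definition, a minimal sketch (the test names and results are made up):

```python
# "Coverage" under the narrow definition: the percentage of defined
# acceptance tests that pass. Note what it hides: it says nothing about
# whether these tests describe the whole product.
results = {
    "login succeeds with valid credentials": True,
    "password reset email is sent": True,
    "checkout rejects an expired card": False,
}

passed = sum(results.values())
print(f"{passed}/{len(results)} acceptance tests pass "
      f"-> {100 * passed / len(results):.0f}% 'coverage'")
# -> 2/3 acceptance tests pass -> 67% 'coverage'
```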

So what is an alternative?
I sometimes use a combination of the above, plus feature and risk coverage. Feature can at times be replaced with a view or a user flow if suitable.

Feature and risk will often require a depth of coverage: for example, basic, medium, or deep test coverage of each. Some people colour-code the reports on these so they can move to more qualitative than quantitative communication of coverage.

Feature lists are usually straightforward; risk lists take a bit more discussion.

So you may have:
Defined Acceptance tests - 80% automated, 10% hands-on
Feature lists with basic, medium or deep indicators
Risks similarly though often comments make sense here

Example:
Security risks.
5 of the OWASP Top 10 risks investigated to a reasonable level; recommend a deeper dive into the top two risks.
Introduced automated system monitoring for two of the risks.

Accessibility risk. Coverage limited to what a Chrome add-on tool checks - recommended for automation.

You can see how that goes from quantitative basics to more qualitative the deeper and more valuable your testing gets.
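If it helps to capture such a report in a machine-readable form, here is one possible shape for it as a sketch; the structure and field names are my own, not any standard:

```python
# A report mixing one quantitative figure (acceptance tests) with
# qualitative depth indicators per feature and free-text notes per risk.
report = {
    "acceptance_tests": {"automated": 0.80, "hands_on": 0.10},
    "features": {
        "login": "deep",
        "checkout": "medium",
        "recommendations": "basic",
    },
    "risks": {
        "security": "5 of the OWASP Top 10 investigated; deeper dive "
                    "recommended on the top two; 2 now monitored.",
        "accessibility": "Chrome add-on tool checks only; "
                         "candidate for automation.",
    },
}

for feature, depth in report["features"].items():
    print(f"{feature:<16} {depth}")
```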

Find out what they are looking for. If it’s the basics, then documented acceptance tests can simplify that, but make sure they are absolutely aware that this is just the basics; if they want deeper coverage, then 100% will not exist, but you can have reasonable indicators of coverage.

I suggest that you try to have a conversation with management about this. Why do they want to measure test coverage? Why do they want a metric to have a particular value? If you dig deep enough, they probably don’t care about test coverage percentages, but instead care about other things, and they think that test coverage percentage is a good way to get that. It’s probably not, but they don’t realise it now and you might be able to help with that.

They might care about customer happiness, speed of delivery or something else. If it’s customer happiness, then they need to realise that not every line of code / function / code module contributes the same amount to customer happiness, or puts that happiness at risk by the same amount.

I assume that your system has some kind of login stage. Imagine how unhappy your customers would be if they couldn’t log in, or if someone else could log in to their account. Compare this to the unhappiness from e.g. the recommendation engine not working in an online shop or something like Netflix.

I realise that this is: a) a fair amount of work for you and management, b) possibly tricky as it could be interpreted as being uncooperative when actually you’re trying to be more helpful, but I suggest that you try to get from management / product management a summary of what they think are the most valuable parts of the system, and which parts of the system put customer happiness at risk. Concentrate on those with testing.

Having 100% coverage of the recommendation engine but 0% coverage of security might seem like you have 50% coverage overall, but in terms of customer value/risk I would put the percentage much lower than 50%.
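To make that arithmetic concrete, a minimal sketch of risk-weighted coverage (the areas, figures, and weights are invented for illustration):

```python
# Weighting coverage by customer value/risk instead of averaging areas
# equally. An unweighted average of these two areas would read 50%.
areas = {
    # area: (coverage achieved, weight = share of customer value/risk)
    "recommendation engine": (1.00, 0.2),
    "security / login":      (0.00, 0.8),
}

naive = sum(cov for cov, _ in areas.values()) / len(areas)
weighted = sum(cov * weight for cov, weight in areas.values())
print(f"naive average: {naive:.0%}")      # -> 50%
print(f"risk-weighted: {weighted:.0%}")   # -> 20%
```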

Easy: quit and find a new company to work for.

Educate your company members / managers about metrics.

I read a paper on metrics by Dr. Cem Kaner and found it a good reference point; it is worth looking up.


There are three levels of test coverage.

The first is white-box coverage: line coverage from unit tests.

The second is grey-box coverage: API test coverage = number of APIs covered by tests / total number of APIs in the system.

The third is black-box coverage: requirement test coverage = number of requirements covered by tests / total number of requirements. The requirements could also be features or stories.
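As a sketch of those ratios in code (the counts are invented; line coverage itself normally comes from a tool such as coverage.py or JaCoCo rather than being computed by hand):

```python
def coverage_pct(covered: int, total: int) -> float:
    # The same ratio underlies all three levels.
    return 100 * covered / total

# White box: lines executed by unit tests (tool-reported in practice).
print(f"line coverage:        {coverage_pct(4200, 6000):.1f}%")
# Grey box: APIs exercised by tests / total APIs in the system.
print(f"API coverage:         {coverage_pct(45, 60):.1f}%")
# Black box: requirements (features, stories) with tests / total.
print(f"requirement coverage: {coverage_pct(120, 150):.1f}%")
```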