Automation - demonstrating functional coverage


When it comes to your automation, how are you demonstrating the functional coverage of the application (perhaps in terms of planned regression tests, or the areas of the application tested to some degree by the automation)? For example, I use TFS as our test suite repository and ensure that each automated test is in a Test Suite first (for a few reasons, but one is demonstration, and another is so that someone could run it manually if they wished to).

If you don’t worry about demonstrating this, what is the reason?

Demonstrating coverage is a tricky thing. I’m not familiar with “functional coverage”, but to me the two main approaches seem to be requirements coverage, which says something like “this requirement has at least one test on it”, or code coverage, which typically says “this line of code has been executed during at least one test”.

Let’s start with what I think you want to say in most cases, and why these two don’t say it. Normally you would like to say: “Everything that is important enough has been covered sufficiently with tests.” Code coverage fails on both counts: on “important enough”, since you typically have no means to express the importance of a line of code, and on “sufficiently”, since you do not say which data variations are meaningful. Requirements coverage at least captures importance, but the “sufficiently” part is normally still missing. If you have both, then you have a chance of making the statement. (I’ve never worked at a place that had both.)

After that you have an aggregation problem when reporting or demonstrating. For instance, if you report 50%, you have aggregated all that juicy information into a single number that prevents you from making the statement.
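To make the aggregation problem concrete, here is a small sketch; the requirement names and test counts are invented for illustration. The single percentage looks reassuring, while the detail it discards is exactly what you would act on.

```python
# Hypothetical example: aggregating per-requirement coverage into one
# number hides the information you actually need to act on.
requirements = {
    "deposit money": 2,   # number of tests touching this requirement
    "withdraw money": 3,
    "transfer money": 0,  # important, but untested
    "view balance": 1,
}

covered = [r for r, tests in requirements.items() if tests > 0]
percentage = 100 * len(covered) // len(requirements)

print(f"Aggregated coverage: {percentage}%")  # 75% - but of what, exactly?

# The actionable detail, lost in the aggregate:
uncovered = [r for r, tests in requirements.items() if tests == 0]
print("Requirements with no tests:", uncovered)
```

The 75% on its own cannot tell a stakeholder that transferring money, possibly the riskiest feature, has no tests at all.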

As for when you do not have to demonstrate it, for me the main reason typically lies in the applicability of the information provided. For instance, if you provide the number and no one looks at it, or looks at it but takes no action, the information is useless. Another reason is that you will typically get the information after testing anyway, in the form of bugs reported in production. That will at least tell you that there are some things the testing does not cover; if those are severe enough, your coverage is too low. A last note: I have been at places which valued these things very highly, but all I’ve seen them used for is covering your own back, not increasing quality. As in: “it’s not my fault, because my coverage is 80% and that was the specified rule”.

So what can you do about it? The most useful information, in my experience, comes when you and your stakeholders can agree on some kind of action you may want to take. For example: “We need to test more variants of the deposit money function. Today we have 2 variants; let’s increase that to 5.” Thus increasing the coverage from 2 to 5.
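A minimal sketch of that “2 variants to 5 variants” idea, using a toy deposit function; the function, the variants, and their expected values are all invented for illustration:

```python
# Hypothetical toy implementation, for illustration only.
def deposit(balance, amount):
    """Reject non-positive amounts, otherwise add the amount to the balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return balance + amount

# Each tuple is one variant: (starting balance, amount, expected balance).
variants = [
    (0, 10, 10),            # original variant 1: deposit into an empty account
    (100, 50, 150),         # original variant 2: an ordinary deposit
    (0, 1, 1),              # new: smallest whole amount
    (10**9, 1, 10**9 + 1),  # new: very large starting balance
    (50, 50, 100),          # new: deposit equal to the current balance
]

for balance, amount, expected in variants:
    assert deposit(balance, amount) == expected

print(f"deposit money: {len(variants)} variants covered")
```

The count of variants per function is a coverage number a stakeholder can actually reason about, unlike an aggregated percentage.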

Good luck.

Thanks for the reply. “Functional coverage” may have been poor terminology. To boil this down a bit, there are two scenarios here (I think), and something in the middle.

  1. I just have automation, and outside the automation team no one would really know what tests it covers - it runs and we trust it is doing its job. From the outside, someone couldn’t review it and judge whether they’re happy; they’d need to get into the code to see.
  2. We have a functional map of the application, with tests or notes stating what is covered by the automated tests - something someone like a PO could read, comment on, approve, etc. This could also mean some form of requirements coverage (I just chose functional areas of the application rather than requirements - e.g. “user registration and login is sufficiently covered”).
  3. Somewhere in the middle, i.e. some tests which can be reviewed outside of the code, and some which aren’t documented but are in the framework.

So, the question is: do people generally go with option 1, option 2, or something in the middle?

I am currently using option 2, so that we can easily review the suite and look for potential gaps in the automation - tests are broken into folders aligning with the different functions, pages, etc. of the web app, so we can easily see where there are gaps. In this process, no test should be automated without having a reference in the test case repository.
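The folder-alignment idea above can be sketched as a simple set difference: compare the app’s functional areas against the folders the automated tests live in, and report the areas with no tests at all. All the names below are invented for illustration.

```python
# Hypothetical gap check: functional areas vs. test folders.
# In practice the folder names might be listed from the test repository.
functional_areas = {"registration", "login", "search", "checkout"}
test_folders = {"registration", "login", "search"}

gaps = sorted(functional_areas - test_folders)
print("Functional areas with no automated tests:", gaps)  # ['checkout']
```

A listing like this is something a non-technical reviewer can comment on without ever opening the test code.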

Overall, I’m wondering whether this is a pretty standard approach (or not), and what works best.

I’m trying to avoid things like the following, which are easy traps to fall into:

  1. Automating low-value, low-importance tests just because we can (especially UI tests, which a lot of ours are).
  2. Testers saying “we’re automating xxxx tests” without this really meaning anything - a number alone says nothing about the relevance/importance of the tests, and therefore about the assurance the automated tests can offer.

We use Cucumber for our automated tests, so they’re running off human-readable feature files written in Gherkin syntax. It should then be possible for someone non-technical to view these and get an idea of what the test coverage actually is.
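For anyone unfamiliar with the format, a minimal feature file might look like this; the feature and steps are invented for illustration:

```gherkin
Feature: User login
  As a registered user
  I want to log in with my email and password
  So that I can access my account

  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I enter a valid email and password
    And I click the "Log in" button
    Then I should see my account dashboard
```

Each `Given`/`When`/`Then` step is bound to automation code behind the scenes, but the file itself reads as plain English.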


Gherkin is a very artificial language; we do not speak this way in real life. I recommend using either Concordion or Gauge, so you can use normal English for automation.
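For comparison, a Gauge specification is plain Markdown, with headings for scenarios and bullet points for steps; the spec below is invented for illustration:

```markdown
# Deposit money

## Deposit into an empty account

* Sign in as "alice"
* Deposit "10" dollars
* The balance should be "10" dollars
```

Each bullet is bound to a step implementation in code, but the sentence itself can be written however the team naturally speaks.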

The purpose of Gherkin, though, is that non-technical colleagues can pick the feature files up and read them to understand the coverage. Some of our user story ACs are written in Gherkin syntax by business analysts.

Personally, I find it clearer than the alternatives you have mentioned. :thinking:

Technically speaking, you can’t claim complete functional coverage, because your software can act in ways that:

1 - You know that you know;
2 - You know that you don’t know;
3 - You don’t know that you know;
4 - You don’t know that you don’t know.

At any given moment, you can only speak of (1), and talk about unknown risks in (2). (3) and (4) will only be considered after learning - at some future moment.

Tools such as Gherkin or ACs only talk of (1).