About the test coverage

We have written Selenium automation for our web application. If we want to get the test coverage of that automation, which tool can we use? The tool has to check the source code while our automation is running and identify the methods that are not executed.
Any suggestions on this?

4 Likes

I remember at my previous company the developers used JaCoCo. It’s a Java-based tool for measuring code coverage, it works pretty well with Maven and can generate visually nice-looking HTML reports.
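
If you ever need to produce one of those HTML reports outside of the Maven build, here is a rough, untested sketch of doing it programmatically with JaCoCo’s report API (org.jacoco.report); all the paths and the bundle name are placeholders you would have to adapt:

```java
import java.io.File;
import java.io.IOException;

import org.jacoco.core.analysis.Analyzer;
import org.jacoco.core.analysis.CoverageBuilder;
import org.jacoco.core.analysis.IBundleCoverage;
import org.jacoco.core.tools.ExecFileLoader;
import org.jacoco.report.DirectorySourceFileLocator;
import org.jacoco.report.FileMultiReportOutput;
import org.jacoco.report.IReportVisitor;
import org.jacoco.report.html.HTMLFormatter;

public class HtmlReportSketch {

    public static void main(String[] args) throws IOException {
        // Placeholder locations - adjust to your own project layout.
        File execFile = new File("jacoco.exec");        // execution data written by JaCoCo
        File classesDir = new File("build/classes");    // compiled .class files of the application
        File sourcesDir = new File("src/main/java");    // sources, for line-level highlighting
        File reportDir = new File("coverage-report");   // where the HTML report will be written

        // Load the recorded execution data.
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(execFile);

        // Match the execution data against the compiled classes.
        CoverageBuilder coverageBuilder = new CoverageBuilder();
        Analyzer analyzer = new Analyzer(loader.getExecutionDataStore(), coverageBuilder);
        analyzer.analyzeAll(classesDir);
        IBundleCoverage bundle = coverageBuilder.getBundle("my-application");

        // Render the HTML report.
        HTMLFormatter htmlFormatter = new HTMLFormatter();
        IReportVisitor visitor = htmlFormatter.createVisitor(new FileMultiReportOutput(reportDir));
        visitor.visitInfo(loader.getSessionInfoStore().getInfos(),
                loader.getExecutionDataStore().getContents());
        visitor.visitBundle(bundle, new DirectorySourceFileLocator(sourcesDir, "utf-8", 4));
        visitor.visitEnd();
    }
}
```

In a normal Maven build you wouldn’t need any of that; the jacoco-maven-plugin’s report goal does essentially the same thing for you.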

@mirza yes, we tried that. I assume we have to add this to our web application and then run our automation, is that right?

This is added at run time, not when building the project.

1 Like

Can you please provide a bit more context?

  • Which programming language is the web application written in?
  • Which code coverage are you interested in? The coverage of your JavaScript code running in the browser? The coverage of your code that creates the web UI?
    If you’re interested in the coverage of the code which creates the web UI, I would assume that the tools used to measure the test coverage of your unit tests can be used to measure the test coverage of your Selenium tests as well.
2 Likes

Our web application is written in Kotlin and runs as a container. We automate the functional flows using Selenium with Java. The two are in different repos.

When we run the automation against our web application, we have to find the test coverage of the web application.

In that case I agree with what @mirza said: according to a quick Google search, JaCoCo should be able to measure the coverage of Kotlin projects.
I suggest looking into the “agent” mechanism of JaCoCo (JaCoCo - Java Agent); that should allow you to start your web application in a way that code coverage is measured right away. Then you can run your tests, stop the web application (which causes the coverage file to be written) and parse it afterwards. The one difficulty I can see is that you have to make sure that the coverage file is written to a location outside the container so that you can still access it after the container shuts down.
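
For what it’s worth, the agent is usually attached with a JVM option when the application starts, something along the lines of -javaagent:/path/to/jacocoagent.jar=destfile=/coverage/jacoco.exec (the exact path and options depend on your setup). And for the “parse it afterwards” step, here is a rough, untested sketch that uses JaCoCo’s org.jacoco.core API to list the methods that were never executed during the Selenium run; the paths are placeholders and you’d need the org.jacoco:org.jacoco.core dependency on the classpath:

```java
import java.io.File;
import java.io.IOException;

import org.jacoco.core.analysis.Analyzer;
import org.jacoco.core.analysis.CoverageBuilder;
import org.jacoco.core.analysis.IClassCoverage;
import org.jacoco.core.analysis.IMethodCoverage;
import org.jacoco.core.tools.ExecFileLoader;

public class UncoveredMethods {

    public static void main(String[] args) throws IOException {
        // Placeholder paths: wherever the agent wrote its file and
        // wherever the compiled classes of the web application live.
        File execFile = new File("/coverage/jacoco.exec");
        File classesDir = new File("build/classes");

        // Load the execution data recorded during the Selenium run.
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(execFile);

        // Match the recorded data against the compiled class files.
        CoverageBuilder coverageBuilder = new CoverageBuilder();
        Analyzer analyzer = new Analyzer(loader.getExecutionDataStore(), coverageBuilder);
        analyzer.analyzeAll(classesDir);

        // Print every method that was never executed while the automation ran.
        for (IClassCoverage classCoverage : coverageBuilder.getClasses()) {
            for (IMethodCoverage methodCoverage : classCoverage.getMethods()) {
                if (methodCoverage.getMethodCounter().getCoveredCount() == 0) {
                    System.out.println("Not executed: "
                            + classCoverage.getName() + "." + methodCoverage.getName());
                }
            }
        }
    }
}
```

The same execution data can of course also be turned into an HTML report with JaCoCo’s report tooling if a plain list isn’t enough.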

1 Like

Yes, we couldn’t find proper documentation or guidelines for JaCoCo, but anyway we are trying this option.

1 Like

You could maybe misuse Mutation testing for this?
I think @louisegibbs did a talk about this.

I can’t find the presentation itself or the video, @louisegibbs help? :stuck_out_tongue:

2 Likes

Thanks for the information, but no, we have to find the test coverage based on our automated test cases.

Test coverage = (number of test cases automated / number of possible test cases) * 100

I have no idea how measuring the amount of product code that’s covered by your Selenium tests will help you to answer that question.
Are you sure that this is really what you have to measure?

As pretty much all testing literature will tell you, “Number of possible test cases” is infinite. :face_with_raised_eyebrow:

I’m not an expert in containers or JaCoCo but if you can provide a bit more information about where you’re stuck I’d be happy to try to help.

1 Like

That would be the percentage of explicit test cases you’ve turned into a pseudo-replica using a tool; it won’t represent coverage in a meaningful way, either in terms of actual test coverage or in terms of the percentage of cases that you’ve automated.

Mr Bolton put this better than I could:

In particular, “percentage of test cases that have been automated” is a seriously empty kind of measurement. A “test case that has been automated” could refer to a single function call and a single assertion, or to dozens of function calls with thousands of assertions. An “automated test case” could refer to a simple unit check or a set of checks in a set of complex transactions representing some workflow. “83% of our examples are being checked by machinery; the rest are being done manually” begs all kinds of questions about what’s in the examples. It also ignores the fact that human interaction with and observation of the product is profoundly different from an automated check of some output.
(Talking About Coverage – DevelopSense)

2 Likes

What is this? And where is the * 100 coming from?

Yes, the number of possible test cases can be infinite, but if we can at least capture which parts of our web application source code were covered/executed while our automation was running, from that we can work out whether our automation touches all the methods in the source code.

The * 100 is just there to express the ratio as a percentage (for example, 40 automated test cases out of 200 possible ones would give 20%).

We have to instrument our web application and then run the automation. We are stuck on the instrumentation step.

Yes, accepted. But then how can we measure the progress of our automation?

In my opinion, code coverage and test coverage are two different things, and I’m not sure why you’re treating them as the same in this context.

Code coverage: it comes from unit tests that exercise the implemented functions. If a test executes a function, that function counts towards the coverage. Think of it like pouring water into a pipe with multiple branches and seeing which branches the water can reach. (Tools such as Jest, Istanbul or SonarQube do this.)
Even 100% code coverage does not guarantee that the code is bug-free. (And, as others have mentioned, you can add mutation testing if you want to make sure your tests actually catch faults for some inputs.)
Ref: Google Testing Blog: Code Coverage Best Practices
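
To make that concrete, here is a tiny hypothetical Java/JUnit 5 sketch (the Discount class and its test are made up): a coverage tool such as JaCoCo would count the first branch as covered and report the other one as missed, because the “water” only flows down one of the pipes.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical production code with two branches.
class Discount {
    static int priceFor(int basePriceCents, boolean isMember) {
        if (isMember) {
            return basePriceCents * 90 / 100; // executed by the test below -> covered
        }
        return basePriceCents;                // never executed by any test -> missed
    }
}

// A single unit test that only exercises the isMember == true path.
class DiscountTest {
    @Test
    void memberGetsTenPercentOff() {
        assertEquals(9000, Discount.priceFor(10000, true));
    }
}
```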

Test coverage: it’s based on the business requirements plus the tests that QA has designed, and on how many of those tests have been conducted.

I hope it helps.

2 Likes

So we don’t have a tool to capture that?

In an exacting or useful way, you cannot. Testing cannot be automated; it’s a fundamental philosophical principle of epistemology. A responsible tester should understand the nature and limitations of their tools, and there’s a very long conversation to be had about ways to help with that for automation tools, like including a description of purpose, having good naming conventions, including review dates for maintenance and so on.

Automation will be part of an overall coverage solution, which in turn will be based on numerous contextual factors of your business, employees, clients, tech stack, in-house knowledge, known risk factors, areas of software complexity and so on, and the coverage will include checking tools like automation as part of an overall picture of coverage with respect to some models that fit into that usage.

As it’s fundamentally impossible to translate a human-powered written check (test case) into a code-powered check (automation) without loss, you’d have to go back to the concept of coverage from scratch and determine what your testing is actually doing and how you cover the holes.

It’s also incorrect to say that coverage has a percentage, except in very particular circumstances (and usually not valuable ones). Coverage must be with respect to some model of the system whose factors multiply up to infinity, and then humans have to make the decision about which of those factors are important based on their understanding of contextual information that determines their value. Testing then becomes a sampling and decision process, not one that can achieve 100% in principle.

One way to consider coverage might be to look at what you presume your original written cases are doing, automate the checks, and assign good testers to probe for the unwritten test cases that constantly move with code changes, environmental factors, product risks and all the other stuff that makes software development so nimble and exciting. It’s a convenient lie to say that written checks provide worthwhile coverage alone, and another to say they’re fungible to automation checks as if they can be converted without loss. However, you could deformalise your cases into less brittle forms, leverage the skills of your testers, and in turn leverage your automation to perform the checks you’ve determined are suitable based on the needs of the testing strategy: repetitive, rarely changing, specific, highly mathematical, brute forcing, the usual candidates.

Coverage is a large and complex topic, but I hope I’ve highlighted some of the issues in a useful way.

3 Likes