Help Required: visualise overall test coverage and establish traceability

Context: for a project with a BDD Selenium framework and user stories in Jira, how do we gain confidence in our testing through effective test coverage and requirement traceability?

Requirements:

  1. We need to visualise overall test coverage, i.e. all test scenarios/test cases across every area of the project.
  2. We need confidence in our testing, so we need to visualise the completeness of test coverage by associating each requirement with its test scenarios.

Help Required
Question 1: what tool and process can be used to visualise test coverage?
Question 2: what tool and process can be used to establish traceability?


Hello @asha and welcome to MoT.

To address your requirements for visualizing test coverage and establishing traceability between requirements and test scenarios, you can use various tools and processes. Here are some suggestions:

Question 1: Visualizing Test Coverage

Tool: Test Management Tools
Process: Use a test management tool such as TestRail, Zephyr, or Xray. These tools allow you to create test plans, organize test cases, and track test execution. They also provide features to visualize test coverage across different areas of your project.

Process: BDD Cucumber Reports
If you’re using a BDD framework like Cucumber, you can generate reports that provide insights into test coverage. These reports often include visual representations such as pie charts or graphs showing the percentage of scenarios covered.
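For example, here is a minimal sketch of how those reports can be switched on, assuming a Cucumber JVM setup with a JUnit 4 runner (the paths and class name are illustrative, not from the original question):

```java
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// Illustrative runner: every execution writes an HTML overview plus a JSON report
// that other tools (Xray, custom coverage matrices) can consume.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "com.example.steps",
        plugin = {
                "html:target/cucumber-report.html",
                "json:target/cucumber.json"
        }
)
public class RegressionRunner { }
```

The JSON output in particular is worth keeping, since most downstream reporting and traceability tooling can read it.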

Process: Code Coverage Tools
Integrate code coverage tools like JaCoCo (for Java), Istanbul (for JavaScript), or coverage.py (for Python) into your automation framework. These tools analyze which parts of your code are exercised by your tests, giving you an indication of your test coverage at the code level.
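As a rough illustration for a Maven-based Java framework (the plugin version is an assumption; adjust it to your own build), JaCoCo is usually wired in like this:

```xml
<!-- Sketch of a jacoco-maven-plugin entry in pom.xml; version is an assumption. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal> <!-- attaches the coverage agent to the test JVM -->
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals>
        <goal>report</goal>        <!-- writes target/site/jacoco/index.html -->
      </goals>
    </execution>
  </executions>
</plugin>
```

One caveat for UI-driven Selenium suites: the agent measures the JVM it is attached to, so to see coverage of the application itself you would attach it to the application under test rather than (or as well as) the test runner.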

Question 2: Establishing Traceability

Tool: Requirements Management Tools
Use requirements management tools like Jama Connect, IBM Engineering Requirements Management DOORS, or Confluence (with appropriate plugins) to manage and track your requirements. These tools often provide features to establish traceability between requirements and test cases.

Tool: JIRA Integration
If you’re already using JIRA for managing user stories, ensure that your test management tool integrates with JIRA. This integration allows you to link test cases directly to user stories or requirements in JIRA, establishing traceability between them.

Process: Test Case ID Convention
Establish a consistent naming convention for your test cases, including the user story or requirement ID in the test case name. This makes it easy to trace back from a test case to its associated requirement, even if you’re not using a dedicated tool for requirements management.
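In a BDD setup the same convention can live in tags rather than names: tag each scenario with its story key and you can also run or report per requirement. A minimal sketch, assuming Cucumber JVM with JUnit 4 (recent versions accept a single tag-expression string) and a made-up key PROJ-101:

```java
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// Runs only scenarios tagged with the (hypothetical) Jira story key PROJ-101,
// e.g. a feature file containing:
//   @PROJ-101
//   Scenario: Customer can reset a forgotten password
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        tags = "@PROJ-101"
)
public class Proj101TraceabilityRunner { }
```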

Process: Requirement Coverage Matrix
Create a requirement coverage matrix that maps each requirement to the corresponding test cases. This matrix can be a simple spreadsheet or a more sophisticated document generated by your test management tool.
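If scenarios already carry story keys as tags, the matrix can even be generated instead of maintained by hand. A rough sketch that reads the standard Cucumber JSON report with Jackson (the file path and tag pattern are illustrative assumptions):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Builds a simple requirement -> scenario map from a Cucumber JSON report.
public class CoverageMatrix {
    public static void main(String[] args) throws Exception {
        JsonNode features = new ObjectMapper().readTree(new File("target/cucumber.json"));
        Map<String, List<String>> matrix = new TreeMap<>();

        for (JsonNode feature : features) {
            for (JsonNode scenario : feature.path("elements")) {
                for (JsonNode tag : scenario.path("tags")) {
                    String name = tag.path("name").asText();     // e.g. "@PROJ-101"
                    if (name.matches("@[A-Z]+-\\d+")) {          // illustrative Jira-key pattern
                        matrix.computeIfAbsent(name.substring(1), k -> new ArrayList<>())
                              .add(scenario.path("name").asText());
                    }
                }
            }
        }
        matrix.forEach((req, scenarios) -> System.out.println(req + " -> " + scenarios));
    }
}
```

Requirements that never appear in the output are your visible coverage gaps.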

By combining appropriate tools and processes, you can effectively visualize test coverage and establish traceability between requirements and test scenarios, thereby increasing confidence in your testing efforts.


For transparency, the answers above were generated with ChatGPT 3.5, but in earlier projects I used a mixture of the approaches it lists. The deeper you dig into the tools and plugins around Jira (and Confluence) and Xray/Zephyr, the clearer an idea you will get of what could help you.
In fact, before rolling this out to other testers, explore it within a sprint so it grows into a matrix or other valuable tool output that you can all benefit from.

Hope I (or AI) could help. :wink:


Hi @alexschnapper - I really appreciate your response to the asks, it helps :). With multiple Git repositories, each of which has multiple business areas further divided into multiple feature files, using BDD alone can be tricky for visualizing overall test coverage.

I wonder if the combination of Xray, a Selenium/Playwright BDD framework, and Jira user stories would be a good option/approach for building test coverage insights?

That is, BDD feature scenarios update Xray test results. In turn, these Xray tests, linked to Jira user stories, provide the overall test coverage.

Let’s say you have 10 feature files, and each feature file has an average of 5 scenarios. When these 50 scenarios are executed, their statuses (Pass/Fail/Blocked, etc.) should be updated in Xray. Subsequently, Xray reports can provide an overall test coverage view across multiple areas, along with traceability.
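To make the idea concrete, here is a very rough sketch of the kind of push I have in mind, using Java’s built-in HTTP client. The base URL and credentials are placeholders, and the exact import endpoint and authentication differ between Xray Server/DC and Xray Cloud, so it would need checking against the Xray import documentation:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.util.Base64;

// Pushes a Cucumber JSON report to Xray after a test run (URL and credentials are placeholders).
public class XrayResultUploader {
    public static void main(String[] args) throws Exception {
        String jiraBaseUrl = "https://jira.example.com";   // placeholder
        String auth = Base64.getEncoder()
                .encodeToString("user:api-token".getBytes(StandardCharsets.UTF_8)); // placeholder

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jiraBaseUrl + "/rest/raven/1.0/import/execution/cucumber"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("target/cucumber.json")))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The idea would be to wire this into CI so that every pipeline run refreshes the coverage and traceability views in Xray.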


Do you want to visualize test coverage, or do you aim to establish robust test coverage that aligns with your product and quality standards? Is your goal to gain confidence in your test coverage, i.e. knowing that you’ve sufficiently covered both documented and undocumented features, and both code and UI, with tests? Are you convinced that visualization alone can ensure the completeness of test coverage by linking requirements to test scenarios comprehensively?

Do you possess complete, detailed, and relevant requirements? Relying on associations with Jira stories might not secure comprehensive test coverage (if that is what you’re aiming for). If you’re seeking just a report to demonstrate that all existing requirements (there are often more features and functions to test than there are requirements and Jira stories) are associated with some test cases that kinda cover the requirements, then this question isn’t significantly important in the context of QA.

How well does your test coverage integrate with your dev process? Many teams adapt and evolve requirements in rapid cycles. Is your approach to test coverage suited to this, ensuring that as new features and changes are implemented, they are also covered by your testing strategies?

Have you considered non-functional aspects such as performance, security, and usability in your test coverage?

How do you ensure that the metrics used reflect the quality of testing? Metrics such as code coverage can sometimes be misleading if not paired with qualitative assessments of test effectiveness.

How much value does your QA process place on exploratory testing?


Sorry I didn’t answer your questions, but instead asked more. It’s just food for thought; maybe you need to shift your focus to broader matters. And sorry for this theoretical flood of ideas; you might not find it relevant since you’re looking to solve a specific task :slight_smile:

Alex (and the robot together) have actually answered the question correctly in my opinion, @asha. When I read your question, you used the word “need”, which in my mind signals that upper management needs “you” to do a thing. Alex correctly answers their requirements (because ChatGPT is obviously right too): buy a tool, deploy it, and pay a person to gather the data and do the analysis. Traceability is really a security concern, and for that half of the nut test coverage is a very good thing to have, but not the only tool. You need some static analysis tooling, and a focus on security testing, before trying to measure coverage. Keen to know what the security ask here is, though. Does traceability mean things like “do we use the XZ library at all”?

Konstantin is going the zero-budget route, which I am a big fan of, but it is still my plan B. Management are probably not going to want to spend more money and pay for Xray or something else when Cucumber is already a big time and cost centre for them. But they might have to. I see your core issue as being multiple repos, probably multiple owners, and one coverage analysis tool not coping with the complexity, which is your real test nightmare. Having a good talk with all the repo owners is probably the longer-term sustainable route to gaining confidence. Test coverage is merely a metric; confidence of release is the goal, not coverage. Customers do not give two figs what your test coverage is, and nor do your bosses, but ChatGPT may have told them coverage is a goal, while the goal IMHO is really release cadence and how many patches or hotfixes you do.

Good versus failed releases indicate the number of actual customer-impacting defect escapes, while speed of release indicates agility. Agility is what matters to a non-legacy product, not code coverage. Test coverage is irrelevant when your product is low performing or cannot respond to environmental changes.

I would focus on what the customer consumes, unless you are working in a regulated industry, in which case just buy yet another tool. This is probably a long-winded answer, but I personally have low faith in coverage. Why? We know that 80% of customers only hit 20% of the features and thus care about less than half of the lines of code that a coverage tool will tell you are all equally important. What’s going on there? And to boot, any analysis tool will still ignore security entirely, which is why faster, good releases must remain the goal for any product team, and there is no good way to visualise speed.

Oh, and welcome to the MoT community, I almost forgot to say that, mainly because you asked such a good and engaging question that I thought you had to be one of the regulars. I really hope the spread of ideas and opinions is something you can use, and I hope to hear more from you.


Thank you for these thought-provoking questions! You’re absolutely right that visualization is just one piece of the puzzle when it comes to achieving robust test coverage. While visualization can be helpful, we recognize that other elements of QA are equally important, i.e. release methodology, QA processes, types of testing (functional and non-functional), traceability, coverage (code + test), etc.

Additionally, here are some specific questions:

  • Do you have any recommendations for test management tools that integrate well with a BDD framework and Jira?
  • Are there any specific metrics or reporting tools that can provide a more holistic view of test coverage beyond just code coverage?

Thank you, Conrad, for the welcome and for investing time to review and share further insights. What do you mean by requirement traceability being a security concern?

Well, I was thinking more about traceability in general than just requirements: traceability of where code comes from, how regularly updates and patches are applied, and whether the sources are trustworthy. Linking requirements to test cases is a fun yet pointless exercise, mired in how requirements are tied to releases, are incremental, and never map onto architecture, components, nor whatever the tests use to model or structure themselves.

What might you hope to improve, quality-wise, by logging and tracking requirements over time? That is an interesting question, but it’s still probably working with an unquantified or unmeasurable thing. We could reframe “requirements” into more static things like business constraints, though, because they are often very similar. As a tester I’m always more keen to know about your business constraints than your requirements. As in: how well does your test coverage address product and business constraints?
