How does XRAY help you track QA metrics, or how is it useful in QA overall?

I have added the XRAY add-on to JIRA and created test cases via XRAY. I attach all the applicable test cases to every story I test, and I want to be able to track the number of test cases per sprint.
We did this at another company I worked for, and I want to bring the same format to my new company. I’m curious to know how others use XRAY.
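To make the tracking goal concrete, what I have in mind is something like the sketch below: counting XRAY “Test” issues attached to a sprint with plain JQL via the Jira Cloud REST search endpoint. The base URL, credentials, project key, and sprint name are all placeholders, and the “Test” issue type name may differ in your instance, so treat this as a rough illustration rather than a finished script.

```python
# Rough sketch (placeholder URL and credentials): count XRAY "Test" issues
# in a given sprint using the Jira Cloud REST search endpoint.
# XRAY stores tests as ordinary Jira issues, so plain JQL can count them.
import requests

JIRA_BASE = "https://yourcompany.atlassian.net"  # placeholder
AUTH = ("you@example.com", "api-token")          # placeholder email + API token

def count_tests_in_sprint(project_key: str, sprint_name: str) -> int:
    jql = f'project = {project_key} AND issuetype = Test AND sprint = "{sprint_name}"'
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # maxResults=0: we only need the total
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["total"]  # total number of matching issues

print(count_tests_in_sprint("PROJ", "Sprint 42"))
```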

I’m curious: suppose you do get set up to track the number of test cases per sprint. What will you use that information for? How is it useful? What does “improving” on such a metric look like? How could such a metric be gamed or abused?

My group does happen to use XRAY, but we’re far less focused on test cases than on doing good testing: testing is not test cases.

Thanks Caleb. I agree with you that the work has to be focused on doing good testing rather than the number of test cases. I’m curious what you use XRAY test cases for?
What kind of metrics do you use? Every company is different, and I want to bring the best practices to my company.

We use XRAY for our requirements-based testing, in conjunction with R4J (Requirements for Jira), so we’re tracing to requirements rather than stories. For the additional testing we do, though, I prefer something lighter weight: either “for the purpose” testing that’s performed in connection with a particular story but doesn’t necessarily need to be codified and executed repeatedly in the future, or things like test charters. We also maintain a fair number of automated checks.

As for metrics, I’m not a huge fan of them. For some larger projects I’ve been on, one that’s been somewhat useful is tracking our burndown of tracing “verification” tests to requirements (this is in the context of an FDA-regulated company and medical device software), mostly as an indicator of when we’d be “ready” to enter formal verification, but I suspect that wouldn’t be useful in a lot of other contexts.
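In case it helps, the number behind that burndown is just simple arithmetic over the trace links: the fraction of requirements with at least one verification test traced to them. Here’s an illustrative sketch; the dict stands in for whatever your tracing tool exports and is not a real R4J API.

```python
# Illustrative sketch of the "verification traceability" burndown number:
# the share of requirements that have at least one verification test
# traced to them. The dict below is made-up example data.

# requirement ID -> IDs of verification tests traced to it
trace_links = {
    "REQ-101": ["TEST-1", "TEST-7"],
    "REQ-102": [],          # not yet traced
    "REQ-103": ["TEST-3"],
}

traced = sum(1 for tests in trace_links.values() if tests)
coverage = traced / len(trace_links)
remaining = len(trace_links) - traced

print(f"{traced}/{len(trace_links)} requirements traced ({coverage:.0%}); {remaining} to go")
# -> 2/3 requirements traced (67%); 1 to go
```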

Other things I’ve tried for communicating “how is the testing going?” are the low-tech testing dashboard and product coverage outlines. What I like about both of these is that they go beyond just giving a number for “we created X test cases” or “we tested Y features” to tell more of a story of perceived risk, uncertainty, depth of coverage, our own confidence in our testing, and our opinion on the likelihood of additional bugs in a given area.

I also can’t pass up the opportunity to refer to Isabel Evans’s wonderful lightning talk on “How many tests?” when discussing this topic.

Also, here’s the Q&A thread from a panel discussion I was fortunate enough to participate in at a past TestBash, which might be helpful: Panel Discussion: The future of test cases.

Thank you @c32hedge for this helpful material.