We use Xray for our requirements-based testing, in conjunction with R4J (Requirements for Jira), so we're tracing to requirements rather than stories. But for the additional testing we do, I prefer something lighter weight: either "for the purpose" testing that's performed in connection with a particular story but doesn't necessarily need to be codified and executed repeatedly in the future, or things like test charters. We also maintain a fair number of automated checks.
As for metrics, I'm not a huge fan of them. On some larger projects I've been on, one that's been somewhat useful is tracking the burndown of tracing "verification" tests to requirements (this is in the context of an FDA-regulated company and medical device software), mostly as an indicator of when we'd be "ready" to enter formal verification. I suspect that wouldn't be useful in a lot of other contexts, though.
Other things I've tried for communicating "how is the testing going?" are the low-tech testing dashboard and product coverage outlines. What I like about both is that they go beyond just giving a number for "we created X test cases" or "we tested Y features" to tell more of a story: perceived risk, uncertainty, depth of coverage, our own confidence in our testing, and our opinion on the likelihood of additional bugs in a given area.
I also can't pass up the opportunity to recommend Isabel Evans's wonderful lightning talk on "How many tests?" when discussing this topic.
Also, here's the Q&A thread from a panel discussion I was fortunate enough to participate in at a past TestBash that might be helpful: Panel Discussion: The future of test cases.