QA Statistics to Show QA is doing its job

I need a way to get good statistics that show our testing is doing a good job. I have information on how many test cases we have, but how do I show upper management that they are good quality? How can I show in stats how solid our regression test bed is?

Thanks
Thomas

8 Likes

Some notes:

  • You could mark which tests are automated and which are done manually (and at what level, e.g. unit, API, UI).
  • You can mark which test cases have caught bugs/defects (and how often).
  • How many bugs are discovered at which stage of your SDLC (in the analysis phase, dev, test, business acceptance, or even in prod)?
  • You can maybe link test cases to acceptance criteria.
  • Show them test coverage.
  • Prioritization of your test cases.
  • How they are categorized and who tests what (functional, non-functional, …).
  • Perhaps you have dashboards in your management system; maybe there is something there that you can show as well?

You don't always have to report only the test cases; you can also talk to management about, and show them, the other processes you exercise to get those test cases, for example "doing peer reviews of test cases".

You can use several test cases in an exploratory test session.
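
If you do decide to collect some of these numbers (automated vs. manual, level, defects caught, links to acceptance criteria), a tiny roll-up script can turn an export from your test management tool into a dashboard-ready summary. This is only an illustrative sketch with made-up field names, not tied to any particular tool:

```python
from collections import Counter

# Hypothetical export from a test management tool: one dict per test case.
test_cases = [
    {"id": "TC-1", "automated": True,  "level": "unit", "defects_caught": 3, "linked_ac": True},
    {"id": "TC-2", "automated": False, "level": "ui",   "defects_caught": 0, "linked_ac": True},
    {"id": "TC-3", "automated": True,  "level": "api",  "defects_caught": 1, "linked_ac": False},
]

total = len(test_cases)
automated = sum(tc["automated"] for tc in test_cases)
by_level = Counter(tc["level"] for tc in test_cases)
caught_a_bug = sum(tc["defects_caught"] > 0 for tc in test_cases)
linked_to_ac = sum(tc["linked_ac"] for tc in test_cases)

print(f"Automated: {automated}/{total} ({automated / total:.0%})")
print(f"By level: {dict(by_level)}")
print(f"Cases that have caught at least one defect: {caught_a_bug}/{total}")
print(f"Cases linked to acceptance criteria: {linked_to_ac}/{total}")
```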

4 Likes
  • Maintaining dashboards helps a lot in test management, especially when they reflect good statistics. The Jira Xray plugin and Zephyr are good examples of such management systems.

  • Test execution results showing the health of a build also play a vital role in showing a tester's performance.

1 Like

I personally think that reporting on test cases is not valuable to any manager. Metrics like the number of test cases, how many are "automated", pass/fail ratios, etc. are meaningless if you really think about it.

Here's an example: I have 180 test cases, 90 are automated, 110 passed, 30 failed, and we "ran" 140 out of 180. What does this really tell you? The answer is not much…
You have no idea whether the product is shippable, you do not know what, how, and why I am testing, you have no idea how solid that automation is, you have no idea how long it's going to take to finish testing, and you have no idea about potential risks.

Instead, I focus on telling management a story: my story of how and why I test, and my assessment of the product. When I design tests and want to justify why I'm picking some tests over others, I tie them to risks.
Of course, checking adherence to requirements is also important, but requirements are not the only oracles. I like to present management with a list of my oracles and let them know how these help me.
I also talk about coverage - requirements, product, risk, etc.
All of this helps my managers understand how good my testing is and what the status of the product is.

Please see the articles below for details on the alternative; they've helped me a lot.
Blog: How is the testing going?
Article related to the 'Testing Story'

I also suggest reading "Breaking the Test Case Addiction" in its entirety. The second blog post above is part 10 of the series.

Hope this helps!

13 Likes

I'm liking what @alexm has said above: tell a story, using a list of oracles.
But I also want to pick up on using Jira to point out when a QA process improvement has saved us from shipping a bug or from being unable to recover quickly.

Here is a radical thought I'm just writing quickly on a very large napkin:

Not everyone is big enough to have bug management and test management tools that let you drill stats out in enough volume to start telling an unbiased story. You cannot tell quality based on the number of Jira tickets by severity, because bug severity is highly subjective and affected by the environment. The true, unbiased story of what quality looks like is the health of the product in the wild. Basically, it's called CRUD (customer-reported unique defects): a measure of the number of bugs QA did not find. Any bugs customers find that were already in the bug system before release "can" be subtracted from the CRUD number if you like. But failing to even try to measure it is failing to record the value QA delivers. Historically, the only unbiased way to gauge quality is to look at how many hotfixes go out per release. If you only release 4 times a year, you now know what you need to work towards. Maybe release frequency is the only metric?
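
If you want to put a number on that idea, here is a minimal sketch of the CRUD calculation, assuming you can export customer-reported defects and the pre-release bug backlog as lists of IDs (the IDs and export mechanism are invented and will differ per bug tracker):

```python
# Hypothetical defect IDs; in practice these would come from your bug tracker.
customer_reported = {"BUG-101", "BUG-107", "BUG-110", "BUG-115"}
known_before_release = {"BUG-107", "BUG-042"}  # already in the system pre-release

# CRUD: customer-reported unique defects that QA did not already know about.
crud = customer_reported - known_before_release
print(f"CRUD for this release: {len(crud)} ({sorted(crud)})")

# Hotfixes per release as a companion trend, per the idea above.
hotfixes_per_release = {"2023.1": 3, "2023.2": 1, "2023.3": 0}
for release, hotfixes in hotfixes_per_release.items():
    print(f"{release}: {hotfixes} hotfix(es)")
```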

Hi @tfritz1325

Look into the goals of upper management, and align your information accordingly. Is there a deadline? Is there a fixed scope - or a fixed budget?

Be careful about putting too much weight on the numbers. It's a trap to think they make testing more controllable by management. Would you manage the devs and the rest of the project with similar stats?

Perhaps looking into a team goal for the joint delivery would be preferable…

3 Likes

Hi @tfritz1325 ,

Great question, thanks for sharing.

Is the following thread on your radar? Ask Jenny a Question About: The Only Good Quality Metric is Morale

There are so many excellent questions asked there which I feel are relevant to your question here. And @jennydoesthings provides incredible insight with her answers - her knowledge and experience are fantastic.

Good luck.

4 Likes

There isn't an easy answer. Quality is context dependent, so it comes down to what is important to your organisation.

So if you are a commerce site, you might have an SLO around transactions: we have X transactions a month, and 1% fail because of bugs or downtime.
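
As a rough illustration (with invented numbers) of how such a transaction SLO could be checked each month:

```python
# Hypothetical monthly figures; real ones would come from monitoring/analytics.
transactions = 1_200_000
failed_due_to_bugs_or_downtime = 9_600
slo_max_failure_rate = 0.01  # at most 1% of transactions may fail

failure_rate = failed_due_to_bugs_or_downtime / transactions
print(f"Failure rate: {failure_rate:.2%} (SLO allows {slo_max_failure_rate:.0%})")
print("SLO met" if failure_rate <= slo_max_failure_rate else "SLO breached")
```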

We are also looking at DORA metrics:

  • Lead times
  • Mean time to recover
  • Release frequency
  • Change fail rates

Because there is evidence that improving those improves quality.
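
As an illustration, here is a rough sketch of how those four metrics could be derived from a list of deployment records. The field names and figures are assumptions; a real pipeline would pull this data from your CI/CD and incident tooling:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records exported from CI/CD and incident tooling.
deployments = [
    {"deployed": datetime(2024, 5, 1, 10), "committed": datetime(2024, 4, 30, 15),
     "failed": False, "recovered": None},
    {"deployed": datetime(2024, 5, 8, 9),  "committed": datetime(2024, 5, 7, 11),
     "failed": True,  "recovered": datetime(2024, 5, 8, 13)},
    {"deployed": datetime(2024, 5, 15, 14), "committed": datetime(2024, 5, 14, 16),
     "failed": False, "recovered": None},
]

period_days = 30
deployment_frequency = len(deployments) / period_days
lead_time_hours = mean((d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments)
failures = [d for d in deployments if d["failed"]]
change_fail_rate = len(failures) / len(deployments)
mttr_hours = mean((d["recovered"] - d["deployed"]).total_seconds() / 3600 for d in failures)

print(f"Release frequency: {deployment_frequency:.2f} per day")
print(f"Lead time for changes: {lead_time_hours:.1f} hours")
print(f"Change failure rate: {change_fail_rate:.0%}")
print(f"Mean time to recover: {mttr_hours:.1f} hours")
```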

I also want automated tests to be reliable, so I measure that. We might start collecting test coverage as informational, but I don't like it as a target. And counting tests has been shown to be a bad idea.

Other important things are incident rates and impact: customer-impacting bugs, particularly high-severity bugs.

I wouldn't try to measure test quality, but if you wanted to, one technique is to track bugs found at each stage of the SDLC. You want to show that bugs are found early and not by customers.

Lastly - customer feedback

3 Likes

How large is your company?

Mostly I've worked for startups, both early and late stage. My current employer was successful and reached a couple of hundred engineers and 20 testers before being acquired; we are still working on integration with the wider business.

That shouldn't matter though; I'd use the same metrics for any size. Actually, getting them is kinda easier in small companies, which are more dynamic.

When you are large, things move more slowly, but you can still get them done if you know the data you need and talk to the right people.

I was asking the original poster, but I didn't make that clear. Still, it is interesting to hear about your company, and interesting to hear your thoughts about measuring the effectiveness of testing and reporting on it!

1 Like

We are a startup with about 100 employees now and expanding. I am also now able to expand my QA department. I have 5 QA Engineers, and I am the hands-on Director.

3 Likes

Thomas,

If you're presenting at a meeting, I suggest looking at this as an opportunity to tell upper management the most important things they need to know. They're likely to be busy and unfamiliar with your organization. Here are some ideas.

  • What your group does and why it's important. Generally, how your group is handling its responsibilities and developing the skills the startup needs.
  • One or two important things needed from management. Possible examples: (1) Tools or training, and why. (2) Although you have the capability to expand your group, you probably need management approval at some level. If staffing is okay, you don't need to bring this up. If there are unmet needs, management (starting at lower levels) should know. Keep in mind longer hours or temporary staff can help in a crunch.

Try to keep everything as simple and clear as possible, with no more than two or three main points for management to take away. Personally, I like statistics, but I would minimize their use. Stories can work better, with background information, problem statements, solutions, etc.

Last (and very important): don't let what you say surprise your manager.

George

1 Like

@tfritz1325 That is an exciting scenario, in which you have the opportunity to set the team culture for the future in a positive way! I'm the QA lead in a smaller company which is technically a startup, but we are very stable and have been steadily expanding for years. We may be in your situation within the next year or two. So my advice may or may not be helpful, but just in case:

I also work in Customer Support, handling escalations and interacting with clients who are dealing with any out-of-the-ordinary issues. In that side of my role, it is important to manage expectations. I have to diplomatically explain that they may not be able to get what they need as quickly as they want it, or that what they want isn't actually going to help them achieve their long-term goals the way they seem to think it will. It requires forethought and tact, and a sincere desire to give the client the best service possible while working with the resources at my disposal.

In the past year, I've applied this skill of managing expectations to the way I communicate with management about QA effectiveness. A year ago, I would often get the question, "What is our coverage?" right before a release. To the asker, it seemed like a meaningful question, but given that we weren't automating anything at the time, and given the nature of our platform, it wasn't the right question to ask. I couldn't give them a number that would really mean anything useful, and it would have been a waste of time and effort to try.

So I started talking about risk areas instead. At the beginning of a sprint, I defined the highest risk areas in the platform, asking the developers to confirm what they were and what the risks were in each area. When I was in a team meeting with management, I'd start by explaining that we would be focusing our testing efforts on those risk areas, and that if areas X, Y, and Z were thoroughly tested, we would be confident the release was ready. Then during the week(s) of testing, I would report each day on what we had accomplished in each risk area.

I haven't heard that unhelpful "coverage" question since, and management is asking me more helpful questions now, which actually help me to improve the answers that I can give them.

3 Likes

Love this! Thanks for sharing, @debco :smiley:

4 Likes

This thread popped up in my company community and it's an interesting subject, so here's my view on it.

I found that focusing on the risks is a great way to communicate the business value of testing.
How you define this risk depends on your product and the industry you're in.

We come from a heavily regulated industry (gambling and gaming) so for us, there is no greater risk than a customer journey not working, or the system not being compliant with regulations. Building a non-performant or non-scalable system is a risk too, which is why we do performance and load testing.

The primary goal of Quality Assurance Engineers in our company is to expose this data to the teams and product stakeholders.

Reporting on the above tends to be tricky because different people have different ways of obtaining the results. One team may have 5 scenarios with 10 checks each, while another may have 50 scenarios for the exact same thing.

In the end, our reporting template consists of several sections:

  1. Information - communicate whatever concerns you may have.
  2. Blockers - communicate where the problems are.
  3. Suites - provide visibility into details and evidence to your claims.
  4. Scenarios - provide the confidence level you have to release and back it up with some numbers.

While the granularity in these approaches is different, the outcome is the same - tests are/aren't passing, something we think is high risk is/isn't broken, and the confidence level to release reflects this state.

When a stakeholder reads this report, they understand:

  1. What the risks of releasing are.
  2. What is the confidence level of the team doing the release.
  3. What was the approach used to assess this risk.

To wrap it all up, if your regression tests focus on clearly identified risks and you have good coverage of these risks (e.g. all customer journeys) I would call your testbed solid.

How do you know, stats-wise, that you're maintaining or improving the quality of these? Well, are you focusing on more risks over time? Are you doing it faster? Is your coverage percentage trending up? Are your tests finding issues? Those are the telltale signs, I'd say.

7 Likes

Great to read this @zeeax. Welcome to the community! I hope you enjoy sharing more of your experiences here. And please feel free to ask any question to start even more interesting conversations.

2 Likes

We track a few metrics that we review weekly to look for trends.

  • Defect detection - production bugs vs pre-prod bugs found
  • Production bugs created per point delivered
  • Bug bounce-back between QA and Dev for pre-prod bugs

If we spot spikes / abnormalities in stats we dive a bit deeper and look for actions we can take to improve.
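
For illustration only, and assuming your tracker can export bugs with an environment tag and a bounce-back count (hypothetical fields), the three metrics above could be summarised along these lines:

```python
# Hypothetical weekly export: where each bug was found and how often it bounced back.
bugs = [
    {"id": "BUG-1", "found_in": "pre-prod", "bounce_backs": 0},
    {"id": "BUG-2", "found_in": "pre-prod", "bounce_backs": 2},
    {"id": "BUG-3", "found_in": "production", "bounce_backs": 0},
]
points_delivered = 42  # hypothetical delivery for the same period

prod = [b for b in bugs if b["found_in"] == "production"]
pre_prod = [b for b in bugs if b["found_in"] == "pre-prod"]

escape_rate = len(prod) / len(bugs)
prod_bugs_per_point = len(prod) / points_delivered
avg_bounce_back = sum(b["bounce_backs"] for b in pre_prod) / len(pre_prod)

print(f"Defect detection: {len(pre_prod)} pre-prod vs {len(prod)} production (escape rate {escape_rate:.0%})")
print(f"Production bugs per point delivered: {prod_bugs_per_point:.3f}")
print(f"Average QA/Dev bounce-backs per pre-prod bug: {avg_bounce_back:.1f}")
```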

2 Likes

Hi Alex, great comment.

It would be interesting to know if you've always reported in this way or have had to "train" management to let go of their metrics. I haven't always worked in places where telling a story about the testing would be accepted over cold hard numbers (well, not initially at least).

S.

4 Likes