Hi everyone,
Does anyone here have benchmarks or industry standards for QA KPIs? I'm looking for references to better align our QA reporting practices. Any insights or resources would be greatly appreciated.
Thank you!
Can you expand a little on why you want these and what problem you are looking for them to solve?
Adopting KPIs blindly can be very harmful for teams.
Some teams use them purely for salary reviews, for example: to score people and divide a budget between team members. That in itself can be harmful.
Other teams use them to help people improve: they discuss with the team the areas where they feel they could do better, decide on good ways of measuring whether they have improved (which includes establishing where they are now), and put a supporting action plan in place to achieve that improvement.
I recommend against adopting any industry-standard ones, as they often leave out the action plan for change.
Have a think about why you want to use them, and I'm sure others will be able to give you more valuable feedback.
KPIs are an abstraction invented by lazy managers who do not know anything about either their staff or their product. Like any abstraction, they are merely a model, and they are almost never good models. The only KPI that counts for quality is CRUDs.
What is a CRUD? It's a Customer Raised Unique Defect: a measure decided by your actual customers of how buggy your product is. Not a made-up number in Jira about defect density, nothing to do with tech debt sadly, but raw customer feedback covering all quality aspects: security, UX, reliability, and suitability.

Your support desk will already be tracking these, and the rate at which they arrive, and the severity of each, is the only true benchmark of quality. Sure, marketability and sales figures also count, a lot, but QA should not get too tied up in that world. Each customer defect is triaged to check whether it is a duplicate of one already seen; we only want to count uniques. Each one that customers find before QA does counts double.

IMHO this beats the DevOps metrics and things like DORA, SPACE, and GSM for QA, but it is admittedly harder work. There is no perfect model, and changing your model every year is probably the best thing you could do.
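To make that counting rule concrete, here is a minimal sketch of how you could score CRUDs. Everything in it (the `Defect` fields, the severity weights, the dedup fingerprint) is my own assumption about how a support desk might record things, not any real tool's schema:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    fingerprint: str            # triage-assigned ID; duplicates share one
    severity: int               # e.g. 1 = minor .. 4 = critical
    customer_found_first: bool  # True if the customer hit it before QA

def crud_score(defects: list[Defect]) -> int:
    """Severity-weighted score over unique customer-raised defects:
    duplicates (same fingerprint) count once, and anything the
    customer found before QA did counts double."""
    unique: dict[str, Defect] = {}
    for d in defects:
        unique.setdefault(d.fingerprint, d)  # keep the first report only
    return sum(d.severity * (2 if d.customer_found_first else 1)
               for d in unique.values())

print(crud_score([
    Defect("LOGIN-TIMEOUT", severity=3, customer_found_first=True),
    Defect("LOGIN-TIMEOUT", severity=3, customer_found_first=True),  # dup
    Defect("PDF-EXPORT", severity=1, customer_found_first=False),
]))  # -> 7 (3 * 2 for the unique login bug, plus 1 for the export bug)
```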
Hello,
You can check out industry standards like GSM, DORA, or other guidelines, but KPIs need to be individually selected and/or updated based on your team's (or teams') process.
When selecting KPIs, be careful about how they will affect the team's practices. People can start focusing on and improving only these KPIs, sometimes unconsciously, at the cost of reduced performance in other areas (gamification).
For example, if you focus only on bugs found in production, testers will start taking more and more time on regression, saying the risk is too high, and postponing production releases more and more often. That doesn't mean bugs found in production should not be a KPI; it should be! But you also need other KPIs to monitor test efficiency and code quality.
Technical debt is one good KPI, as it impacts your customers. If you can keep your tech debt at zero, that is incredible, but not every company or product has that capability. Typically, Critical/Major bugs are fixed before going to production, but if you release with a growing number of Medium/Minor bugs, your product will look "bad", and customers will not necessarily report those Medium bugs. If there are a lot of them, customers will notice, will not appreciate it, and may come away with an overall sluggish/glitchy impression, eventually no longer working with you and no longer using your product.
SLO, or how fast you fix bugs found in production, is also a good KPI. If your code is of poor quality (hard to troubleshoot, no logs, hard for other developers to understand and maintain…), fixing bugs can take weeks if not months. This KPI forces developers to follow good coding practices and helps testers troubleshoot. Again, be careful: I've seen a team with excellent SLOs that kept reducing its testing and allowing more bugs into production, saying they could fix them really quickly. At some point, they were spending more time fixing bugs than creating value with new features… not good!
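If you want to track this, a minimal sketch could look like the following. The five-day target and the timestamps are invented for illustration; your tracker and target will differ:

```python
from datetime import datetime, timedelta

SLO_TARGET = timedelta(days=5)  # invented target: fix within five days

# Invented records: (reported, fixed) timestamps for production bugs.
fix_log = [
    (datetime(2024, 1, 3), datetime(2024, 1, 5)),
    (datetime(2024, 1, 10), datetime(2024, 1, 11)),
    (datetime(2024, 2, 1), datetime(2024, 2, 20)),
]

def slo_compliance(log):
    """Fraction of production bugs fixed within the SLO target."""
    within = sum(1 for reported, fixed in log
                 if fixed - reported <= SLO_TARGET)
    return within / len(log)

print(f"SLO compliance: {slo_compliance(fix_log):.0%}")  # -> 67%
```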
You can also try different KPIs, even if they might not make much sense at first. We recently tried to measure how quickly Critical and Major bugs are found before pushing to production. The idea was to push teams to better assess high-risk areas and focus testing on them earlier (shift left) before production. It did help, but we have not yet measured the impact on test efficiency and on overall bugs found in production.
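As a rough illustration of the kind of measurement I mean (the data, field layout, and severity names here are invented, not our actual tooling):

```python
from datetime import datetime
from statistics import median

# Invented data: (severity, cycle_start, found_at) per pre-production bug.
bugs = [
    ("Critical", datetime(2024, 3, 1), datetime(2024, 3, 2)),
    ("Major",    datetime(2024, 3, 1), datetime(2024, 3, 8)),
    ("Minor",    datetime(2024, 3, 1), datetime(2024, 3, 4)),
]

def median_days_to_find(bugs, severities=("Critical", "Major")):
    """Median days from the start of the test cycle to discovery,
    for high-severity bugs only; lower means risk was assessed earlier."""
    return median((found - start).days
                  for sev, start, found in bugs if sev in severities)

print(median_days_to_find(bugs))  # -> 4.0 (median of 1 and 7 days)
```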
Keep reading on the subject, and good luck finding KPIs that fit your team(s).
Welcome to the Ministry of Testing club, Andre. You aced it in that first reply by being very specific about good metrics. I do hope you keep contributing answers that hit the mark like that.
C
Hi @cloudiehg,
In my previous company, we had a set of KPIs to guide and evaluate our QA processes and teamwork quality, though they weren't always implemented strictly. These could serve as a good starting point for you:
- DDP (Defect Detection Percentage): Measures how effectively defects are detected during testing versus how many escape to production (see the formula sketch after this list).
- Number of New Bugs After the First Cycle: Tracks defects that escaped and were not caught in the first cycle, categorized by severity.
- Number of Invalid Bugs: Monitors the accuracy of reported issues.
- SLA Met: Evaluates adherence to Service Level Agreements with the operations and TS teams.
- Initiatives and Scope of Impact: Assesses contributions outside day-to-day testing, such as process improvements or tool implementations, and their impact across the organization.
and there are many more.
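For DDP, the usual calculation is the share of all known defects that testing caught before release. A minimal sketch (treating the zero-defect case as 100% is my own convention):

```python
def ddp(found_in_testing: int, escaped_to_production: int) -> float:
    """Defect Detection Percentage: the share of all known defects
    that testing caught before release."""
    total = found_in_testing + escaped_to_production
    if total == 0:
        return 100.0  # no defects anywhere: treat as perfect detection
    return 100.0 * found_in_testing / total

print(ddp(45, 5))  # -> 90.0: testing caught 90% of the known defects
```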
Team members were evaluated against these KPIs on a scale of 1–5, with predefined expectations based on their role level (e.g., Junior, Senior). Each KPI had an expected percentage or target number for evaluation purposes.
You may refer to the ISTQB Expert Level Syllabus on "Implementing Test Process Improvement", as it outlines various test effectiveness metrics.
Finally, I like @conrad.braam's point that "KPIs are invented by lazy managers who do not know anything about either their staff, or their product".