To show what your team is doing, what type of reports/metrics do you share with management?
I’ve generally found there are three layers of reports.
The first is the rapid feedback loop of useful information that the team uses day to day: verbal, Slack, and short test session reports, with issues of significance raised as they come up.
The second is closer to milestone reporting; it's a bit broader and deeper than the first, and gives an overall quality evaluation of the product.
One key thing, though: beyond that first layer, test reports should really be just a section of the overall team and product reports, not something separate.
The third layer of reports has always been a grey area for me. Management wants a separate report; often they may not understand testing, lack trust, and demand visibility and evidence of testing. Apart from keeping management off the testers' backs, which is in itself invaluable, it's often just waste: they read the first couple and then don't look at them again until something goes wrong, at which point they go back to wanting more visibility. Automate these reports if you can; they remain mostly waste, produced for managers who can't make the effort to have a discussion about testing every now and then.
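If the inputs already exist in machine-readable form, that automation can be as small as a scheduled script. Here's a minimal sketch, assuming the nightly run produces a JUnit-style results.xml and you have a Slack incoming webhook set up; the file name and the SLACK_WEBHOOK_URL environment variable are placeholders, not anything specific to a particular tool:

```python
# Minimal sketch: turn a nightly JUnit-style results file into a short Slack
# summary, so the "management visibility" report sends itself.
# Assumptions: results.xml is a JUnit XML report and SLACK_WEBHOOK_URL holds
# an incoming-webhook URL you have configured; both are placeholders.
import json
import os
import urllib.request
import xml.etree.ElementTree as ET

def summarise(junit_path: str) -> str:
    root = ET.parse(junit_path).getroot()
    # JUnit reports may have a <testsuites> wrapper or a single <testsuite>.
    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]
    tests = failures = errors = skipped = 0
    for s in suites:
        tests += int(s.get("tests", 0))
        failures += int(s.get("failures", 0))
        errors += int(s.get("errors", 0))
        skipped += int(s.get("skipped", 0))
    passed = tests - failures - errors - skipped
    return (f"Nightly test run: {tests} tests, {passed} passed, "
            f"{failures + errors} failed, {skipped} skipped.")

def post_to_slack(text: str) -> None:
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],  # placeholder webhook URL
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    post_to_slack(summarise("results.xml"))
```

Run it from the CI scheduler after the nightly suite finishes and the report stays current without anyone writing it.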
After a chat, when they actually look at what we have, how we communicate this information, and how we contribute to team-level reports, they tend to say "ahh, okay, that makes sense." But that third-layer request seems to occur again, usually from an anonymous manager somewhere or based on a myth about a lack of insight.
I have to admit, I'm a stats nut. I am actually the person responsible for providing engineering metrics to management, whilst also leading the QA team in my organisation.
I agree with @andrewkelly2555 on the day-to-day reporting. Rapid feedback through Slack, verbal or messaged, the focus being to get things done in the sprint.
However, reporting to management is a different ball game. What they want metrics on is very simple: Cost/Effort, Quality and Timescale relating to software output, at a high level.
Now, in the past I've made the mistake of researching and coming up with metrics in isolation, then putting them in front of management and asking, "Is that what you want?". The problem with that approach is that anything flagged up by those metrics usually needs engineers to respond. If the engineers have had no input into the metrics, they're not a priority for them, and they're not a priority for scrum masters either. I also, as a "stats nut", produced far too many metrics. So recently we did a reset and took a new approach.
- We scrapped all the metrics and started again, this time starting with the engineers, asking: as a sprint team and as a function, how do we know how well we are doing, and what areas do we need to improve? The focus being that if we agree on what we value, then we can give those metrics to management and believe in them.
- Also, to trim them down, we focused on the meaning of KPI. Whilst in the past I produced a lot of interesting stats (well, I found them interesting), each one had to pass the test of when is it "good" and when is it "bad". If the engineers couldn't answer that, we accepted it's not a KPI.
- We found that what engineering valued was in line with the DORA metrics (a rough sketch of how a couple of them can be computed follows the lists below).
So we report:
- Deployment Frequency
- Lead time for changes
- Change Failure Rate
- Mean Time to Recover
Also, not specifically DORA metrics:
- Sprint completion levels
- Regression Testing elapsed time (includes any waiting time for environments, bug fixing etc.)
- Feature Growth
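For illustration, here's a minimal sketch of how two of the DORA metrics above (deployment frequency and lead time for changes) could be derived, assuming each deployment is logged as a pair of timestamps; the data and layout are made up for the example, not how any particular pipeline records it:

```python
# Rough sketch for two DORA metrics: deployment frequency and lead time for
# changes. Assumption: each deployment is recorded as a pair of timestamps
# (first commit in the release, moment it reached production); the sample
# data below is illustrative only.
from datetime import datetime, timedelta
from statistics import median

deployments = [
    # (first commit timestamp, production deploy timestamp)
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 2, 15, 0)),
    (datetime(2024, 5, 3, 11, 0), datetime(2024, 5, 3, 17, 30)),
    (datetime(2024, 5, 7, 10, 0), datetime(2024, 5, 9, 12, 0)),
]

def deployment_frequency(deploys, window: timedelta) -> float:
    """Deployments per day over the reporting window."""
    return len(deploys) / (window.days or 1)

def lead_time_for_changes(deploys) -> timedelta:
    """Median time from commit to running in production."""
    return median(deployed - committed for committed, deployed in deploys)

if __name__ == "__main__":
    print("Deploys per day:",
          round(deployment_frequency(deployments, timedelta(days=14)), 2))
    print("Median lead time:", lead_time_for_changes(deployments))
```

Change failure rate and mean time to recover follow the same pattern once you tag which deployments caused incidents and when those incidents were resolved.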
Hope that helps!
There are different types of reports sent to management for team updates; however, each report is sent by a different person:
- QA Report (sent at the end of each sprint, which is usually two weeks) - focuses on how many tickets were tested by the QA team and which tickets couldn't be closed, plus test metrics such as how many test cases were executed (both manual and automated) and how many passed, failed, or were skipped. It is sent by the person who led the testing on the project in that sprint.
- Automation report (sent every 7-10 days) - focuses on the progress made in automation, the challenges the automation team is currently facing, and their dependencies. It is sent by the Automation Lead.
- Upskilling report (sent every 2-3 months) - covers which QA is upskilling on which topic, whether it is AI, automation, or anything else. It also contains details of certifications that testers have obtained or are planning to obtain in the next couple of months. It is sent by the QA Manager.
- Test Plan - not a report as such, but still a document shared with management at the start of the testing process to make them aware of the details of that process. It is sent by the person who leads the testing team on the project.
Visual Report:
- Test Tickets - all in-test tickets
- Bugs Chart - bug tickets with all statuses (created, in progress, in test, fixed)
- Test Cases Chart - test cases with all statuses (created, updated, done)
- Automation Test - nightly run automation report (failed, passed, skipped)
Detailed Report:
For stakeholders or anyone who needs a more detailed report (current testing status and condition, QA capacity, bug status, team status), plus target and plan report status.
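As a sketch of where the numbers behind charts like these can come from, the snippet below counts tickets per type and status from a tracker export; the tickets.csv file and its "type"/"status" column names are assumptions for the example, not any particular tool's format:

```python
# Minimal sketch of the counts behind the visual report panels above.
# Assumption: tickets.csv is an export from your tracker with at least
# "type" (Bug / Test Case / ...) and "status" columns; both are placeholders.
import csv
from collections import Counter

def status_counts(csv_path: str) -> dict[str, Counter]:
    counts: dict[str, Counter] = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts.setdefault(row["type"], Counter())[row["status"]] += 1
    return counts

if __name__ == "__main__":
    for ticket_type, by_status in status_counts("tickets.csv").items():
        breakdown = ", ".join(f"{s}: {n}" for s, n in sorted(by_status.items()))
        print(f"{ticket_type}: {breakdown}")
```

Feed the same counts into whatever charting your dashboard already uses and the visual report stays a by-product of the data rather than a manual chore.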