A few months ago, I was given the role of Head of QA. One of the directors and I sat down to discuss the enterprise-level goal of the QA department, and we settled on “aiming for a zero defect company”. Obviously this is what we all, as testers, aspire to, but as soon as code passes the “Hello world” stage it becomes, to all intents and purposes, an impossibility; still, it’s there as something to push towards. Anyway, I digress (as I often do). One of the things we also discussed was visibility. One thing led to another, and now, as part of the company’s weekly demos, I’m going to be introducing this concept to the company. What I was planning to do was show a couple of the graphs I’ve cobbled together (support tickets plus postmortems by month, and postmortems by contributing factor), and discuss the need for better gathering of statistics. Does anyone else have to do regular presentations at company demos, and if so, what sort of thing do you present?
I’ve had to present, or organise my team to present, at various company/team meetings. If you are going to put together stats and present them, two things spring to mind:
- Keep the gathering of these stats as simple as possible - if it’s taken more than an hour to put everything together, you may want to scale down.
- Keep the presentation of the stats as quick and high-level as possible - while you mention you want to raise the visibility of testing, presenting complicated stats that people don’t care about will be as effective as not showing them at all.
One thing you may want to do is demo areas of testing, such as show how automation runs, or the cycle of a defect, or something similar which isn’t a presentation of stats.
It can be a tough job getting testing recognised in an enterprise environment, just take your time and keep winning hearts and minds one at a time.
P.S. Another idea (I’ve never tried it): during one presentation, you could ask the audience if there’s an area of testing they’d like to see demoed next time.
Thank you. I had originally put together a load of stats, gathered from all over the place - support tickets, postmortems, all sorts of stuff. With the help of our support team manager, I’ve pared it right back to defect tickets, so I have a single number (I have more in-depth stuff for me, but that’s between me and Google Sheets). As far as presentation goes, I can just say we have x defects, down from (let’s be optimistic) y last week. As I start to get an idea of what I can get out of the various numbers we have, I can start to say things like ‘z defects were due to insufficient unit tests’ or ‘acceptance criteria being incomplete’ (there’s no evidence of this within the company; they’re just general causes I could think of).
Save your data so that you can compare year-to-year, or release-to-release.
It can be comforting to say “this up-tick in the number of open tickets is expected at this time”.
Also, the absolute number of defects/tests isn’t important, but the pattern is.
Most importantly: Spend most of the time talking about what you want more of.
We maintain both enterprise-level and business-unit-level data to help show the impact of quality initiatives. In this manner, we can demonstrate how better quality is saving money on the bottom line. The data also helps show how specific initiatives align with changes in overall quality.
As always, @jesper has sagacious advice. With regard to duration, I have also seen the use of a 12 month running average to reduce noise over a long period of time while demonstrating improvement.
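For anyone who wants to try the running-average idea, here’s a minimal sketch. It assumes the raw data is simply a list of monthly defect counts; the function name and the sample numbers are mine, purely for illustration:

```python
from collections import deque

def running_average(monthly_counts, window=12):
    """Average each month's count with up to the previous (window - 1) months,
    smoothing out month-to-month noise while keeping the long-term trend."""
    recent = deque(maxlen=window)
    averages = []
    for count in monthly_counts:
        recent.append(count)
        averages.append(sum(recent) / len(recent))
    return averages

# Made-up monthly defect counts: noisy, but the smoothed series drifts downward.
counts = [30, 45, 28, 50, 33, 47, 29, 44, 31, 42, 27, 40, 26, 38]
print([round(a, 1) for a in running_average(counts)])
```

Until twelve months of data exist, the window simply averages whatever is available, so the first year’s figures are less smoothed than later ones.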
Thank you both.
You mention money, and one thing I would love, love, love to be able to do is demonstrate the financial implication of defects (and, by extension, ‘failings’ in quality). Whilst typing this, I’ve just thought that this could be worked out as time spent on defects multiplied by what we charge for dev time. In very simplistic terms, the cost of defects could be seen as dev time spent on reactive, as opposed to proactive, work.
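To make that back-of-the-envelope calculation concrete, here’s a tiny sketch. All figures and names are hypothetical, not real company numbers:

```python
def cost_of_defects(defect_hours, hourly_rate):
    """Very simplistic model: price reactive (defect) dev time at the billable rate."""
    return defect_hours * hourly_rate

def reactive_share(defect_hours, total_dev_hours):
    """Fraction of all dev time spent on reactive rather than proactive work."""
    return defect_hours / total_dev_hours

# Hypothetical figures for illustration only.
monthly_defect_hours = 35   # dev hours logged against defect tickets
billable_rate = 80          # what we charge per hour of dev time
total_dev_hours = 500       # all dev hours worked that month

print(cost_of_defects(monthly_defect_hours, billable_rate))      # 2800
print(round(reactive_share(monthly_defect_hours, total_dev_hours), 2))  # 0.07
```

Of course this ignores subtler costs (reputation, churn, opportunity cost of the proactive work not done), but as a single headline number for a demo it may be enough.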
@chris_dabnor this thread might interest you: