Do you measure and report on quality?

Do you measure and report on quality? How? Metrics? Estimates? Best guesses? Feelings?

What can we learn from other industries (e.g. hotel ratings, online star ratings, etc.)? Is there such a standard for quality? Do we need one for consistency? Why? Why not?

Thoughts…

A lot depends on what sort of quality you're measuring, in what context, with what end objective and for what end user. For instance, in a previous existence in utility regulation I was closely engaged with "quality assurance", which in that context was about data quality: how robust were the reported numbers. We applied "confidence grades" to reported numbers, combining two measures:

1) How the numbers were collected: directly from source instrumentation or direct observation; by statistical correlation of documentation (for example, 'on this mains replacement project, how many kilometres of mains were replaced?' would be assembled from documentation of how much pipework was ordered for the job, rather than someone walking the length of the site with a measuring wheel - this was before GPS!); or by more general number-crunching of statistics at a higher level.

2) How robust the data collection methodologies were: how the data was collected, by whom, how validated, how checked, how reviewed and by whom, and how signed off and by whom.

I worked with specialist rapporteurs from the civil engineering profession who independently reported on these methodologies.

(Over time, as most companies reporting these numbers got a better handle on them and more reported confidence grades migrated to the best possible - A1 - we dropped the requirement for these to be reported and commented on.)
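To make that concrete, here is a minimal sketch of how such a confidence grade could be modelled. Only the two-part structure and the top grade "A1" come from the scheme described above; the specific band letters, numbers and their descriptions are illustrative assumptions, not the regulator's actual definitions.

```python
from dataclasses import dataclass

# Hypothetical bands: the post only confirms that a grade combines a
# collection-method measure with a methodology-robustness measure, and
# that "A1" is the best possible grade. These bands are made up.
RELIABILITY_BANDS = {
    "A": "direct measurement at source / direct observation",
    "B": "derived from project documentation",
    "C": "extrapolated from higher-level statistics",
}
ROBUSTNESS_BANDS = {
    1: "fully validated, independently reviewed and signed off",
    2: "validated, with limited independent review",
    3: "sample checks only",
    4: "unverified",
}

@dataclass(frozen=True)
class ConfidenceGrade:
    reliability: str  # how the number was collected
    robustness: int   # how sound the collection methodology is

    def __post_init__(self):
        if self.reliability not in RELIABILITY_BANDS:
            raise ValueError(f"unknown reliability band {self.reliability!r}")
        if self.robustness not in ROBUSTNESS_BANDS:
            raise ValueError(f"unknown robustness band {self.robustness!r}")

    def __str__(self):
        return f"{self.reliability}{self.robustness}"

# The best possible grade mentioned above:
best = ConfidenceGrade("A", 1)
print(best)  # -> A1
```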

Only after the question of how confident we could be in the robustness of the data was settled - or at least on a sound footing - did we turn to the systems we used to collect, collate and store that data, and to how these shiny new "computer" thingies could help. And that's when I started testing software as the next link in that business chain. The rest, as they say, is history.


When I became head of our QA department, this was a topic that was very close to my heart. I wanted to get an idea of where we were, so that I could see whether the things I was trying were having a positive or negative impact. It seems that it's much easier to get hard figures where there's a lack of quality. For example, I've just (about half an hour ago) presented to the company figures on the number of defect-related support tickets and logs. Whilst this is useful in a comparative sense (are they going down at a suitable rate?), it doesn't tell us the 'quality' of what we produce. I also sent out an internal, anonymous poll asking people to rate the quality of what we produce on a scale of 1-10. Obviously this isn't a scientific measure, and it's very subjective, but it gave me something to think about (and talk about - asking people why the figures were the way they were).
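For illustration, here is a minimal sketch of both measures: the comparative trend in defect-related tickets, and the headline number from the 1-10 poll. All figures, month names and score values below are made up; the real inputs would come from your support system and poll tool.

```python
from statistics import mean

# Hypothetical monthly counts of defect-related support tickets.
tickets_per_month = {"Jan": 42, "Feb": 37, "Mar": 31, "Apr": 28}

def month_over_month_change(counts):
    """Yield (month, absolute change, % change) for each consecutive pair,
    giving the comparative view: are tickets going down at a suitable rate?"""
    months = list(counts)
    for prev, curr in zip(months, months[1:]):
        delta = counts[curr] - counts[prev]
        pct = 100 * delta / counts[prev]
        yield curr, delta, pct

for month, delta, pct in month_over_month_change(tickets_per_month):
    print(f"{month}: {delta:+d} tickets ({pct:+.1f}%)")

# Hypothetical anonymous poll scores on a 1-10 scale. The mean gives a
# single headline number; the spread is what prompts the follow-up
# conversations about why people rated it the way they did.
poll_scores = [7, 6, 8, 5, 7, 9, 6, 7]
print(f"mean quality rating: {mean(poll_scores):.1f}/10 "
      f"(range {min(poll_scores)}-{max(poll_scores)})")
```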


Very good idea to try to 'measure' quality in different ways.
Maybe scoring by users of the system? Plus continuous monitoring of user experiences? Just a thought.


I normally jump at stats; however, the user is central in defining the quality of any software.