A lot depends on what sort of quality you’re measuring, in what context, with what end objective and for what end user. For instance, in a previous existence in utility regulation I was closely engaged with “quality assurance”, which in that context was about data quality: how robust were the reported numbers. We applied “confidence grades” to reported numbers, combining two measures: 1) how the numbers were collected - directly from source instrumentation or direct observation, by statistical correlation of documentation (for example, ‘on this mains replacement project, how many kilometres of mains were replaced?’, assembled from records of how much pipework was ordered for the job rather than by someone walking the length of the site with a measuring wheel - this was before GPS!), or by more general number-crunching of statistics at a higher level; and 2) how robust the data collection methodologies were - how the data were collected, by whom, how validated, how checked, how reviewed and by whom, and how signed off and by whom. I worked with specialist rapporteurs from the civil engineering profession who independently reported on these methodologies. (A minimal sketch of how such a two-part grade might be modelled follows below.)
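To make the two-part grade concrete, here is a minimal sketch in Python, purely for illustration. The specific band names, values and mappings are my assumptions, not the regulator’s actual scheme; the only thing taken from the description above is the idea that a measure of methodology robustness and a measure of how the number was collected combine into a grade such as “A1” (the best possible).

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: these bands are assumed for the sketch,
# not the regulator's actual grading scheme.

class CollectionMethod(Enum):
    DIRECT_MEASUREMENT = 1        # read straight from source instrumentation / direct observation
    CORRELATED_DOCUMENTATION = 2  # inferred from documentation, e.g. pipework ordered for the job
    HIGH_LEVEL_STATISTICS = 3     # number-crunched from aggregate statistics

class MethodologyRobustness(Enum):
    INDEPENDENTLY_REPORTED = "A"  # collected, validated, reviewed, signed off, independently reported on
    INTERNALLY_REVIEWED = "B"
    UNVERIFIED = "C"

@dataclass
class ReportedNumber:
    description: str
    value: float
    method: CollectionMethod
    robustness: MethodologyRobustness

    @property
    def confidence_grade(self) -> str:
        """Combine the two measures into a grade such as 'A1' (best possible)."""
        return f"{self.robustness.value}{self.method.value}"

# Example: a mains-replacement figure assembled from ordering documentation
mains_replaced_km = ReportedNumber(
    description="Kilometres of mains replaced",
    value=12.4,
    method=CollectionMethod.CORRELATED_DOCUMENTATION,
    robustness=MethodologyRobustness.INDEPENDENTLY_REPORTED,
)
print(mains_replaced_km.confidence_grade)  # -> "A2"
```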
(Over time, as most companies reporting these numbers got a better handle on them and more of the reported confidence grades migrated to the best possible grade, A1, we dropped the requirement for these to be reported and commented on.)
Only after the question of how confident we could be in the robustness of the data was settled - or at least put on a sound footing - did we turn to the systems we used to collect, collate and store that data, and to how these shiny new “computer” thingies could help with that. And that’s when I started testing software, as the next link in that business chain. The rest, as they say, is history.