QA means different things to different organisations; it may even mean different things to the same organisation at different times. When I started in what my then employers called “QA”, the job was about ensuring the quality of data collected by third parties and validated by independent industry professionals. It then became about designing tools to collect and manipulate that data accurately, according to the needs of end users. Only once we had been through the loops of consulting the industry on what data we should collect, and gathering requirements from all our own specialists, did we get into a cycle of developing software tools and, finally, testing them.
At the same time, the QA role was also responsible for the use of that data in external, public-facing reports: checking data sources and, in particular, checking data consistency (are the numbers quoted in the text of a report consistent with the numbers shown in its tables and on the main database?).
This was in a public sector organisation whose role was to collect data in order to set policies and, ultimately, determine utility prices. Nowadays I work in a private company whose aim is to roll out a software product. The relative roles of “QA” and testing are, and always will be, different in two such organisations.
I see “QA” as a matrix in which data consistency, accurate use of data and software testing all contribute to the organisation’s overall position on quality. Along the way, there are battles to be fought internally over who owns data and projects, who is responsible for error trapping, and who is responsible for declaring an application, a report, a product or any other output “fit for use”. Testing, and determining the limits, responsibilities and expectations of testers, is just one part of this wider corporate landscape.