How to measure QA adoption in a team

Hi mates,
We are currently adopting QA activities in our organization, and management would like to measure the progress in some way.

Please, can you share with me any experience you have of measuring QA adoption in an organization or across teams?

To me, this is a very subjective topic, because the teams are very different and we cannot use the same criteria for all of them. We need to find a more objective way. Until now, adoption has been evaluated by the coaches who are supporting the teams.

Thanks in advance, mates!

I have been doing part-time QA alongside my testing role.
By QA I am referring to Product/Process Quality Assurance, in relation to Quality Engineering and Product Development.

A few examples:

  • Improvements in data quality. Bad data was impacting the product in several ways: while testing, while developing (having to build and consider workarounds), and by degrading the user experience when weird behaviour or data/text was displayed. I connected with the data controllers, the technical people managing the systems or jobs that were uploading content automatically, the data and systems distribution department, content managers, etc. I helped them categorize problems, pinpoint problems, and demonstrate the data problems with script results. The improvement is measured by the number of visible, recurring data problems still left in the product today, which dropped from dozens to a few every couple of months (see the counting sketch after this list);
  • Improvements in systems integrations. Multiple products and services, internal and external, some maintained and some not, had to work together. The call center was dealing with hundreds of failed purchases per year. Through investigation, analysis of each problem category, finding the sources of problems, collaborating with different departments, and adjusting code and product settings myself, we managed to bring the number of errors down to about 10 per year, appearing only in very extreme cases that were deemed too expensive to fix.
  • Improvements in the integration, packaging, and release processes. We managed to decrease confusion and cut release times from 1-3 days to 2-3 hours.
  • Improvements in the code and products. I do code inspections and reviews, and fix problems from time to time when I find them, resulting in fewer known bugs in the backlog/product;
  • Improvements in external services integrations. I have been helping developers with technical investigations of the external services: finding their bugs, things to avoid, features to request, workarounds to create, and suggestions and examples of usage/implementation, which facilitated development and shortened it by days or weeks.
  • Internationalization, globalization, localization, and translation. Seeing the pain points in these areas when we were implementing and releasing application increments, I started to look into the subject. I helped with tooling, guidance, usage documentation, reviewing all content and code changes, adjusting, and translating. This greatly decreased the number of translatable static-content issues in the production system, decreased release times, increased people's awareness and knowledge, and let us onboard a business person to help with this;
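
To make the measurement in the first bullet concrete: here is a minimal sketch of how recurring data problems could be counted per category, so the trend becomes a number you can chart over time. The rule names and record fields are invented for illustration, not taken from the post; real checks would match your own data.

```python
# A minimal sketch of counting recurring data problems by category.
# The rules and fields below are hypothetical placeholders.
from collections import Counter

# Each rule returns True when the record is OK.
RULES = {
    "missing_title":    lambda r: bool(r.get("title", "").strip()),
    "negative_price":   lambda r: r.get("price", 0) >= 0,
    "broken_image_url": lambda r: str(r.get("image_url", "")).startswith("http"),
}

def count_problems(records):
    """Count rule violations per category across all records."""
    problems = Counter()
    for record in records:
        for name, is_ok in RULES.items():
            if not is_ok(record):
                problems[name] += 1
    return problems

sample = [
    {"title": "Widget", "price": 9.99, "image_url": "http://example.com/w.png"},
    {"title": "",       "price": -1,   "image_url": "ftp://bad"},
]
print(count_problems(sample))
# Counter({'missing_title': 1, 'negative_price': 1, 'broken_image_url': 1})
```

Run the same checks on a schedule and the "dozens down to a few" improvement becomes visible in the totals.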

Some ideas to help you think about it (a small tracking sketch follows the list):

  • Are things going faster, more often, and with better clarity (development, releases, interactions, communications, conclusions to tech or business debates, meetings)?
  • Are people happier to work on the product/project, do they feel more accomplished, and are they willing to do the extra things to improve (devs, UI/UX, tech leads, support, tech writers, content editors, testers)?
  • Are the stakeholders who depend on the product (direct managers, department managers) happier and praising the work and product more than before? Do they encounter fewer problems? Are they seeing gains: people-wise, budget, user-base increases, decreases in support calls, satisfaction panel scores, etc.?
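
One way to make those questions comparable across otherwise very different teams is to score the same few indicators for every team each quarter and look only at each team's own trend. Here is a minimal sketch under that assumption; the indicator names and the 0-2 scale are my own placeholders, not something from this thread.

```python
# A minimal sketch of turning the questions above into a comparable
# number per team per quarter. The indicators and scale are assumptions.
from dataclasses import dataclass, asdict

@dataclass
class AdoptionSnapshot:
    team: str
    quarter: str
    release_speed: int             # 0 = slower, 1 = same, 2 = faster than last quarter
    team_morale: int               # 0-2, from a short team survey
    stakeholder_satisfaction: int  # 0-2, from stakeholder feedback
    support_calls_trend: int       # 0 = rising, 1 = flat, 2 = falling

    def score(self) -> float:
        """Average of all integer indicators for this snapshot."""
        values = [v for v in asdict(self).values() if isinstance(v, int)]
        return sum(values) / len(values)

q1 = AdoptionSnapshot("checkout", "2024-Q1", release_speed=1, team_morale=1,
                      stakeholder_satisfaction=0, support_calls_trend=1)
q2 = AdoptionSnapshot("checkout", "2024-Q2", release_speed=2, team_morale=1,
                      stakeholder_satisfaction=1, support_calls_trend=2)
print(q2.score() - q1.score())  # 0.75: a positive delta suggests adoption is moving forward
```

Because only the per-team delta is compared, teams doing very different work can still be judged on the same question: is this team's adoption trending up?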

Stefan put data quality at the top of his list, and I’d endorse that. My first testing role actually started out with a major element of data quality as the organisation I was working in relied on data collected from third parties. Specialists would take decisions based on that data, and we had mechanisms in place for data quality to be independently audited. My role was a bit of “who watches the watchers?” and so I was working with external engineers on methods of assuring data quality.

When the emphasis switched to data collection applications, the work I’d done on data quality fed back into the organisation’s buy-in to software testing as a part of end-to-end quality assurance measures. (The only problem was that we got so good at it that our contribution became invisible, because confidence in the numbers was high and everyone assumed that it ‘just happened’.)

Hi Ditka,
It is difficult to answer because I don’t know which processes you want to improve. It is crucial to know what management expects to achieve and what success looks like. In general, I recommend the KPI Library (http://kpilibrary.com/) or trying the OKR methodology (for example, an objective like “QA activities are part of every team’s normal delivery”, with a few measurable key results under it).

You said there are different teams, but it is not obvious what is different about them. If the process is the same, you can establish a maturity model and measure which team is at which level. If the process is not the same, one shared model won’t help, and a maturity model must be prepared for each of them (a small scoring sketch follows).
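
For illustration, a per-team maturity model can be as simple as scoring a fixed list of practices against named levels. The practices and level labels below are hypothetical placeholders; as noted above, if processes differ, each team would get its own practice list.

```python
# A minimal sketch of a per-team maturity model. Practices and level
# descriptions are placeholders to be replaced with your own.
MATURITY_LEVELS = {
    0: "not practised",
    1: "ad hoc",
    2: "defined and followed",
    3: "measured and improving",
}

# Hypothetical practices; coaches score each one against the levels above.
PRACTICES = ["code review", "automated checks in CI", "exploratory testing", "defect analysis"]

def team_maturity(scores: dict) -> float:
    """Average level across practices; missing practices count as level 0."""
    return sum(scores.get(p, 0) for p in PRACTICES) / len(PRACTICES)

team_a = {"code review": 3, "automated checks in CI": 2, "exploratory testing": 1}
print(team_maturity(team_a))  # 1.5, since "defect analysis" is missing and scores 0
```

Re-scoring each team every quarter turns the coaches' existing evaluations into a trend management can follow.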

ms