Developing Quality Standards

Hello Testers!

I recently took over quality management for a mature application with a small team of automated and manual testers and am looking to develop something like Quality Standards for our company as a whole as well as for our team. We don't have any documentation surrounding quality right now and I'm feeling a little stymied on where to start.

I ran brainstorming sessions with various teams throughout the company and came out with a wide range of items viewed as contributing different amounts to Quality - from broad items like "delivering what is advertised" to nitty-gritty things like "consistency in what characters we support."

Do your companies have documentation for something like this?
Is it at the team level, department level, company level?
How is it organized?
I'd love any feedback, examples, or resources you may have!

4 Likes

Welcome, Wren.

Whenever you try to measure quality, all you end up doing is working out how much diesel is still in the tank. And that can change at any time, especially if people decide diesel is bad. So assertions about which characters or languages or fonts, or OS versions, or hardware vendors, or network types we like to say we support can be counterproductive. I prefer to measure quality in terms of outputs. Things like how many customer-reported unique defects are coming in (CRUDs for short). Or maybe look at agility in your teams, by looking at how often a team is able to release. And even how good inter-team communication is, by looking at how often teams do feature demos.
There are more nitty-gritty ways of measuring quality, but over the longer term, these three measures are very hard to gamify.
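To make those three concrete, here's a rough sketch of how you might pull them out of whatever your trackers export each month - the field names and data shapes are entirely made up, so adjust to what you actually record:

```python
from collections import Counter
from datetime import date

# Hypothetical records - swap in whatever your defect tracker / CI actually exports.
customer_defects = [
    {"id": "D-101", "reported": date(2024, 3, 4), "duplicate_of": None},
    {"id": "D-102", "reported": date(2024, 3, 9), "duplicate_of": "D-101"},
    {"id": "D-103", "reported": date(2024, 4, 2), "duplicate_of": None},
]
releases = [date(2024, 3, 1), date(2024, 3, 15), date(2024, 4, 5)]
demos = [date(2024, 3, 8), date(2024, 4, 12)]

def per_month(dates):
    """Count events per (year, month)."""
    return Counter((d.year, d.month) for d in dates)

# Customer-reported unique defects (CRUDs): count only non-duplicates.
cruds = per_month(d["reported"] for d in customer_defects if d["duplicate_of"] is None)
release_rate = per_month(releases)
demo_rate = per_month(demos)

for month in sorted(set(cruds) | set(release_rate) | set(demo_rate)):
    print(f"{month[0]}-{month[1]:02d}: CRUDs={cruds[month]}, "
          f"releases={release_rate[month]}, demos={demo_rate[month]}")
```

The point is less the code and more that the inputs (defects, releases, demos) are probably things you're already recording somewhere.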

3 Likes

In a previous role, the organisation's primary focus was on data quality, as we were collecting data from a variety of utility companies to inform regulatory decisions. The initial focus was therefore on data collection and how that data was validated - what sort of collection systems companies had, how robust they were, and how much confidence we could attach to any given number. Defining what information we needed to collect, how it should be collected, and then collated for onward transmission to us was what I spent the first eighteen months in "Quality Assurance" doing, liaising with consulting engineers on questions of data collection in the Real World.

It was only later, as the project progressed, that I got involved with software testing (after all, the first version of the data collection tool was a series of paper forms; completing the forms via one of these new "spreadsheets" was an option for the more technically-adventurous companies we dealt with). The objective of nearly 90% of my testing was to be able to demonstrate to our senior decision-makers that our systems were robust and delivered consistent and accurate numbers that they could rely upon in a high-profile political environment. Testing for functionality was very much an afterthought.

4 Likes

Thanks! I'm less interested in measuring the success of our quality team's work - we're actually in a pretty great place there with functional testing. One of the issues we've encountered in the dev cycle, though, is confusion over what actually counts as a bug. For instance, if the QA rep files a bug for being unable to save a form by hitting 'Enter' on the keyboard, or for the telemetry recorded on that 'save' being confusing, and that particular item isn't in the requirements, it requires a conversation with the dev and PO. If it comes up on two different stories it might be decided two different ways, and then we have inconsistencies across the app. I'd like to provide some agreed-upon standards at that nitty-gritty level to reduce those inconsistencies and the conversations around them. I'll need buy-in from prod and dev on whatever I develop, but I'd like to get something started. Does your company have anything like that? Is it called something different?

1 Like

Nope, it's a common flaw. I sort of detected that you might be suffering the same communication (insert Greek word here) that I do. Someone with some clout points out an app inconsistency, which was not detected by QA, because the inconsistency they pointed out is consistently inconsistent. At the end of the day this impacts time to release. The devs change the size of the button and move it to a different page, and then the regression testing starts again, holding the whole team back. Trying to fix the text/font/size/color on a button is very often a symptom of a completely different problem that can only be solved by focusing on a "goal", such as time between releases.

I've never worked anywhere where product owners are prepared to help QA out by getting marketing to give us this level of detail. I don't believe it's achievable on paper alone. (That's not entirely true: if the company can agree on a few shared design style books to work from, that can help a lot to create common understanding.) Everyone thinks that cross-platform and responsive apps are easy to build and test; they just aren't. I believe that regular demos and some accompanying sign-off on designs is the only way to get better at this problem in a measurable way; docs and books are just a tool.

2 Likes

Hi @wrenarf,

Have you looked into the DORA metrics by Nicole Forsgren? I would go that way if a team of mine were into this question, e.g. here:
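In case it helps to see what they look like in practice, here's a minimal sketch of three of the four DORA measures (deployment frequency, lead time for changes, change failure rate) computed from deployment records. The data shapes are invented, so adapt it to whatever your pipeline actually logs:

```python
from datetime import datetime
from statistics import median

# Invented example data: when each deploy shipped, when its oldest commit
# was authored, and whether it caused a failure in production.
deploys = [
    {"shipped": datetime(2024, 5, 2, 10), "oldest_commit": datetime(2024, 4, 29, 9), "failed": False},
    {"shipped": datetime(2024, 5, 9, 15), "oldest_commit": datetime(2024, 5, 6, 11), "failed": True},
    {"shipped": datetime(2024, 5, 16, 12), "oldest_commit": datetime(2024, 5, 14, 8), "failed": False},
]

# Measurement window: span between the first and last deploy in the sample.
window_days = (max(d["shipped"] for d in deploys) - min(d["shipped"] for d in deploys)).days or 1

deploy_frequency = len(deploys) / window_days                       # deploys per day
lead_times = [d["shipped"] - d["oldest_commit"] for d in deploys]   # commit-to-production time
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

print(f"Deployment frequency: {deploy_frequency:.2f} per day")
print(f"Median lead time for changes: {median(lead_times)}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

(The fourth measure, time to restore service, works the same way but needs incident start/end timestamps rather than deploy records.)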

1 Like

Problem?
First decide if you even have a problem. You're saying that your teams are having conversations, which most software houses would love to hear. Conversation can lead to communication, better understanding, fewer silos, reduced cycle time, good times ahead. What's the actual problem you're trying to solve and *for whom* is it a problem?

This is extremely important. Don't make work for yourself and others that nobody needs to do. If it's just because someone in a tie is bitching about it, that's a whole different issue.

Strategy & Cost
Don't write anything down that you don't really need, because you will create a bunch of costs: writing it, reading it, updating it, enforcing it, testing it. What it feels like you're trying to do here is preemptively decide on what testers should be looking at - therefore you're building a test strategy and trying to communicate it to your testers.

If you keep this high-level you will create less cost, e.g. if you tell testers to do claims testing they should be able to do that. If you tell them exactly what adverts you have up and that they need to check the product does the things in each advert, then you have to supply a list of all the adverts AND you haven't accounted for any new claims you've made, or you'll forget to say to look at the website. A good way to mix this would be a charter like "Do claims testing, ensuring you look at current adverts and our website" - this keeps itself up to date much better, and allows for exploration that your testers can do. It takes the work off you and makes testing work more engaging.

Formal Requirements
I'd encourage you to fall out of love with formal requirements. Requirements are a complex web of tacit and explicit knowledge, artefacts and individual desires. If you have to use formal requirements, you can treat them as a checklist to ensure you've covered your arse, and try to make a quality product through understanding, communication and design.

If your teams come to some problem like "I can't save by hitting enter" then that expectation comes from somewhere. Sometimes it's a written requirement ("Must be able to save by hitting enter"), sometimes it's a written test ("Test that you can save by hitting enter"), and sometimes it's from similar products, continuity in the OS, previous experience, current experience in using the product, whatever. You cannot and will not write down all the requirements, obviously, so you need to account for what your testers believe is a problem. If it's a problem to someone who matters (e.g. your client) then it's a threat to the value of your product. How you deal with the bureaucracies is up to how you work, but there's probably a business person your teams can go to to determine how the design of the product should be. It has to be someone's role to interface with both the product and the client so the client will get the product they want. This will all be part of a wider discussion: will it take more coding time, is there some reason it's been designed this way, should it match the predicted expectations of computer users (e.g. Ctrl-Z usually means "undo"), and so on.

Consistency
When it comes to consistency across a product, I'd say that's more often a product of design or redesign and its communication to your teams. You cannot predict every conversation about a product - a project is a chaotic, churning beast that cannot be tied down with requirements documentation, nor with a conversation about what a bug is or is not. Start high level, up in the clouds where it's cheaper. Perhaps your development teams can take on some of the obvious decisions - just because there's nothing in the requirements document about capitalising countries, there's no need to insult the people of the United Arab Emirates with small letters; you don't have that conversation, you just get on and fix it.

Maybe your business people and your development people are sitting too far apart trying to communicate with opaque and limited documents rather than having a short conversation with one person who both holds the vision for the finished product and understands the concept of profit.

Perhaps you need to involve your testers at the design phase so they might ask these questions when it's cheaper - before it's written. Perhaps your testers need a clearer understanding of what's important to the client, or what's important to your business. Perhaps your testers and coders need to talk more, earlier on, so that problems get solved as the code is written.

Perhaps your design decisions, such as supported characters, need formalising, but maybe they don't - and there may be exceptions. You may need to suddenly support or disallow a character based on use case. Perhaps what characters you support is determined by third-party software, like a plugin or the browser or OS you run the product on. And maybe it really doesn't matter what characters you support, just that each time you need to use characters you can always achieve what the product needs to do - so you're then better off having your testers test with the minimum necessary characters to do a job. Numbers, + and - might be enough for phone numbers, but you need more for an address. Then you have the problem of what happens when you use an unsupported character, so you might want to test for that anyway. So a formal document is going to be hard to write, difficult to use in practice, and out of date at some point.
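For what it's worth, that "minimum necessary characters per field" idea can live as a small table-driven check next to the tests rather than in a formal document. A rough sketch - the field names and allowed sets below are made up, not a recommendation:

```python
# Sketch: the minimum character set each field needs, plus a check that
# anything outside the set is surfaced rather than silently accepted.
ALLOWED = {
    "phone":   set("0123456789+- "),
    "address": set("abcdefghijklmnopqrstuvwxyz"
                   "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                   "0123456789 ,.-/'"),
}

def unsupported(field: str, value: str) -> list[str]:
    """Return the characters in `value` that the field does not support."""
    return sorted(set(value) - ALLOWED[field])

# Happy path: only supported characters.
assert unsupported("phone", "+44 1234-567890") == []
# Unsupported-character path: the product still needs a defined behaviour here.
assert unsupported("phone", "+44 (0)1234 567890") == ["(", ")"]
```

A table like that is cheap to change when an exception turns up, which is most of the battle with this kind of rule.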

I suppose Iā€™m saying if you want consistency then design with consistency, then communicate that design to the make-stuff teams.

It is hard
It's a really complicated problem to do with communication, understanding of the product, project and client, paperwork, how much you trust your teams, how important specificity is to your client (e.g. a bank will have more to say about how you do things than a hot dog shop), and lots of other context-specific things, which is why it's so hard to answer this, but I hope I gave you something that might help.

2 Likes

Hello,
Maybe you need to read ISO 9126 (replaced by https://www.iso.org/standard/35733.html), which is the ISO standard for product quality.