Developing Quality Standards

Hello Testers!

I recently took over quality management for a mature application with a small team of automated and manual testers and am looking to develop something like Quality Standards for our company as a whole as well as for our team. We don’t have any documentation surrounding quality right now and I’m feeling a little stymied on where to start.

I ran brainstorming sessions with various teams throughout the company and came away with a wide range of items viewed as contributing different amounts to Quality - from broad items like ‘delivering what is advertised’ to nitty-gritty things like ‘consistency in what characters we support.’

Do your companies have documentation for something like this?
Is it at the team level, department level, company level?
How is it organized?
I’d love any feedback, examples, or resources you may have!


Welcome, Wren.

Whenever you try to measure quality, all you end up doing is working out how much diesel is still in the tank. And that can change at any time, especially if people decide diesel is bad. So assertions about which characters, languages, fonts, OS versions, hardware vendors, or network types we like to say we support can be counterproductive. I prefer to measure quality in terms of outputs: things like how many customer-reported unique defects are coming in (CRUDs for short), how agile your teams are (how often a team is able to release), and even how good inter-team communication is (how often teams do feature demos).
There are more nitty-gritty ways of measuring quality, but over the longer term these three measures are very hard to game.
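A minimal sketch of how those three output measures could be tracked, assuming a hypothetical event log exported from an issue tracker and release pipeline (the field names and dates below are made up for illustration):

```python
from datetime import date

# Hypothetical event log: (date, kind) tuples. Real data would come
# from your issue tracker and deploy pipeline; these are invented.
events = [
    (date(2024, 1, 5), "release"),
    (date(2024, 1, 9), "customer_defect"),
    (date(2024, 1, 19), "release"),
    (date(2024, 1, 20), "demo"),
    (date(2024, 2, 2), "customer_defect"),
    (date(2024, 2, 2), "release"),
]

def monthly_counts(events, kind):
    """Count events of one kind per (year, month) bucket."""
    counts = {}
    for day, k in events:
        if k == kind:
            key = (day.year, day.month)
            counts[key] = counts.get(key, 0) + 1
    return counts

print(monthly_counts(events, "release"))          # releases per month
print(monthly_counts(events, "customer_defect"))  # CRUDs per month
```

The same bucketing works for demo frequency by counting the "demo" events; trending all three side by side per month is usually more telling than any single number.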


In a previous role, the organisation’s primary focus was on data quality, as we were collecting data from a variety of utility companies to inform regulatory decisions. The initial focus was therefore on data collection and how that data was validated - what sort of collection systems companies had, how robust they were, and how much confidence we could attach to any given number. Defining what information we needed to collect, how it should be collected, and then collated for onward transmission to us was what I spent the first eighteen months in “Quality Assurance” doing, liaising with consulting engineers on questions of data collection in the Real World.

It was only later, as the project progressed, that I got involved with software testing (after all, the first version of the data collection tool was a series of paper forms; completing the forms via one of these new “spreadsheets” was an option for the more technically-adventurous companies we dealt with). The objective of nearly 90% of my testing was to be able to demonstrate to our senior decision-makers that our systems were robust and delivered consistent and accurate numbers that they could rely upon in a high-profile political environment. Testing for functionality was very much an afterthought.


Thanks! I’m less interested in measuring the success of our quality team’s work - we’re actually in a pretty great place there with functional testing. One of the issues we’ve encountered in the dev cycle, though, is confusion about what actually counts as a bug. For instance, say the QA rep files a bug because a form can’t be saved by hitting ‘enter’ on the keyboard, or because the telemetry recorded on that ‘save’ is confusing. If that particular item isn’t in the requirements, it requires a conversation with the dev and PO - and if it comes up on two different stories, it might be decided two different ways, and then we have inconsistencies across the app. I’d like to provide some agreed-upon standards at that nitty-gritty level to reduce those inconsistencies and the conversations around them. I’ll need buy-in from product and dev on whatever I develop, but I’d like to get something started. Does your company have anything like that? Is it called something different?


Nope, it’s a common flaw. I sort of detected that you might be suffering the same communication (insert Greek word here) that I do. Someone with some clout points out an app inconsistency which was not detected by QA, because the inconsistency they pointed out is consistently inconsistent. At the end of the day this impacts time to release. The devs change the size of the button and move it to a different page; then the regression testing starts again, holding the whole team back. Trying to fix the text/font/size/color on a button is very often a symptom of a completely different problem, one that can only be solved by focusing on a “goal”, such as time between releases.

I’ve never worked anywhere where product owners are prepared to help QA out by getting marketing to give us this level of detail, and I don’t believe it’s achievable on paper alone. (That’s not entirely true: if the company can agree on a few shared design style books to work from, that can help a lot to create common understanding.) Everyone thinks that cross-platform and responsive apps are easy to build and test; they just aren’t. I believe that regular demos, with some accompanying signing off on designs, are the only way to get better at this problem in a measurable way - docs and books are just tools.


Hi @wrenarf,

Have you looked into the DORA metrics from Nicole Forsgren’s research? I would go that way if a team of mine was looking into this question.


First decide if you even have a problem. You’re saying that your teams are having conversations, which most software houses would love to hear. Conversation can lead to communication, better understanding, fewer silos, reduced cycle time, good times ahead. What’s the actual problem you’re trying to solve, and *for whom* is it a problem?

This is extremely important. Don’t make work for yourself and others that nobody needs done. If it’s just because someone in a tie is bitching about it, that’s a whole different issue.

Strategy & Cost
Don’t write anything down that you don’t really need, because you will create a bunch of costs: writing it, reading it, updating it, enforcing it, testing it. It feels like what you’re trying to do here is preemptively decide what testers should be looking at - in other words, you’re building a test strategy and trying to communicate it to your testers.

If you keep this high-level you will create less cost: e.g. if you tell testers to do claims testing, they should be able to do that. If you tell them exactly what adverts you have up and that they need to check the product does the things in each advert, then you have to supply a list of all the adverts AND you haven’t accounted for any new claims you’ve made, or you’ll forget to say to look at the website. A good middle ground would be a charter like “Do claims testing, ensuring you look at current adverts and our website” - this keeps itself up to date much better and allows for exploration by your testers. It takes the work off you and makes the testing more engaging.

Formal Requirements
I’d encourage you to fall out of love with formal requirements. Requirements are a complex web of tacit and explicit knowledge, artefacts and individual desires. If you have to use formal requirements, treat them as a checklist to ensure you’ve covered your arse, while trying to make a quality product through understanding, communication and design.

If your teams come to some problem like “I can’t save by hitting enter”, then that expectation comes from somewhere. Sometimes it’s a written requirement (“Must be able to save by hitting enter”), sometimes it’s a written test (“Test that you can save by hitting enter”), and sometimes it’s from similar products, continuity with the OS, previous experience, current experience of using the product, whatever. You cannot and will not write down all the requirements, obviously, so you need to account for what your testers believe is a problem. If it’s a problem to someone who matters (e.g. your client) then it’s a threat to the value of your product. How you deal with the bureaucracies depends on how you work, but there’s probably a business person your teams can go to to determine how the product should be designed. It has to be someone’s role to interface with both the product and the client, so the client gets the product they want. This will all be part of a wider discussion: will it take more coding time, is there some reason it’s been designed this way, should it match the predicted expectations of computer users (e.g. Ctrl-Z usually means “undo”), and so on.

When it comes to consistency across a product, I’d say that’s more often a product of design or redesign and its communication to your teams. You cannot predict every conversation about a product - a project is a chaotic, churning beast that cannot be tied down with requirements documentation, nor with a conversation about what a bug is or is not. Start high level, up in the clouds where it’s cheaper. Perhaps your development teams can take on some of the obvious decisions - just because there’s nothing in the requirements document about capitalising countries, there’s no need to insult the people of the United Arab Emirates with small letters. You don’t have that conversation; you just get on and fix it.

Maybe your business people and your development people are sitting too far apart trying to communicate with opaque and limited documents rather than having a short conversation with one person who both holds the vision for the finished product and understands the concept of profit.

Perhaps you need to involve your testers at the design phase so they can ask these questions when it’s cheaper - before the code is written. Perhaps your testers need a clearer understanding of what’s important to the client, or what’s important to your business. Perhaps your testers and coders need to talk more, earlier on, so that problems get solved as the code is written.

Perhaps your design decisions, such as supported characters, need formalising but maybe they don’t - and there may be exceptions. You may need to suddenly support or not allow a character based on use case. Perhaps what characters you support are determined by third-party software like a plugin or browser or OS you run the product on. And maybe it really doesn’t matter what characters you support, just that each time you need to use characters you can always achieve what the product needs to do - so you’re then better off having your testers look at testing with the minimum necessary characters to do a job. Numbers, + and - might be enough for phone numbers, but you need more for an address. Then you have the problem of what happens when you use an unsupported character, so you might want to test for that anyway. So a formal document is going to be hard to write, difficult to use in practice and become out of date at some point.
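If you do end up formalising character policies, keeping them as data next to automated checks (rather than as prose in a document) at least stops them going stale silently. A minimal sketch, with entirely hypothetical per-field rules:

```python
import re

# Hypothetical per-field character policies; real rules would come
# from your product's design decisions and locale requirements.
FIELD_PATTERNS = {
    "phone": re.compile(r"[0-9+\- ()]+"),
    "postcode": re.compile(r"[A-Za-z0-9 \-]+"),
}

def allowed(field, value):
    """Check a value against the character policy for its field."""
    pattern = FIELD_PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

print(allowed("phone", "+44 (0)20 7946-0000"))  # True
print(allowed("phone", "call me"))              # False: letters rejected
print(allowed("postcode", "EC1A 1BB"))          # True
```

A field with no entry in the table is rejected outright, which makes a missing policy visible as a failing check rather than an unspoken assumption - and the table itself becomes the document you’d otherwise have to keep up to date by hand.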

I suppose I’m saying if you want consistency then design with consistency, then communicate that design to the make-stuff teams.

It is hard
It’s a really complicated problem to do with communication, understanding of the product, project and client, paperwork, how much you trust your teams, how important specificity is to your client (e.g. a bank will have more to say about how you do things than a hot dog shop), and lots of other context-specific things, which is why it’s so hard to answer this, but I hope I gave you something that might help.


Maybe you need to read ISO 9126 (since replaced by ISO/IEC 25010), the ISO product quality standard.