What is quality and how do we know when we have achieved it?

As part of my drive to make people more mindful of quality, I wrote an article to spark debate. I have reworded it for public consumption, and thought people might wish to read it (and debate it).

What is quality and how do we know when we have achieved it?

Quality is, in essence, abstract. It is the measuring of something against things that are similar. Therefore it is difficult to say something is quality, unless there is something to compare it against. We know a footballer is a quality footballer because he compares favourably to others in his position. Quality tools last longer than their cheaper counterparts.

How, therefore, do we judge what constitutes quality software? More importantly, how do we determine the direction in which quality is heading: is what we produce increasing or decreasing in quality? Quality can, and usually does, mean entirely different things to different people; at the very least, the importance placed on the things identified as affecting the quality of a piece of software will vary.

One way of doing this would be to try to encapsulate what it is the software you design is for. For example, if you provide finance calculators, then your software's purpose is to provide accurate financial calculations. With this in mind, anything that makes this easier is a quality additive, whereas anything that makes it more difficult is a quality reductive. There are, in the basest form, three sets of people involved in this process who will judge the product on its quality: the customer, the client, and the software house, each of whom will have a different set of priorities.

The customer, whilst they may not actively analyse the elements that come together to form a quality experience, will feel them. One of the prime, and most obvious, examples of this is page load time and accessibility. I have little doubt that, at times, we have all attempted to access a page and given up because it wasn't usable immediately. Once it has loaded, we expect ease of use. This is more ambiguous. As people working in the software industry, however, this is both more obvious (as a QA, I find it difficult not to judge any websites that I visit) and easier to dismiss: we are aware of how sites generally work even when they're not intuitive.

To determine whether a site meets the quality needs of the customer, we must think and behave as the customer would. Would you enjoy using the site? Is everything that you would want, or expect, there? It's difficult to quantify this 'feeling'. The nearest metric we have available, I feel, is page use statistics. If only minimal time is spent on a page prior to exit, with no progress, it could indicate that the user is put off by the experience. If a long time is spent on a page with no progress, it could be a sign that it is difficult to use. If a page that is considered to be 'the goal' (for example, an order confirmation page) is reached within an optimum time, that could be argued to demonstrate a good quality user experience. In considering the quality of a page, we should never become disconnected from the user and their experience.
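The dwell-time signals above could be turned into a rough classifier. This is a minimal sketch, not a real analytics pipeline; the threshold values and category names are illustrative assumptions, not established benchmarks.

```python
# Hypothetical sketch: classify a single page visit by dwell time and
# whether the user progressed, as a rough proxy for the 'feeling' above.
# The 5-second and 120-second thresholds are made-up example values.

def classify_visit(seconds_on_page: float, progressed: bool) -> str:
    """Return a rough quality signal for one page visit."""
    if progressed:
        return "goal-reached"      # user moved on: likely a good experience
    if seconds_on_page < 5:
        return "bounced"           # gave up almost immediately
    if seconds_on_page > 120:
        return "possibly-stuck"    # long dwell with no progress
    return "inconclusive"

visits = [(2.0, False), (300.0, False), (40.0, True)]
signals = [classify_visit(t, p) for t, p in visits]
```

In practice the thresholds would come from your own traffic data, and "progress" would be defined per funnel, but the shape of the measurement is this simple.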

Accuracy and speed of calculations are much more quantifiable, visible, measures of quality or absence thereof. If the total of an order should be £30, then the total produced by the page should be £30. The amount of time taken to perform the calculation should be negligible, unnoticeable, even. Calculation speed and page load time are both things that can be gamified - the time it takes to perform these actions can be timed. Reducing that time is a victory that can be seen.
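Both checks described above can be expressed directly in a test. This is a sketch only; the order lines and the 0.01-second budget are assumptions invented for the example, and `Decimal` is used because binary floats are a poor fit for currency.

```python
# Sketch of the two measurable checks above: accuracy (the order total
# must be exactly £30) and speed (the calculation should be negligible).
import time
from decimal import Decimal

def order_total(lines):
    """Sum (unit_price, quantity) pairs using Decimal to avoid float error."""
    return sum(Decimal(price) * qty for price, qty in lines)

lines = [("9.99", 2), ("10.02", 1)]   # hypothetical order lines

start = time.perf_counter()
total = order_total(lines)
elapsed = time.perf_counter() - start

assert total == Decimal("30.00")      # accuracy: the total should be £30
assert elapsed < 0.01                 # speed: effectively unnoticeable
```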
Throughput is another marker of quality that is, to a greater or lesser degree, possible to quantify. High throughput can suggest high quality in one or more links in the development chain. The quality and clarity of the initial specification, the quality of the code the developers are working with, the quality of the code they produce, and an uninterrupted pipeline will all work together to provide greater throughput. Conversely, an absence of quality in any of these will slow the whole process down and reduce throughput. The ability to work quickly and efficiently with a steady throughput will also affect overall quality: an impaired throughput can lead to development crunches, which have a negative effect on quality because the team will rush to produce the work while under stress. Ideally, measuring throughput requires items of a similar, or comparable, size coupled with accurate estimation of time; estimates will also become more accurate as they are used. If throughput drops, and estimates are accurate and items of work comparable, then it will be easier to determine where a lack of quality has caused this, and where improvements can be made.
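The comparison of actual cycle time against estimates for comparably sized items could be sketched like this. All item names, numbers, and the 1.5× tolerance are hypothetical values for illustration, not a recommended policy.

```python
# Illustrative sketch: flag work items whose actual time blew past the
# estimate, as a starting point for asking where throughput was lost.
items = [
    {"name": "item-a", "estimate_days": 2, "actual_days": 2},
    {"name": "item-b", "estimate_days": 2, "actual_days": 5},
    {"name": "item-c", "estimate_days": 3, "actual_days": 3},
]

def slow_items(work_items, tolerance=1.5):
    """Return names of items whose actual time exceeded estimate * tolerance."""
    return [i["name"] for i in work_items
            if i["actual_days"] > i["estimate_days"] * tolerance]

flagged = slow_items(items)   # only item-b exceeded its estimate by 1.5x
```

The interesting conversation starts after the flagging: was the spec unclear, was the surrounding code poor, or was the pipeline interrupted?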

A measure of the absence, rather than the existence, of quality is the defect. These horrid things rear their heads in a number of ways. The best way is the squad finding them; less ideal is a client finding them; the worst is multiple, or even all, clients finding them. They can be, and are, measured: support tickets logged as defects, problems identified in the CI pipeline, and post-mortems are all examples. As well as being a visible, detectable and quantifiable example of a lack of quality, they are theoretically the easiest to resolve.

Finally, an overall picture of quality can be found through dialogue. Everyone will have an idea of the quality of what we produce. Support, sales, account managers and product owners will have conversations with clients or prospective clients. Design and user experience departments have a good understanding of what makes a quality user experience, and of what drives users in the direction the client wants. Developers know what constitutes quality code. Clients and customers will also have an idea of what constitutes quality (and their definition is one to which attention should always be paid) and of whether we are achieving it, although, for various reasons, soliciting their opinions is not always easy.

In conclusion, although an absence of defects is one good indicator of quality, and one towards which software houses drive, it is not the only one. Positive and negative effects on quality are not always quantifiable, but their influence can be felt, either internally or externally.


Long and interesting read.

I had some trouble seeing the utility in it. Yes, quality can be discussed in a comparative fashion, but it can also be quantified in a discrete manner in many cases. Number of defects is not one of them, though. To spin off your footballer example, we can describe a footballer along several dimensions: accuracy in passes (how many out of a hundred hit the target), sprint speed (average time to run 100 metres), ability to jump, wing span, speed of shots. Some of these dimensions you can find in the game FIFA. You will also soon figure out that different positions in the game require you to be good in different dimensions. You want your goalie to have a wide wing span, but you might not care as much about their sprint speed. The same is true for software, but the variety is far greater. The dimensions in which your product needs to be strong, and those in which it can be weaker, depend a lot on your context.

So what you can do is try to define the quality criteria for your product and your business. Is it response time, average number of clicks to achieve a function, accuracy of calculations, number of concurrent users, etc.? Then you can set goals for where you want your software to be. What is high quality for your product?

Your idea that there is an intangible element at play here is also important to keep in mind. I would recommend not being too rigid with these dimensions. You can say that the page should load in less than 1 second for 99.99% of users, but to call it a defect because it is 99.98% is seldom useful. This is mainly because these are not linear dimensions: going from 50% to 51% is not the same as going from 98% to 99%, and the cost of the latter is typically far higher than the former. In my experience, if you talk about quality as a whole, it will be more a matter of opinions and differing subjective elements.
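The percentile point above can be made concrete: measure what fraction of requests meet the budget, rather than treating a single near-miss as a defect. The sample load times and the 1-second budget here are invented for the example.

```python
# Sketch: what fraction of requests loaded within the time budget?
# Tracking this fraction over releases is more useful than a hard
# pass/fail line at one fixed percentile.

def fraction_within(load_times, budget_seconds=1.0):
    """Fraction of requests that completed within the budget."""
    within = sum(1 for t in load_times if t <= budget_seconds)
    return within / len(load_times)

samples = [0.4, 0.8, 0.95, 1.2, 0.6, 0.7, 0.5, 0.9, 0.85, 0.3]
rate = fraction_within(samples)   # 9 of these 10 samples are within 1 second
```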

Compare these two questions:
Do you want us to make a higher quality product?
Do you want us to decrease the load time from 1 second to 0.5 seconds?
For me the latter is typically way more useful than the former.

Thanks for introducing the topic!


Thank you for taking the time to read and comment on what is a lengthy post. I have to say, I'm fortunate in getting to watch one of the greatest passers in modern football on a regular basis, being a Wolves season ticket holder… You're right when you talk about the escalating cost of improvement in page load; I think many good devs are of a mindset where speeding things up becomes a sport to them. In defining the quality criteria for the company I work for, I have used the company motto: "Intelligently connecting people to their next car". If what you're doing makes it easier for that to happen, then that's a step up the quality ladder.


I don’t quite agree. There are good ideas in your post.

I don’t think you can quantify quality. I also don’t agree that people know ‘the quality they produce’. I think most people have a fuzzy, feel-good idea about quality.

I also think it’s much easier to find the lack of quality (if you want). Having said that, I think most people are in denial. They also don’t know what to do with this information. To talk about quality, you need to give a personal example of what it means to miss quality.

Here is an example:

Google Pay won’t work on my mobile. That’s because I moved countries. I spoke to Google support and they asked me to create a new account. They even suggested I create a new Gmail.

Suppose I were a developer on this application. What I missed, or what was non-intuitive, is thinking about the scenario of someone moving to a different country. I also suspect it would be difficult to test, and as a tester it would be very difficult for me to bring this up as a concern. This is especially important for a company like Google, because of its global reach.

I had written an article which explains this idea a few years back: https://medium.com/@revelutions/how-to-start-a-software-testing-debate-b6a1d657cea0

I also think customer support shields engineering teams from understanding the shortcomings in their products and services. In addition, customers compensate for shortcomings: if the app crashes, they reboot the phone.

I had also written about why customers don’t complain:

Note: Not trying to push my links.

Damned by faint praise.

I do think people have an idea of whether they are creating something of high or low quality, even if they try to convince themselves it is of a higher quality than it is (or lower, if they are a natural pessimist). The main reason I wrote this article was to put quality at the forefront of people’s minds within the company, even if it’s to say "Christian, I think you’re wrong about x". I believe even a discussion of quality can help in the process of instilling quality.

I sent a survey to everyone in the business asking them to score the quality of what we do. Just a simple 1-10 thing. I was expecting the average score from the devs to be higher than that of the support team; I was surprised that it was the reverse. I agree that a lack of quality is often easier to spot, and that is how many (incorrectly) see our role: as people who look for problems. We should, in my opinion, be looking for potential problems and improvements. However, I think that by commending quality, you can create an atmosphere where people strive for it. After all, that feeling of being praised can be addictive.

How much quality does your business want to pay for?


Are you putting forward the idea that ‘we know we have achieved it’ is generally determined by the business (for example, the push for quality ends when a company decides there has to be a release, or resources are no longer available)?

If you make products your customers don’t want are they quality products?


If you make products your customers like or want, are they quality products? It’s a slippery slope. I don’t agree with the agile/lean-startup concept of customer feedback above all else. Customers aren’t always right, but they always have a point.

For engineers, it’s better not to get into these discussions; there is always so much to do, learn, and improve in testing. There is always so much that we aren’t aware of in terms of how customers are affected.