As part of my drive to make people more mindful of quality, I wrote an article to spark debate. I have reworded it for public consumption, and thought people might wish to read it (and debate it).
What is quality and how do we know when we have achieved it?
Quality is, in essence, relative: it is measured by comparing something against things that are similar. It is therefore difficult to say something is quality unless there is something to compare it against. We know a footballer is a quality footballer because he compares favourably with others in his position. Quality tools last longer than their cheaper counterparts.
How, therefore, do we judge what constitutes quality software? More importantly, how do we determine the direction in which quality is heading - is what we produce increasing or decreasing in quality? To different people it can, and usually does, mean entirely different things; at the very least, the weight given to the things identified as affecting the quality of a piece of software will vary.
One way of doing this would be to try to encapsulate what the software you design is for. For example, if you provide finance calculators, then your software's purpose is to provide accurate financial calculations. With this in mind, anything that makes this easier is a quality additive, whereas anything that makes it more difficult is a quality reductive. There are, in the basest form, three sets of people involved in this process who will judge the product on its quality - the customer, the client, and the software house - each of whom will have a different set of priorities.
The customer, whilst they may not actively analyse the elements that come together to form a quality experience, will feel them. One of the prime, and most obvious, examples of this is page load times and accessibility. I have little doubt that, at times, we have all attempted to access a page and given up because it wasn't useable immediately. Once it has loaded, we expect ease of use. This is more ambiguous. For those of us working in the software industry, however, it is both more obvious (as a QA, I find it difficult not to judge any websites that I visit) and easier to dismiss - we are aware of how sites generally work even when they're not intuitive. To determine whether a site meets the quality needs of the customer we must think and behave as the customer would. Would you enjoy using the site? Is everything that you would want, or expect, there? It's difficult to quantify this 'feeling'. The nearest metric we have available, I feel, is page use statistics. If only a minimal time is spent on a page prior to exit, with no progress, it could indicate that the user is turned off by the experience. If a long time is spent on a page with no progress, it could be a sign that the page is difficult to use. If a page that is considered to be 'the goal' (for example an order confirmation page) is reached within an optimum time, that could be argued to demonstrate a good quality user experience. In considering the quality of a page, we should never become disconnected from the user and their experience.
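The page-use heuristics above can be sketched in a few lines. This is a minimal illustration, not a real analytics pipeline: the thresholds (5 seconds, 3 minutes, the 2-minute "optimum time" to the goal page) and the session fields are invented assumptions for the sake of the example.

```python
def classify_session(seconds_on_page, reached_goal, made_progress):
    """Rough quality signal for a single page visit.

    All thresholds here are hypothetical - a real system would tune
    them per page from observed statistics.
    """
    if reached_goal and seconds_on_page <= 120:
        return "good experience"       # goal page reached in optimum time
    if not made_progress and seconds_on_page < 5:
        return "possible turn-off"     # bounced almost immediately
    if not made_progress and seconds_on_page > 180:
        return "possibly hard to use"  # lingered with no progress
    return "inconclusive"
```

For example, a visitor who leaves after 3 seconds with no progress would be flagged as a possible turn-off, while one who reaches the order confirmation page within a minute would count towards a good experience.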
Accuracy and speed of calculations are much more quantifiable, visible measures of quality, or the absence thereof. If the total of an order should be £30, then the total produced by the page should be £30. The amount of time taken to perform the calculation should be negligible - unnoticeable, even. Calculation speed and page load time can both be gamified: the time taken to perform these actions can be measured, and reducing that time is a victory that can be seen.
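The £30 point is worth making concrete, because it is easy to get wrong: binary floating-point arithmetic can drift on monetary values, whereas exact decimal arithmetic keeps a £30 order at exactly £30. A small sketch (the item prices are invented for illustration):

```python
from decimal import Decimal

# The classic floating-point surprise: the stored binary values are
# only approximations of the decimal amounts.
assert 0.1 + 0.2 != 0.3

# Exact decimal arithmetic, as typically used for money.
items = [Decimal("9.99"), Decimal("10.01"), Decimal("10.00")]
total = sum(items)
assert total == Decimal("30.00")  # exactly £30, as the page should show
```

The design choice here - representing money as decimals (or integer pence) rather than floats - is one of those small, invisible decisions that shows up directly in the customer-facing measure of quality.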
Throughput is another marker of quality that is, to a greater or lesser degree, possible to quantify. High throughput can suggest high quality in one or more links in the development chain. The quality and clarity of the initial specification, the quality of the code that developers work with, the quality of the code that developers produce, and an uninterrupted pipeline all work together to provide greater throughput. Conversely, an absence of quality in any of these will slow the whole process down and reduce throughput. The ability to work quickly and efficiently with a steady throughput also affects overall quality: an impaired throughput can lead to development crunches, which have a negative effect on quality as the team rushes to produce the work while under stress. Ideally, measuring throughput requires items of work that are of a similar, or comparable, size, coupled with accurate estimation of time; estimates will also become more accurate as they are used. If throughput drops while estimates are accurate and items of work comparable, it will be easier to determine where a lack of quality has caused this, and where improvements can be made.
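The two quantities this paragraph leans on can be written down directly. This is a sketch with invented numbers, not a prescription for how to run a team: throughput as comparable items finished per week, and estimate accuracy as the ratio of estimated to actual effort.

```python
def throughput(completed_items, weeks):
    """Comparable-sized work items finished per week."""
    return completed_items / weeks

def estimate_accuracy(estimated_days, actual_days):
    """Ratio of estimate to actual effort; 1.0 means spot-on,
    below 1.0 means the work ran over its estimate."""
    return estimated_days / actual_days

# Hypothetical figures: 12 comparable items over 4 weeks,
# a 10-day estimate that took 12.5 days.
print(throughput(12, 4))            # 3.0 items per week
print(estimate_accuracy(10, 12.5))  # 0.8 - the work overran
```

The point of tracking both together is the one the paragraph makes: if accuracy stays near 1.0 but throughput falls, the slowdown is more likely to sit in the quality of the inputs (specification, existing code, pipeline) than in the estimates.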
A measure of the absence, rather than the existence, of quality is the defect. These horrid things rear their heads in a number of ways. The best case is the squad finding them; less ideal is a client finding them; the worst is multiple, or even all, clients finding them. They can be, and are, measured: support tickets logged as defects, problems identified in the CI pipeline, and post-mortems are all examples. As well as being a visible, detectable and quantifiable example of a lack of quality, defects are theoretically the easiest to resolve.
Finally, an overall picture of quality can be found through dialogue. Everyone will have an idea of the quality of what we produce. Support, sales, account managers and product owners have conversations with clients and prospective clients. Design and user experience departments have a good understanding of what makes a quality user experience, and of what drives users in the direction the client wants. Developers know what constitutes quality code. Clients and customers will also have an idea of what constitutes quality (and their definition is one to which attention should always be paid) and of whether we are achieving it, although, for various reasons, soliciting their opinions is not always easy.
In conclusion, although an absence of defects is one good indicator of quality, and one towards which software houses drive, it is not the only one. Positive and negative effects on quality are not always quantifiable, but their influence can be felt, either internally or externally.