Here are some thoughts off the top of my head, not in any particular order:
- To have meaningful discussions, stakeholders need to agree on what “quality” means to them. You can be fairly sure there is currently no shared understanding, and if there is one, it isn’t the same as yours (or as yours ought to be). They probably interpret quality as meaning the number of known bugs, or something similar.
I like the “Quality is value to some person who matters” definition from Kaner, Bach and others. This is broad enough to include security, performance, load, stress, usability, accessibility, compatibility, interoperability, maintainability (e.g. code comments, limits on nesting depth, variable naming, indenting etc.), resilience, benchmarking against competitors and many other factors.
But some stakeholders may measure quality in terms of profit (often the organisation’s ultimate goal), subscriber numbers, complaints, net approval rating etc.
- There is no such thing as “complete” testing. For any non-trivial application, the number of possible tests is effectively infinite. Other terms such as “exhaustive testing” are equally meaningless.
- There will always be tests you don’t have time to do.
- Not every test is worth doing.
- Most bugs cannot be found by testing against the documented requirements. To achieve good quality, exploration and investigation must be part, perhaps most, of your testing.
- Given an infinite task, the tester’s responsibility is to make best use of the available time. Verifying the documented requirements in the order they are written is not likely to be the best use of the tester’s time.
- The inverse relationship between speed and quality is obvious, assuming all other factors are fixed. To increase both, you need to vary those other factors, such as employing better people, omitting low-value features and eliminating waste such as unnecessary documentation.
- Quality cannot be reduced to numbers. There are no valid software testing metrics.
- The amount of testing you do is always a balance between risk and cost. There is no “right” amount of testing.
- Anyone can do bad testing, but good testing is really difficult and most people can’t do it. Get the best (and probably the most expensive) testers you can. Anyone can do bad development or project management, but you wouldn’t want them on your project, so why do you think testers are any different?
- You can release the software whenever you want, even if no testing has been done or known bugs have not been fixed. The risk will be higher, but it might still be the right decision. Social media platforms are an example - releasing new features rapidly is far more important than quality.
I was working for one of the world’s best known department stores when it became very clear the new e-commerce website would not be ready for the Christmas period. The chairman informed the E-commerce Director he would be fired if the website didn’t launch on 01 October, so it did. It was horrendously buggy, including VAT and shipping cost errors, and there were loads of complaints, but the expected sales revenue was achieved. It was the right decision in a bad situation.
- You will launch with bugs. Some you may know about, the rest you won’t.
- TDD aside, it’s not usually worth automating checks until the software is stable (see the sketch after this list for what I mean by an automated check). Don’t waste time automating the testing of features in the same sprint in which they were created, because you will need to keep fixing them and may even need to remove them. This will cause howls of rage from agile zealots who insist everything must be done within a sprint, which is stupid dogma.
- Everything is context-dependent. A practice that improves quality in one context may be a really bad idea in other contexts. For instance, Facebook used to boast that they didn’t do testing (who would have guessed?), but what they didn’t say is that time to market trumps everything else for them.
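To illustrate what I mean by an automated check in the automation point above, here is a minimal sketch. It assumes a Python/pytest stack, and the pricing function and 20% VAT rate are made up purely for illustration; the point is simply that a check like this pins down behaviour, so it only pays for itself once that behaviour has stopped changing from sprint to sprint.

```python
# Minimal illustration of an automated check (assumes pytest is installed).
# The price_with_vat function and the 20% VAT rate are hypothetical.
from decimal import Decimal

import pytest


def price_with_vat(net: Decimal, vat_rate: Decimal = Decimal("0.20")) -> Decimal:
    """Return the gross price: net amount plus VAT, rounded to two decimal places."""
    return (net * (1 + vat_rate)).quantize(Decimal("0.01"))


@pytest.mark.parametrize(
    "net, expected",
    [
        (Decimal("10.00"), Decimal("12.00")),
        (Decimal("0.99"), Decimal("1.19")),
        (Decimal("0.00"), Decimal("0.00")),
    ],
)
def test_price_includes_vat(net, expected):
    # Each run re-verifies behaviour we expect to stay stable between releases.
    assert price_with_vat(net) == expected
```

Run it with `pytest` and it re-checks the same expectations on every build; write it too early and you will be rewriting it every time the feature changes.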
For further ideas, I recommend reading Lessons Learned in Software Testing by Pettichord, Kaner and Bach. Everyone ought to read it because it’s valuable beyond the context of this thread.
As an aside, the list of factors in the first list item above illustrates why I am opposed to the current fad for testers calling themselves “quality engineers”. There are many aspects of quality that testers are entirely unaware of or have minimal knowledge of. Many aspects are beyond the tester’s responsibility or influence. Testers also lack credibility with regard to some aspects - can you imagine telling the development manager that indenting should be done using tabs, not spaces (or vice versa) and definitely not a mixture? Or that you don’t like the way developers name variables?