I'd say you're certainly doing the right thing by advocating for testability early in the SDLC.
"push it and forget testing it"
Ouch. Another way to read this is "push it and forget any risks to customers that we could've known about, and possibly done something about, before it was too late".
Release times are important, but so long as the business knows the risk of pushing without any testing at all (as in, we won't know anything about the product or what problems may be lurking), then you've done your job as a tester.
In terms of metrics, speed is easy: measure the time taken to release, in hours, days or weeks.
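To make that concrete, here's a tiny sketch of treating speed as nothing more than the gap between two dates you already track; the dates and variable names are made up for illustration:

```python
# Minimal sketch: "speed" as the elapsed time between two dates you already
# record, e.g. when work started vs. when it was released. Dates are made up.
from datetime import datetime

work_started = datetime(2024, 3, 1)
released = datetime(2024, 3, 15)

lead_time = released - work_started
print(f"time to release: {lead_time.days} days ({lead_time.days * 24} hours)")
```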
For quality of product, I'd just stick to testing it; uncover risks and threats to value and communicate these to stakeholders as information (not data). Actually say what's wrong with the product rather than trying to consolidate this into some sort of numbers on a graph.
Same for testability too: assess, evaluate and describe how testability is improving over time, how much easier it's making your testing and how much happier you are with the outcome.
Unfortunately, managers love cheap, easy metrics, but they don't paint a good picture of quality, including things like testability. No wonder you're struggling, as I don't think anyone in the world has come up with a reliable measurement for quality, and I'm not sure anyone ever will. It's much more effective to supply information in the form of a story - who may come to harm, and at what cost?
Surrogate metrics can be used to support the story of quality in a way that might matter. The classic example is performance, where data on response times and load times can be used to support an assessment. Things get trickier when trying to identify data to support functional or usability quality, however. You might need to be more specific about which aspects of quality you want to measure.
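Purely as an illustration of the performance case, here's a minimal sketch of collecting a handful of response-time samples; the URL and sample count are placeholders of mine, not anything from your situation:

```python
# Minimal sketch: response-time samples as a surrogate metric to *support*
# a performance assessment, not replace it. URL and sample count are placeholders.
import statistics
import time
import urllib.request

URL = "https://example.com/"  # hypothetical endpoint
SAMPLES = 20

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    timings.append(time.perf_counter() - start)

print(f"median response time: {statistics.median(timings):.3f}s")
print(f"95th percentile:      {statistics.quantiles(timings, n=20)[18]:.3f}s")
```

The numbers on their own say nothing about quality; they just give the story something to lean on.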
Testability is a bit more specific, so I can come up with a few possible examples, but you still need to identify exactly what you want to measure.
Do you want to measure how much testability is being advocated early in the SDLC? How much it's being taken seriously by developers? How much it's being implemented? How testable the product ends up being?
You could consider things such as: For every developer meeting to discuss requirements or designs, how many times was testability discussed? How many times wasn't it discussed? How many times was a testability feature mentioned? How many times was it implemented vs ignored?
For testing in general: How many projects were shipped without any testing at all? How much of the overall project time was spent testing, as a percentage? How many times did development overrun their deadline into the testing window? Etc.
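If you did want to track a few of those counts over time, here's a rough sketch of what that could look like; the record fields and sample numbers are assumptions of mine, not an established format:

```python
# Rough sketch: tally some of the counts suggested above, per meeting.
# Field names and sample data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MeetingRecord:
    testability_discussed: bool
    features_mentioned: int       # testability features brought up
    features_implemented: int     # of those, how many were actually built

def summarise(records: list[MeetingRecord]) -> dict:
    return {
        "meetings": len(records),
        "meetings_discussing_testability": sum(r.testability_discussed for r in records),
        "features_mentioned": sum(r.features_mentioned for r in records),
        "features_implemented": sum(r.features_implemented for r in records),
    }

print(summarise([
    MeetingRecord(True, 3, 1),
    MeetingRecord(False, 0, 0),
    MeetingRecord(True, 2, 2),
]))
```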
Again though, none of these metrics will give you a measurement of quality, but they may give you some insight into how things are going that you can monitor over time.
Sorry for the long post, but there's another (even longer) blog post by James Bach that you may be interested in, and it's well worth the read: Assess Quality, Don't Measure It - Satisfice, Inc.