I’d say you’re certainly doing the right thing by advocating for testability early in the SDLC.
“push it and forget testing it”
Ouch. Another way to read this is “push it and forget about any risks to customers that we could’ve known about, and possibly done something about, before it was too late”.
Release times are important, but as long as the business understands the risk of pushing without any testing at all (as in, we won’t know anything about the product or what problems may be lurking in it), then you’ve done your job as a tester.
In terms of metrics, speed is the easy one: it’s simply the time taken to release, whether you count it in hours, days or weeks.
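A purely illustrative sketch (the timestamps are made up) of how little there is to this measurement:

```python
from datetime import datetime

# Hypothetical timestamps: when the work started and when the release went out.
work_started = datetime(2024, 5, 1, 9, 0)
released = datetime(2024, 5, 12, 16, 30)

# "Speed" is just the elapsed time between the two.
lead_time = released - work_started
print(f"Lead time: {lead_time.days} days "
      f"({lead_time.total_seconds() / 3600:.1f} hours)")
```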
For the quality of the product, I’d just stick to testing it: uncover risks and threats to value and communicate these to stakeholders as information (not data). Actually say what’s wrong with the product rather than trying to consolidate it into some sort of numbers on a graph.
Same for testability: assess, evaluate and describe how testability is improving over time, how much easier it’s making your testing and how much happier you are with the outcome.
Unfortunately, managers love cheap, easy metrics, but these don’t paint a good picture of quality (or of things like testability). No wonder you’re struggling: I don’t think anyone in the world has come up with a reliable measurement of quality, and I’m not sure anyone ever will. It’s much more effective to supply information in the form of a story - who may come to harm, and at what cost?
Surrogate metrics can be used to support the story of quality in a way that might matter. The classic example is performance, where data on response times and load times can back up an assessment. Things get trickier, however, when trying to identify data to support functional or usability quality. You might need to be more specific about which aspects of quality you want to measure.
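As a sketch of what I mean (the response times below are invented), the numbers support the story rather than replace it:

```python
import statistics

# Hypothetical response times in milliseconds - the kind of surrogate data
# that can back up a performance assessment.
response_times_ms = [120, 135, 128, 410, 122, 119, 980, 131, 127, 125]

median = statistics.median(response_times_ms)
# Rough 95th percentile (inclusive method keeps it within the observed range).
p95 = statistics.quantiles(response_times_ms, n=20, method="inclusive")[-1]
worst = max(response_times_ms)

print(f"median={median}ms, ~p95={p95:.0f}ms, worst={worst}ms")
# These figures don't say whether the product is "good"; they support a
# story: most requests feel instant, but some take the best part of a
# second - who does that hurt, and at what cost?
```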
Testability is a bit more specific, so I can offer a few possible examples, but you still need to pin down exactly what it is you want to measure.
Do you want to measure how much testability is being advocated early in the SDLC? How seriously developers are taking it? How much of it is being implemented? How testable the product ends up being?
You could consider things such as: For every developer meeting to discuss requirements or designs, how many times was testability discussed? How many times wasn’t it discussed? How many times was a testability feature mentioned? How many times was it implemented vs ignored?
For testing in general: How many projects were shipped without any testing at all? What percentage of each project’s time was spent on testing? How many times did development overrun its deadline into the testing window? Etc.
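None of these need anything fancier than a tally you keep yourself. A minimal sketch with entirely invented project names and numbers:

```python
# Simple per-project tallies you could track over time - counts, not quality scores.
projects = [
    {"name": "Project A", "testing_days": 5, "total_days": 30,
     "testability_raised": 4, "meetings": 6},
    {"name": "Project B", "testing_days": 0, "total_days": 20,
     "testability_raised": 0, "meetings": 4},
]

for p in projects:
    testing_pct = 100 * p["testing_days"] / p["total_days"]
    raised_pct = 100 * p["testability_raised"] / p["meetings"]
    shipped_untested = p["testing_days"] == 0
    print(f'{p["name"]}: {testing_pct:.0f}% of project time spent testing, '
          f'testability raised in {raised_pct:.0f}% of meetings, '
          f'shipped without testing: {shipped_untested}')
```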
Again, none of these metrics will give you a measurement of quality, but they may give you some insight into how things are going, which you can monitor over time.
Sorry for the long post, but there’s another (even longer) blog post by James Bach that you may be interested in - it’s well worth the read: Assess Quality, Don't Measure It - Satisfice, Inc.