Thank you, Ali. This is a question I realise I hadn't pondered enough.
Test Automation can take many different forms. One form is preparing test data; another is putting the system into a certain state; yet another is having your automation scan for broken links. These can be immensely useful, and their worth is rather easy to measure.
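As a small illustration of that last form, here is a minimal broken-link scanner sketch in Python. The target URL is a placeholder, and the choice of the requests and beautifulsoup4 libraries is my own assumption, not something tied to a specific project:

# Minimal broken-link scanner sketch (hypothetical example).
# Assumes the requests and beautifulsoup4 packages are installed.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def find_broken_links(page_url):
    """Return the links on page_url that respond with an error (or not at all)."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        target = urljoin(page_url, anchor["href"])  # resolve relative links
        try:
            response = requests.head(target, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                broken.append(target)
        except requests.RequestException:
            broken.append(target)  # timeouts and connection errors count too
    return broken

if __name__ == "__main__":
    # "https://example.com" is a placeholder; point this at your own site.
    for url in find_broken_links("https://example.com"):
        print("Broken:", url)

Its worth is easy to state: it either finds broken links faster and more reliably than a human would, or it doesn't.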
An automated regression set (on any level) is something different though.
Deciding on a strategy, implementing it and maintaining it usually take a huge amount of time, so the cost-versus-benefit exercise is quite important to repeat multiple times during its lifecycle, readjusting where necessary.
At first glance, measuring success seems as binary as the automation's results.
Red/fail: a problem is found. The automation successfully prevented an issue from moving on to the next step.
–> Measure: Number of bugs found.
Green/pass: The application passed with no problems found. You now have some added confidence in the stability of the product.
–> Measure: Number of checks passed.
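Naively, you could tally both straight off a run's results. A toy sketch in Python, with made-up data:

# Tallying the two naive measures from one run's results.
# The results list is made-up illustration data.
results = ["pass", "fail", "pass", "pass", "fail"]

checks_passed = results.count("pass")   # the "green" measure
bugs_found = results.count("fail")      # the "red" measure; naively assumes
                                        # every red result is a real bug

print(f"{checks_passed} checks passed, {bugs_found} potential bugs found")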
However, this completely ignores the hazard of False Positives (the checks pass even though there is a problem) and of False Alarms (the checks fail, but the reported problem turns out not to be a real one).
When we think further about both measures, though, they actually give us more of an indication of how stable our development process/environment/… is, NOT how good our automation is. The goal of the automation is not to find problems or to give confidence. It is to be faster and more reliable at menial tasks. Bugs found and confidence gained are by-products and should not be the main focus; treating them as such may be counterproductive at the least.
To know how successful the automation is, one should consider coverage, reliability and mitigated risk.
Add to that the time invested in creating, maintaining and running the scripts (and in chasing false alarms and false positives) and you'll have a rather useful 'measure' of the success of your regression suite.
None of these parameters can be expressed in meaningful numbers though, except for time.
(And this completely ignores the psychological effects that 'having a regression set' has on a team & project.)
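Since time is the one parameter you can put meaningful numbers on, a back-of-the-envelope comparison is about as far as quantification goes. A sketch, with every figure a hypothetical placeholder:

# Back-of-the-envelope cost/benefit sketch for a regression suite.
# Every number here is a made-up placeholder; plug in your own.

hours_invested = {
    "initial_creation": 300,
    "maintenance_per_month": 20,
    "chasing_false_alarms_per_month": 8,
}

manual_regression_hours_per_run = 16   # what one manual pass would cost
runs_per_month = 10                    # builds/releases the suite covers
months = 12                            # horizon for the comparison

cost = (hours_invested["initial_creation"]
        + months * (hours_invested["maintenance_per_month"]
                    + hours_invested["chasing_false_alarms_per_month"]))
saved = months * runs_per_month * manual_regression_hours_per_run

print(f"Invested: {cost} h, saved: {saved} h, net: {saved - cost} h")

A positive net number only tells you the suite isn't a pure time sink; it says nothing about coverage, reliability or mitigated risk.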
Therefore, the recurring question of "should we invest/keep investing in a rigorous automated regression set?" is often very hard to answer. It should go hand in hand with many other questions, such as "How often do we build or release to production?", "What is the root cause of most of our issues?" and "What kind of risk should we tackle?".
I'm often very grateful for having a good automated regression set and love discussing the strategy it should be part of.
However, I'd be very wary of any single measure of success, even if it looks incredibly crafty and sensible.
Hope that helps give you some insight.
At least, writing it out helped me get a clearer idea.