These regression suites grow because it's very cheap to manufacture and execute an extremely specific check at the expense of long-term costs, and people want to profit from that saving without dealing with the debt. If nobody can figure out what the benefits are then I don't see a problem with binning the whole thing.
These kinds of issues are also symptoms of other sicknesses. How do you know that the tests that are consistently passing are even doing anything? After all, by ignoring failing tests you're essentially using natural selection to end up with only passing tests. There has to be more to testing than superstition, or you might as well not run the suite and say you did.
I'd say this is a context-sensitive decision based on levels of understanding of the software project, the other software project (the automated suite), how the team(s) are organised and how much money you want to hurl at it. There are lots of questions to ask here - which tests are failing, why, and who wrote them. What do we think these tests do - provide coverage, make managers sleep better, appease the customers, follow policy, abide by laws, etc.? How are they serving the testing mission?
Marking as ignore will make everyone do just that - ignore the problem entirely. If those checks were providing less value than the cost of dealing with them, that's a fine idea, but you need to understand their purpose (not just the abstraction loss in their stated purpose). Perhaps even delete them. Nobody's fixing them, nobody's running them, get the hard drive space back. This has an added emotional impact - it's more meaningful to say you're deleting code. Imagine the difference between putting your possessions in a box in storage, even though you know you're never going to even look at them again, and choosing what to put immediately in the bin. Ooh, y'know, having to throw this stuff out, maybe I will take up watercolours again.
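To make the "mark as ignore" point concrete, here's a minimal Python `unittest` sketch (the test names and skip reason are illustrative, not from any real suite). A skipped test neither passes nor fails, so a green run tells you nothing about the problem it was hiding:

```python
import unittest

class InvoiceTests(unittest.TestCase):
    # Marking a troublesome check as skipped is cheap, but it silently
    # removes the check from the feedback loop forever.
    @unittest.skip("Flaky - nobody has investigated")
    def test_rounding(self):
        # This body never runs; whether it would pass is now unknown.
        self.assertEqual(round(10.005, 2), 10.01)

    def test_total(self):
        self.assertEqual(100 + 5, 105)

suite = unittest.TestLoader().loadTestsFromTestCase(InvoiceTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)

# The run is reported as successful even though one check was never executed.
print(outcome.wasSuccessful())   # True
print(len(outcome.skipped))      # 1
```

The skipped test still occupies the suite and the reader's attention without providing any information - which is the argument for deleting it outright instead.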
Leaving everything as is might be a way to provide enough pain to cause people to do something to stop it hurting, provided the pain is sufficient and the right people are feeling it. You could also delete all problematic tests and get those who wrote them to write them again. You could mandate that each team is responsible for its own coverage and breakages, and that they have to investigate any problems each time. The tiger team is a short-term solution, but it shifts responsibility away from the people causing the problem, and you can only clean up after people for so long before they have to learn to do it themselves.
One option might be to shred the whole endeavour. I think that helps to frame the ideas nicely - well, what are we actually destroying? Doesn't it have value over making me feel all comfy inside? What problems are getting into production, out through the users and back to us? What actually is our coverage? Do we even need to run these any more? Who's paid to make this work, and can we ask them why it's not working? What would it cost to fix? What would it cost to replace? Really? Ooh, maybe I will take up watercolours again.
Edit: To be more solution-oriented, I'll say that looking at purpose can be a great way to facilitate change. If it's just there because it's cheap to run, in a "hey, who knows?" kinda way, then any cost is important. If it's there because our testers know what they're doing (including if those testers aren't called testers), then that person holds the purpose. Sometimes tests go in because "eh", sometimes because "ooh, better check", sometimes because "if this fails again we lose our biggest customer", sometimes "if this goes wrong people die". Purpose gives you a sort of nexus to make business decisions about projects like automation suites.