What are your test strategy horror stories?

Hi all,

I need your help in sharing community stories on Automation.

We’re currently exploring why strategies fail and how we can learn from failure to encourage teams to buy into a strategy. So our question is this:

Have you ever failed to get a team to adopt a strategy? If so, what happened, and why do you think it failed?

There are many reasons why a team might resist adopting a strategy, but we’d like to hear real stories as to how and why it happened to you.

We look forward to hearing your horror stories and thank you for your contribution :robot:

4 Likes

Where I have probably failed the most has been being too far ahead of a team stuck in old ways of working. Why Do We Fall, Master Bruce? | Complexity is a Matter of Perspective

We get so caught up in terminology discussions, the application of standards and obligations, and who gets to do the work that we forget to align with the business side of things. And thus the beatings continue until morale improves. Align your Test Strategy to your Business Strategy | Complexity is a Matter of Perspective

A last reason :blush: is that strategies fail because they don’t elaborate on the reasons behind our choices. The questions we keep coming back to with strategies are the inherent ones around problem-solving: What’s the problem? Who has the problem? Who should solve the problem? This is both a people problem and a generic problem for all kinds of strategies. Implementing Change – First Steps | Complexity is a Matter of Perspective

4 Likes

Jesper has covered most of my not-so-great experiences, but I’ll add one from personal history: any gap in knowledge/understanding with the business stakeholders can wreak havoc with a test strategy, especially when it comes to reporting.
I led a project with a very strong mix of ‘engineers’ and ‘subject matter experts who really knew the business.’ It shouldn’t be a surprise that we followed a sound strategy, which I led the development of, and that it led to the discovery of hundreds of issues/defects.
But…when the issue/defect trend neither abated nor improved, the business went after us and our lack of completed test cases. We explained that our test cases were blocked by the defect density and even produced the data to show the rates at which issues/defects were being submitted. The initial gap in understanding only grew (not helped by some sketchy development leads reporting ‘progress’ under some very loose definitions of the term).
I departed the position a year or so later, as part of a rather large executive turnover. One of the changes was to bring in a delivery executive who put every developer and tester in a large room for eight hours a day. There was nowhere to hide; your build either improved the overall program, or everyone knew it was a step back. Magically…the entire project regained momentum, and while the go-live was still years away, the situation improved.
Putting the TL;DR at the end: align with stakeholders and set expectations on defect discovery and aging metrics. I hope that in the late stages of my career there can be a continued movement away from blaming testers for larger issues in a program.
Epilogue: After 6+ years, the system went live…the company was bought two years later, and as part of the acquisition the technology was retired and transitioned to the acquirer’s systems…

1 Like

Thanks for these stories and links @jesper and @davadora. We’re going to add these to our stories in our learning journey lesson about why strategies fail and why it’s important to talk to stakeholders about your strategy.

1 Like