Do you have examples of how your test automation strategy failed to get approval?

I’d like to learn about why strategies fail to get buy-in and how we can learn from failure to encourage teams to buy into a strategy.

My questions are:

  1. Have you ever failed to get a team to adopt a strategy?
  2. If so, what happened, and why do you think it failed?
  3. What did you learn from the failure to encourage buy-in next time?

There are many reasons why a team might resist adopting a strategy. If you have any thoughts or real stories about how and why it can happen, I’d love to hear them. :robot:

3 Likes

Anytime a team sees extra work without corresponding value, they will resist QA strategies and tactics. It’s a natural bias for Product to see it as a delay to delivering the product and features and to meeting their goals on time. Developers love solving the problems but want to move on once that’s done, so QA work is easily seen as extra “clean up” after the puzzle is finished.

Tactics and strategies that have been rejected, in my experience, are inevitably ones that haven’t been articulated in a way that demonstrates their value to the stakeholders I mentioned, including how each tactic plays a role in the overall strategy.

Now when I promote a tactic, I start with the question “What is the problem this is going to solve?” and then I put myself in each of those other perspectives to review the idea: “how will solving this problem affect me?”

5 Likes

To add to Michael’s comment on articulating value and the pros and cons, you also have to assess how much value, or what trade-offs, the stakeholders are willing to accept for a given proposal. If you can tailor the pitch to their sweet spot you might get easier acceptance, but how do you figure out that sweet spot?

I once pitched a proposal for enhanced automated test coverage that could find issues sooner and take over, ahead of time, more of the manual testing work that needed to be done, albeit with only partial coverage (not 100% from automation, due to tooling constraints). But the time required to implement it with existing personnel was not low-hanging fruit enough for management to permit it. Granted, the system wasn’t buggy enough for management to see much value in the proposal, and the organization was used to lots of manual testing; that was the norm anyway, so why change things up? The stakeholders understood my proposal but felt it wasn’t justified enough to pursue at the time, back when I worked there.

3 Likes

Yesterday I was starting some automation on a new project. Some of the tests which on the surface seemed technically very easy to automate still took a few hours of fiddling to get right (for various reasons). Some of these were things that would literally take, say, one minute to test manually (e.g. very basic things like checking that three bad login attempts lock you out and that the admin can then release the lock). Things like this definitely can influence people’s perception of the value.
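To give a feel for why, here is roughly the shape that “one-minute” lockout check takes once automated (a minimal pytest-style sketch; the URLs, status codes, and fixtures are placeholders, not any real system). The fiddling is rarely the three requests themselves, it’s the test data, the cleanup, and keeping the test independent of everything else:

```python
import requests

BASE_URL = "https://app.example.test"  # placeholder environment URL


def test_three_bad_logins_lock_the_account(admin_session, test_user):
    # Three bad attempts should lock the account.
    for _ in range(3):
        resp = requests.post(f"{BASE_URL}/login",
                             json={"user": test_user, "password": "wrong"})
        assert resp.status_code == 401

    # After locking, even the correct password is rejected.
    resp = requests.post(f"{BASE_URL}/login",
                         json={"user": test_user, "password": "correct-password"})
    assert resp.status_code == 423  # assumed "locked" response code

    # Cleanup: the admin releases the lock so later tests aren't poisoned.
    resp = admin_session.post(f"{BASE_URL}/admin/users/{test_user}/unlock")
    assert resp.status_code == 200
```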

1 Like

A strategy is a decision about how to leverage resources. One example where we leveraged test automation turned out like this:

We were building an API that is an XML broker: stuff in, stuff out between two other systems. I thought it would simply be a matter of automating the acceptance criteria, format validation, and sequencing. We gave the developers all the tools in the book, yet for… cultural?… reasons they were never picked up. Now we have a whole suite of manual test cases in Postman collections that verify response codes and the like. The test automation became too invisible and intangible.
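For illustration, the format-validation piece need not be much more than this kind of check (a sketch only; the endpoint, payload, and element names are made up). It makes the same assertions the Postman collections now make by hand, just runnable on every build:

```python
import xml.etree.ElementTree as ET

import requests

BROKER_URL = "https://broker.example.test/orders"  # placeholder endpoint


def test_broker_acknowledges_a_valid_order():
    # Same check the Postman collections do manually: post a minimal
    # valid payload and verify the response code.
    payload = "<order><id>42</id><amount>10</amount></order>"
    resp = requests.post(BROKER_URL, data=payload,
                         headers={"Content-Type": "application/xml"})
    assert resp.status_code == 200

    # Format validation: the response must parse as XML and contain the
    # elements the downstream system expects (names are placeholders).
    root = ET.fromstring(resp.text)
    assert root.tag == "acknowledgement"
    assert root.find("orderId") is not None
    assert root.find("status") is not None
```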

4 Likes

In my experience… no one cared, lol. Some of the typical responses:

  • “we’re too small, we don’t need it”
  • got an automatic approval because nobody bothered to read it, let alone suggest improvements
  • “we don’t need it cause we’re agile, things can change in an instant. Test Strategy is only useful in a waterfall project”

Here I make no distinction between a (Project) Test Strategy and the narrower Test Automation Strategy the OP is referring to; the fact of the matter is that on small(er) teams and projects a full-blown document for either might indeed be overkill. A well-laid-out Test Plan might be more appropriate, especially if a team has only one QA, which is often the case.

3 Likes

A few examples:

  • Previous failures with automation in testing. If you fail once, it’s fine, try again carefully. But in some places there was failure after failure. At some point the managers will say stop, regardless of how you try to spin it.
  • The long-term cost put in contrast with the cost of failure, and with what the development team could deliver in terms of business revenue given the same automation budget. In one case I estimated the potential failure cost at 30k/year, while the automation budget was 200k/year.
  • Personal preferences from the approver(s). They would want a specific process, a specific layer, a specific tool, coverage, … and then the strategy has to be built around that. They might not be explicit about it from the start, so you have to propose, guess, and pull info out of them until there is some sort of reasonable agreement.
  • Diverting team resources to the initial setup (6-18 months), which means a decrease in overall business value and an increase in costs, plus pressure from stakeholders to fix the process and get things done and into prod ASAP.

3 Likes

Or, in my case, they will just ignore it, learn that red for failed tests doesn’t mean anything, and carry on with the CI/CD release process as usual, as if nothing happened.

I’d wager this is a common theme for the majority of automation efforts: there is a myriad of reasons why a build or test run can fail, for perfectly objective reasons, even though the tests themselves may have executed successfully and passed. And even when they are genuinely failing, sometimes you know why and there is nothing you can do about it, like that-env-var-that-devops-need-to-fix or whatnot. You can either skip them (but risk forgetting to unskip them once the issue is fixed) or keep them failing so you stay aware of it, which, as mentioned above, can trigger the “red is the new green” mindset.
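One middle ground, at least in pytest-style runners, is to mark the known failure as expected instead of skipping it outright, so the framework flags the test as soon as the underlying issue is actually fixed (a sketch with made-up names; the ticket and environment variable are hypothetical):

```python
import os

import pytest


# Option 1: skip it. The test vanishes from the red/green signal, so it's
# easy to forget to unskip it once the env var is finally fixed.
@pytest.mark.skip(reason="OPS-123 (hypothetical): staging env var missing")
def test_export_skipped():
    assert os.environ.get("EXPORT_BUCKET")  # hypothetical variable


# Option 2: expected failure. The test still runs on every build; once the
# variable is set it starts passing, and strict=True turns that unexpected
# pass into a failure, so the stale marker can't quietly linger.
@pytest.mark.xfail(reason="OPS-123 (hypothetical): staging env var missing",
                   strict=True)
def test_export_expected_to_fail():
    assert os.environ.get("EXPORT_BUCKET")  # hypothetical variable
```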

It takes tremendous effort from a lot of people to keep an automation pipeline running smoothly and continuously.

2 Likes

Well, that’s only a small thing, and the least important one for management.
I’d say that if it fails, then it is doing its job quite well, as software is unstable most of the time.
Getting failures is more useful than never failing. When it never fails, you might start wondering whether it is checking the right thing, and whether the resources should be spent where the chances of finding failures are higher.

A talk by Wayne Roseberry on flakiness management at Microsoft:

What the leaders/managers usually want to know and pay for is:

  • Have we caught more important bugs sooner using automation or are we still releasing plenty of issues in prod?
  • Can we release with confidence now, by relying on less testing and more automation?

Do you have C-level management or middle management in mind? Or do you think it makes no difference?

My real question, though, is: how involved is management actually, or how involved should it be, when it comes down to test automation? From my understanding, automation is just one part of a team’s or project’s internal organisation.

Management cares about team performance from a high-level perspective: team and project deliverables, resources, and timelines. Test automation lands on their “plate” only when some budget is affected, and they usually do not meddle in a team’s day-to-day operations or plans.