Planning successful test automation seems complex. How do you manage it?

I’ve been through several automation projects, and have worked in teams/companies where automation was built.
The coding part seems to be the easiest, and people jump right into it; even if it’s slower at the start, coding can be learned.
One usually gets requests like these:

  • I want us to have full coverage with automation that tests the new service APIs - once the API implementation is done, the automation should also be almost done;
  • The product has constant bugs in production. We need UI automation to check our product so we catch those before we release. 1 automation engineer and/or a tester should suffice to do it in 6-12 months.
  • We need UI automation to monitor the health of our product, detect issues before our clients do.
  • We need to reduce the cost of testing, so we’ll have to implement UI automation for most of the test scripts.

But planning seems awful. Here’s why I think that:

  • you have a deadline
  • you have fixed resources, or resources that even shrink over time as the automation product grows in size;
  • you have an unstable product to check against;
  • you have no infrastructure;
  • you have constraints regarding tooling;
  • you have no stable data, or even no data;
  • you have to understand the business, the use cases, the potential failures/risks to add the check in the appropriate place;
  • you have to scale for long-term maintenance and reduced failure;
  • you can’t get extra help or justify extra resources, because the automation doesn’t directly bring any business value or revenue;
  • you get very little or no help from development or business;
  • you get a promise from management and the dev team that everyone will start to do it (add checks for new code, maintain existing ones) once most of the code is implemented; I haven’t seen this happen yet.

It seems like you’d have to develop a new product on top of the live product, without many of the benefits that come with one.
What’s worse is that every few years a major product change happens, so most of the automation gets trashed.

I wonder:

  • Does anyone have regular meetings and conversations with the team and management on this topic?
  • How do you balance all the tasks and roles in this new product - to be successful in the long term?
  • Do you set a strict concrete plan from the start or wing it, agree on something generic, and do whatever the manager asks?

Neither of them. At my current company we have the freedom to suggest and do automation as we, developers and testers, see necessary and reasonable. Bottom-up.
I have developed multiple tools for testing without asking for permission, as I saw they saved time compared to doing everything manually.

Managers setting specific automation targets is, to me, a dysfunction.
It’s micro-management and a sign of distrust.


It’s pointless to begin automation with an unstable app.
The popular goal for automation is to save time and effort in regression. I’ve never understood the race against time when it comes to automation, in phrases like “we should write tests for XYZ before the development work is done”.
That’s just a mantra from the books, not for the real world. In the real world it’s always better to make sure the damned thing works before even thinking about writing tests for it.


I somewhat agree, and I try to push for this up to the point where I’m labeled useless for not listening to management and almost get fired.

Here are some cases where it isn’t that easy to say ‘the app is unstable’:

  • What if you’re new somewhere and you’re given this task?
  • What if the belief is that it’s stable?
  • What if others don’t care and argue that you should be doing your best, because that’s why they hired you?
  • And what if it’s somewhat unstable - would you never add automation to it?
  • What if the instability comes from hidden things no one is aware of, which you detect while automating?
  • What if the instability is in a certain layer only, and hasn’t been noticed?
  • What if instability means that the automated script fails once in a while, and you only see it later?
  • What if the instability comes from others manipulating the environment you try to automate in, because it’s shared (they deploy, refresh data, add/remove data, configure the backend, etc.)?
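The “fails once in a while, and you only see it later” case can at least be made visible. A minimal sketch (not from this thread; the check names and thresholds are invented) is to record pass/fail history per check and flag anything whose pass rate sits between “clearly broken” and “reliably green”:

```python
from collections import defaultdict

# Hypothetical flakiness tracker: record pass/fail outcomes per check
# and flag checks that fail intermittently rather than consistently.
class FlakinessTracker:
    def __init__(self, flaky_low=0.05, flaky_high=0.95):
        self.results = defaultdict(list)  # check name -> list of bools
        self.flaky_low = flaky_low        # pass rate below this: "broken"
        self.flaky_high = flaky_high      # pass rate above this: "stable"

    def record(self, check, passed):
        self.results[check].append(passed)

    def flaky_checks(self):
        """Return (check, pass_rate) for checks in the intermittent band."""
        flaky = []
        for check, outcomes in self.results.items():
            rate = sum(outcomes) / len(outcomes)
            if self.flaky_low < rate < self.flaky_high:
                flaky.append((check, rate))
        return flaky

tracker = FlakinessTracker()
for passed in [True, True, False, True, True, False, True, True, True, True]:
    tracker.record("login_smoke", passed)   # fails 2 times in 10
for passed in [True] * 10:
    tracker.record("search_smoke", passed)  # always passes

print(tracker.flaky_checks())  # only login_smoke (0.8 pass rate) is flagged
```

Feeding this from your CI results separates “the app is unstable” from “this one check is flaky”, which is exactly the distinction the questions above are probing.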

Here are a few cases where a manager might want this:

  • there are external services to be integrated that are unstable and change unexpectedly; the product using them would maybe want to be informed about such changes, beyond direct crashes in production;
  • several teams are working in parallel; each product/service/app/component/sub-system needs to be tested up until the point of merge; by adding automation, which should be ready by the time development of each component is done, some believe an unannounced change will be caught and the other teams notified;
  • a product is going in maintenance mode after launch; the team wants some monitoring/automated checking of the product for when it’s live;
  • a development team is consistently making hidden technical changes: fixes, refactoring, library updates; in theory the changes don’t touch any actual functionality of the product, but the team would feel confident in their changes if several layers of automation were available;
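For the maintenance-mode/monitoring case, a bare-bones sketch might look like the following (the endpoints and latency budget are made up; substitute your product’s own). The classification logic is kept separate from the network call so it can be tested without hitting anything:

```python
import time
import urllib.request

# Hypothetical endpoints to watch and an assumed latency budget.
ENDPOINTS = [
    "https://example.com/api/health",
    "https://example.com/login",
]
MAX_SECONDS = 2.0

def evaluate(url, status, elapsed, max_seconds=MAX_SECONDS):
    """Classify one probe result; pure logic, so it is easy to unit test."""
    if status != 200:
        return (url, "DOWN", f"HTTP {status}")
    if elapsed > max_seconds:
        return (url, "SLOW", f"{elapsed:.1f}s")
    return (url, "OK", f"{elapsed:.1f}s")

def probe(url):
    """Fetch one endpoint, time it, and classify it (needs network access)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return evaluate(url, resp.status, time.monotonic() - start)
    except Exception as exc:
        return (url, "DOWN", str(exc))
```

Run `probe` over the endpoint list from cron or a CI schedule and alert on anything that isn’t OK; that covers the “detect issues before our clients do” request without a full UI suite.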

Amazing - a lucky workplace. I’ve yet to see a company where management doesn’t ask for automation for the purpose of cutting the cost of regression checking and ‘reducing’ the bugs released to production.

I’m more keen to do automation and build tools and scripts when the automation is not demanded; instead I do it when and where I see fit.


It needs some honesty from technical resources - no matter the other claimed benefits, at least one stakeholder will want to know that automation is saving money.
If the maintenance costs are unexpected, that’s going to upset people - just like cars need services and MOTs, so do automated tests.
If you can’t afford the maintenance, it’s debatable whether you should even start scripting.
Maybe the scripts are quick and dirty and pay for themselves even with two executions, but are not for long-term usage? Make that clear to the team and agree objectives and assumptions - mainly to cover your backside, I’m afraid!
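The “pays for itself within two executions” point is just break-even arithmetic, and it can be worth writing down explicitly when agreeing those objectives. A toy model (all figures invented) looks like this:

```python
import math

# Toy break-even model for an automated script (all figures are assumptions).
def breakeven_runs(build_hours, maintenance_hours_per_run, manual_hours_per_run):
    """Number of runs before automation beats manual execution.

    Each run saves (manual - maintenance) hours; the build cost must be
    recovered before the script pays for itself. Returns None if the
    maintenance per run eats the whole saving.
    """
    saving_per_run = manual_hours_per_run - maintenance_hours_per_run
    if saving_per_run <= 0:
        return None  # never pays off
    return max(1, math.ceil(build_hours / saving_per_run))

# A quick-and-dirty script: 4h to build, 0.5h upkeep per run, saves 3h manual.
print(breakeven_runs(4, 0.5, 3))    # pays off after 2 runs
# A heavyweight suite: 200h to build, 2h upkeep per run, saves 3h manual.
print(breakeven_runs(200, 2, 3))    # 200 runs before it breaks even
```

Even this crude model makes the stakeholder conversation concrete: the quick-and-dirty script genuinely pays off in two executions, while the big suite only wins if it runs for a long time with its maintenance kept under control.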

Assuming the scenarios around the requirements are unchanged, one would expect the automation to reflect the same.

One of the only environments I’ve seen where the automation worked somewhat successfully involved:

  • ~12,000 large automated scenarios at the UI level, developed by about 6 people over 10-15 years;
  • a team of 3 automation engineers maintaining the framework, the environment, and data refreshes, and adding features;
  • a team of 4-5 testers, each owning a subsystem and each checking the results of 1-3k scenarios, fixing some of them, daily for 2-7 hours;
  • the automation at the UI level was fully on the testers/automation engineers - no involvement from development;
  • failure of checks was known and acceptable (~1-3%);
  • the testers would then add automation in the rest of the available time;
  • sometimes they even had time to test.
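Those figures imply a substantial daily triage load, which is worth making explicit. A back-of-the-envelope calculation using only the numbers from this post:

```python
# Daily triage load implied by the figures in the post above.
scenarios = 12_000
fail_low, fail_high = 0.01, 0.03   # the "known and acceptable" failure band

failures_low = round(scenarios * fail_low)
failures_high = round(scenarios * fail_high)
print(f"{failures_low}-{failures_high} failing checks to triage per day")
# At even a few minutes per failure that is many person-hours of work,
# consistent with 4-5 testers spending 2-7 hours daily on results.
```

In other words, a 1-3% failure rate at that scale is itself a full-time job, which is why the setup needed dedicated people rather than a side task.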

Do you have a concrete example from a major project spanning multiple years where things worked out, one that backs up your suggestions?


Anything that is planned to take more than 6 months is a huge risk, especially if features are shipping sooner than every 6 months. It’s just daft to start work on a thing when 2 new things will be added before you finish thing #1. Any big framework you build needs to deliver value much sooner - or else don’t even start the rewrite, because it will just get canned, or end up in the endless limbo of never being “complete” and paralysed.

I am myself staring at 2 big pieces of work right now that need starting, plus one in progress. I’m chipping away at the in-progress one, and the next 2 are performance testing tooling and Chromebook support. Both are in the paper-only phase, probably 2 months minimum, and I aim to get management to prioritise them before I write any prototype code at all.