What’s the biggest automation mistake you’ve seen teams repeat again and again?

Automation is supposed to buy us speed and confidence. In reality, many teams end up with unreliable tests, false alarms, and high maintenance costs instead.

Sometimes we automate the wrong steps, or at the wrong layer; worse, sometimes we don’t have a good understanding of the product or the risks involved.

The purpose of this question is not to blame the tools or the people, but to surface the actual lessons learned:

  • What was the problem?
  • What was the reason?
  • What would you change in your approach now?

Eager to hear the honest truths from the battlefield :backhand_index_pointing_down:

I think the biggest problem is building a good, stable, working framework.

Never trust companies who claim they can build it for you; most of the time you need to develop your own libraries (aka keywords) to talk to your components under test, and that requires programming and scripting skills.

I’m not even talking about CI/CD.

It always takes longer than you think.

It cost me 1-2 months to build a very basic Robot Framework setup for system test, but I already had experience from another product where we used it for more than 4 years.

The old framework was Python-based, and I already had experience as a component tester and knowledge of library development.
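To give a rough idea of the shape of such a library, here is a simplified, made-up Python example (not our actual one; the device and its line protocol are invented). Methods on the class become Robot Framework keywords automatically.

```python
# MyDeviceLibrary.py - a minimal custom Robot Framework keyword library.
# Hypothetical example: the "component under test" is assumed to speak a
# simple line-based protocol over TCP. Method names become keywords
# ("Connect To Device", "Send Command", "Response Should Contain").
import socket


class MyDeviceLibrary:
    ROBOT_LIBRARY_SCOPE = "SUITE"   # one instance shared across the suite

    def __init__(self):
        self._sock = None

    def connect_to_device(self, host, port=5000, timeout=5):
        """Open a TCP connection to the component under test."""
        self._sock = socket.create_connection((host, int(port)), timeout=float(timeout))

    def send_command(self, command):
        """Send one command line and return the single-line reply."""
        self._sock.sendall(f"{command}\n".encode())
        return self._sock.recv(4096).decode().strip()

    def response_should_contain(self, response, expected):
        """Fail the keyword (and the test) if the reply lacks the expected text."""
        if expected not in response:
            raise AssertionError(f"Expected '{expected}' in '{response}'")

    def disconnect(self):
        """Close the connection at suite teardown."""
        if self._sock:
            self._sock.close()
            self._sock = None
```

A suite then only needs `Library    MyDeviceLibrary.py` in its settings and can call Connect To Device, Send Command and so on. The bulk of the real effort goes into keywords that genuinely understand your component, not into the framework itself.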

The idea that it in any way replicates or replaces hands-on discovery and investigative, experimental testing.

This often happened because managers and some testers had a very mechanical view of testing: test cases, scripted testing, a focus on known risks. From there it seemed like a straightforward switch to automation. What that view missed was that the tester was doing a whole lot more as they tested.

I recommend taking the time to look at the testing model and work out what sort of testing model suits the work. If an activity favours mechanical strengths, leverage them; if it favours human strengths like discovery, investigation and experimentation, do not try to hand it to a mech-heavy solution.

Build a matrix of human (wet, biological brain) strengths and mechanical (dry brain) strengths, decide what problems you are looking to solve, and find a suitable match.

Sorry, I know this is going to be an unpopular opinion, but the number one thing I have seen and learned over the last few years, across the several companies I’ve worked at, is that there is always a focus on devs being able to write UI automated tests and frameworks, and nowhere near enough focus on making it accessible to QAs.
Unless QAs have prior programming experience, they often find it difficult to learn automated testing. I find a lot of this is down to overly complicated frameworks with difficult languages and too many layers of abstraction that are, in my opinion, simply unnecessary for test automation.

I worked with 3 amazing intern developers once and had the task of on-boarding them into our automation project (temporarily as part of their course). They started out by working on their own project as a group to learn the basics of Selenium. They grasped these concepts very quickly and wrote many solid tests which they were able to build and execute consistently, but as soon as they tried to write tests within our project it all fell apart.
I started making diagrams to help them visualise all the moving parts of the project and that’s when I realised we had something like 5 abstraction layers (we were using C# with Selenium, SpecFlow and the Page Object Model…)
So here we had 3 very talented developers who couldn’t grasp our framework at all. It also took me, a senior automation tester, far too long to fully get to grips with it due to its complexity (I had never worked with SpecFlow before either).

For my next project I focused on training the QAs and making the automated test scripts as close to manual test scripts as possible. I ditched the Page Object Model and organised my tests and “steps” by features and user journeys. I chose a simple but powerful scripting language to achieve beautiful-looking automated tests that read like real test scripts.

I started the QA team with the basics of logging into an application with Selenium, wrapping those statements up into reusable functions stored in the test file, then moving the functions into separate files when other tests needed to use them. Finally, I introduced the concept of classes to complement the scripts.
This simplicity allowed the QAs to focus on automated test best practices rather than difficult programming concepts. They understood how to work with browser behaviour to stop test flakiness. They understood how to build solid and concise element selectors. They understood how to write independent tests with excellent bug-catching capabilities.
What they didn’t need to spend time on was design patterns, abstraction layers, compilation (it’s so nice to just hit Ctrl + S and then run your tests).
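Roughly the shape I mean, sketched here in Python with Selenium and pytest (not the actual language, selectors or app from that project; all names below are invented). The reusable step is just a function sitting next to the tests:

```python
# test_login.py - reusable "steps" live next to the tests until other files need them.
# Sketch only: URL, selectors and credentials are made up for illustration.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = "https://example.test"


@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()


def log_in(driver, username, password):
    """A plain reusable function instead of a page-object layer."""
    driver.get(f"{BASE_URL}/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    # An explicit wait keeps the step stable without hiding it behind abstractions.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "[data-test='dashboard']"))
    )


def test_valid_user_sees_dashboard(driver):
    log_in(driver, "standard_user", "correct-password")
    assert "Dashboard" in driver.title


def test_invalid_password_shows_error(driver):
    driver.get(f"{BASE_URL}/login")
    driver.find_element(By.ID, "username").send_keys("standard_user")
    driver.find_element(By.ID, "password").send_keys("wrong-password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    error = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "[data-test='login-error']"))
    )
    assert "Invalid" in error.text
```

Only once another test file needs `log_in` does it move into a shared helpers module, and only then.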

If all UI automation test projects looked like manual black-box testing, and if all QAs were able to contribute to them, why would you need the developers to help? Automation engineers aren’t expected to assist developers with their work, so I find it odd that the reverse is not true.

I will stress that my opinion applies to UI testing frameworks where the focus is black-box testing, though we had full SQL integration too. Basically, anything that a typical .NET project could do was doable in the above. I think test automation needs to be approached from a test perspective and not a development perspective.

In my observation: designing tests too simplistically, which makes them brittle, and not maximizing code reuse.

Implementation (support) focus was given to just the simple/happy/golden-path cases, and handling other/edge cases was not considered in the automation logic. That focus is fine on the test-case side, but building the automation support on the back end that way made it more work to add the missing support later when it was needed.

The simple-path implementation also missed out on designing in code reuse to minimize boilerplate and copy/pasting across test cases and the test codebase. For more powerful, flexible, reliable test automation (and framework), sometimes you have to think farther ahead than just quickly automating the immediate test cases at hand. The fast-path approach works initially to meet deadlines, but over time it leads to technical debt and makes the framework less flexible: it becomes too restrictive and simplistic in the tests you can do without adding far more code than necessary or refactoring.
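To make the reuse point concrete, here is a rough sketch of the idea (hypothetical API and endpoints, assuming pytest and requests; not taken from any real framework): one parametrised helper covers the happy path and the edge cases, instead of a copy/pasted happy-path block in every test.

```python
# Sketch of designing for reuse up front (hypothetical API, pytest + requests assumed).
import pytest
import requests

BASE_URL = "https://api.example.test"


def create_order(items, expected_status=201):
    """One helper covers the happy path AND the edge cases, instead of a
    copy/pasted happy-path-only block repeated in every test."""
    response = requests.post(f"{BASE_URL}/orders", json={"items": items}, timeout=10)
    assert response.status_code == expected_status, response.text
    return response


@pytest.mark.parametrize(
    "items, expected_status",
    [
        ([{"sku": "ABC", "qty": 1}], 201),    # golden path
        ([], 400),                            # edge: empty order
        ([{"sku": "ABC", "qty": -1}], 400),   # edge: invalid quantity
    ],
)
def test_create_order(items, expected_status):
    create_order(items, expected_status)
```

The helper costs a little more thought up front, but the next edge case is one more parametrize row rather than another pasted block.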

Focusing on E2E tests.

There are some benefits, no doubt, but I’ve rarely seen proper functional defects get caught by them. Integration, component and unit tests are way more powerful, effective and cheaper when it comes to catching errors that properly impact users, and catching them early.

Part of this is biased by the fact that I’ve largely worked with super complicated systems where it was really hard to be deterministic E2E. People would often spend weeks and weeks working on a handful of E2E tests across simulator, client and pipeline changes.

I’ll be cheeky and add another big mistake:

Confusing green pipelines for quality software

A green pipeline means that the things you know to check for are very likely working. It doesn’t tell you whether everything will come tumbling down the second a user does something unexpected. Basically, it’s using automation as your testing strategy rather than as one of its many parts.

@andrewkelly2555 has got the biggest mistake I’ve experienced. Outside QA, the stakeholders pushing for more automation often look at the automated test packs with quantitative eyes, i.e. they’ve got 500 automated tests, so it must be good.

I’ve said before that we had a product that was prone to support tickets on releases, and an old framework of over 150 test cases. I had concerns because it never found anything wrong, but those outside were comforted that we had “a lot of automated tests”. I pressed harder with the engineers and found it was testing the process but not the outcome. So we dumped all the tests and created 10 automated tests in a new framework that focused on the outcome, ensuring the results were in line with production. We found more issues in one run than we had in a year with the old framework.
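To make the process-vs-outcome distinction concrete, here is a toy illustration in Python (nothing to do with the real product; the export job and data are made up):

```python
# Hypothetical illustration of "process vs outcome" checks (invented example).
# Suppose a nightly job exports orders to a reporting table.

def export_orders(source_rows, reporting_table):
    """Toy stand-in for the real export job."""
    for row in source_rows:
        reporting_table.append({"id": row["id"], "total": row["qty"] * row["price"]})
    return True  # "I ran" - this is all a process-style check ever looks at


def test_export_process_only():
    # Old-framework style: asserts the job reported success, not what it produced.
    assert export_orders([{"id": 1, "qty": 2, "price": 5.0}], []) is True


def test_export_outcome():
    # New-framework style: asserts the outcome matches what production needs.
    table = []
    export_orders([{"id": 1, "qty": 2, "price": 5.0}], table)
    assert table == [{"id": 1, "total": 10.0}]
```

The first test can stay green forever while the report is garbage; the second fails the moment the outcome drifts from what production expects.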

So you need to measure the effectiveness of your testing, which is something I find fascinating. These are my measures:

  • How many tests do you execute before a bug is found, per product?
  • What’s your support ticket rate per product?
  • What’s your fix patch rate per product?

When you compare those against each other, you’ll start to see how effective your test approach is.

Aiming to replace every single manual test case with an automated version, or even setting that as an objective!

The biggest problem is starting automation activity without asking the questions:

  • Which problem are we going to solve by adding automated testing code?
  • Why do we need automated tests?
  • Who will be responsible for writing and maintaining tests?
  • How and where will we run the tests?
  • What are the expectations of stakeholders from automated tests?

By rushing into coding/vibecoding, we can easily reach a point where we have generated hundreds of tests, but no one cares that they are red, and no one has time to fix them.

So effort is spent, but there is no value from it.

Ignoring the automation results, letting failures repeat every run without investigation.

None.

At least not “again and again”; that would imply a failure to fail. I kind of agree with @oxygenaddict on the E2E over-reliance, although I see it as a question of “altitude” too. If you automate or even execute tests at the unit, component or layer level, you do get brilliant insight into regressions, early enough to prevent escapes, but not when integrations are your bread and butter. And I mean internal integrations between teams that often silo themselves by being in different geographies or different orgs, from a revenue and management “altitude” perspective. So there is that. I kind of prefer some altitude in my testing at times, though, as a way to spot black swans.

The only thing I do see repeating is test rot.

That New Year’s resolution to write loads of unit tests (which were probably more component tests, really), and then three years later you look and see that no changes have been committed to the tests folder tree, because we changed our CI and, although the tests still all pass, the new CI system no longer surfaces failures. Or worse, well, just use your imagination: the land of excuses. A land that extends as deep as things like “Joe left, and Joe always used to TDD, so he always added the unit tests”.

And probably Quality ownership changes.

Every time there is a drive to make someone different responsible for Quality. @whitenoise I feel your pain. Robust and simple dashboards are where I want to see investment. Maybe someone will invent an AI agent that takes on the job of quality champion: someone who quietly builds the argument for automation without being prescriptive, and champions the work by recognising good quality work when they see it. By seeing the good, and holding it up, we let the bad die of neglect.