How has context contributed to successes and failures in delivering valuable automation?

@friendlytester and I need your help. We’re building a new Automation course (based on the Automation Curriculum) and we want to add ideas, tips and stories from the community to help teach topics. We’ll be curating this thread and adding parts into the course. So what we’d like to know from community members is:

How has your context contributed to successes and failures in delivering valuable automation?

For example, one experience I had involved our vice-president telling us to use a Cucumber-JS framework that was broken and only ran five automated checks. It made running our checks a complete nightmare, and we spent days trying to fix the issue.


I blogged about a near failure here:


I have a success story:

Some years ago we overhauled our product, which made all existing UI automation useless.
While the old approach was to check the whole system (data processing, business logic, etc.) through the client, we now made a distinction:

  1. The system/server/business check automation is done via interfaces.
  2. The client check automation is “only” about the UI elements. It focuses on whether the different parts can be accessed at all and are displayed (and only superficially on the integration with the server).

There is a dependency: to access most parts of our client, processed data is needed.
Therefore the UI automation always runs after the interface/system automation, against the same server and database.
That way, the interface automation also serves as data preparation for the UI automation.
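The ordering described above can be sketched roughly like this (a minimal, self-contained illustration with hypothetical names; the real suites would use an HTTP client for the interface layer and a UI driver for the client layer):

```python
# Sketch of the two-layer approach: the interface automation runs first,
# exercising business logic via the API, and the data it creates is then
# reused by the UI automation running against the same server/database.

class FakeServer:
    """Stands in for the shared server and database both suites run against."""
    def __init__(self):
        self.records = {}

    def create_record(self, key, value):
        # Exercised by the interface automation.
        self.records[key] = value
        return key

    def get_record(self, key):
        # Read back by both layers.
        return self.records.get(key)


def run_interface_checks(server):
    """Layer 1: business-logic checks via the interfaces; doubles as data prep."""
    key = server.create_record("order-1", {"status": "open"})
    assert server.get_record(key)["status"] == "open"
    return [key]  # hand the prepared data over to layer 2


def run_ui_checks(server, prepared_keys):
    """Layer 2: "only" checks that the UI parts can be reached and displayed."""
    for key in prepared_keys:
        record = server.get_record(key)
        assert record is not None  # the screen would render this record


server = FakeServer()
prepared = run_interface_checks(server)  # always runs first...
run_ui_checks(server, prepared)          # ...so the UI layer finds its data
print("both layers passed")
```

The design choice this illustrates is that the UI layer never creates its own test data; it depends entirely on what the interface layer left behind, which is why the run order matters.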

How do we test that the UI functions which are not covered by checks are not broken?
By using the product, for different purposes.
This is also part of a multi-level test process in which different groups test our product.

The context was that I could convince management to go with this second approach instead of trying the old one again or dropping client checks completely.


Here is something I noticed on a project I was on a few years ago. They had a big product with a lot of legacy code and different enterprise systems talking to each other. The only automation, apart from the unit tests (there were no integration tests), was a Selenium UI framework.

The people who built it were really smart test architects, and the mid-level automation engineers who maintained the framework were all excellent programmers. Around this time I discovered the AB Testing podcast, and after hearing some of the UI automation horror stories, I began noticing that similar scary stories were happening on the project I was on! :ghost:

It often got me wondering why these smart people were spending more than 80% of the sprint debugging and fixing failed Selenium tests. A lot of the UI tests went deep into checking business logic, while at the same time a new, well-architected REST API had been released as the new backend. The number of UI tests could have been reduced drastically by simply testing most of that stuff at the API level. That would probably have saved the company a lot of money in the long run, and those automation engineers would have had fewer grey hairs! :sweat_smile:

Eventually, most of the automated checking was moved to a new API automation team. As a result, test flakiness was reduced drastically, fewer bugs and outages were detected too late, and the automation team was cut to half its size. Those people were finally able to move on to other projects; at the time, that project was a top priority, so all the best and brightest had been transferred to it for a while.

So in this context, different types of automation delivered completely different results.

Sorry for the long tirade.


I wish I could do something other than UI automation with the software I test. Not only does it lack an API, it's also going to be sunset in the next two years or so.

What I’ve built is bare-bones UI automation that covers essential functionality so I know that hasn’t been broken by any changes. It’s flaky, it’s nasty, it’s overly complex… but it does the job.

When you have to keep a creaky but effective classic ASP site running, you don’t have many choices. VBScript is not exactly amenable to unit testing, and having a REST API - or any API for that matter - as a backend is a wistful dream.

Such is life.


These examples are fantastic. I wonder if anyone can share examples that are less about how the product was built and more about other factors of context. I'm thinking of team changes, project deadlines, etc. that might have impacted automation efforts.


Worked somewhere where the test automation was in Java and Selenium, but the product code was in PHP and JavaScript. So the Test Manager wanted the test code to be in JavaScript, to align with the frontend code.

However, none of the QA Engineers had experience programming in JavaScript, which meant the migration of the tests took a long time. The code delivered was also flaky and far from optimal.

Just wanted to say thank you for sharing these stories. We’ll be adding them as stories from the testing community into our learning journey to help underline points we’re making within the lesson.

If for any reason you don't feel comfortable adding your story, or if you have any questions, don't hesitate to message me or @friendlytester.