Can you share examples of good/bad automation goals?

Back again with another question. This time I'm thinking about automation goals and what makes an automation goal good or bad.

So, for example, a good automation goal might be: "Reduce the time to run automated checks by 20% over two months", and a bad goal might be: "Automate all test cases for features".

These are just examples, but I'd like to pose the question to you all:

Can you share an example of a good or bad automation goal?

As always, I look forward to your answers and thank you for your contribution :robot:

4 Likes

When I first joined my company as one of the first Quality Engineers (this was also my first IT job, and I'm only two years in now), this was the goal we were given…

(Bad) 'Automate Everything' - and so we did. We automated the crap out of our UI using Selenium and created a massive library that any team could grab and utilize. Then the cracks began to show. A lot of the automation wasn't giving any real data. The library itself was flaky and unstable. Most teams couldn't run their automation in the CI/CD pipelines. We automated everything, but with no purpose, no direction, and no plan. Tests would fail and there would be no actionable takeaways.

Now that I've learned and read a lot, I helped build out a new framework to make it more accessible and actionable.

(Good) 'Make Automation Actionable' - Now when we run our tests, locally or in the CI/CD pipeline, we get logs, we get scorecards, we get an "x thing broke" message, and devs and QEs can jump in, find the issue, and resolve it quickly. So we build tests and frameworks with "What is the purpose, what's the end goal?" and "How do we make it actionable?" in mind. By following this we've been able to communicate what should be automated and what should not be.
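
To make the "actionable" idea concrete, here is a minimal sketch of the kind of check whose failure tells you what broke and where. I am assuming a pytest-plus-Selenium setup purely for illustration; the URL, selector, and component names are hypothetical, not the poster's actual stack.

```python
# Minimal sketch of an "actionable" UI check (pytest + Selenium assumed).
# The point: when it fails, the message names the page, the element, and the
# likely problem area, instead of a bare NoSuchElementException in a stack trace.
import logging

from selenium import webdriver
from selenium.webdriver.common.by import By

log = logging.getLogger("checkout-smoke")


def test_checkout_summary_shows_order_total():
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/checkout")  # hypothetical URL
        log.info("Loaded checkout page, looking for the order total")

        totals = driver.find_elements(By.CSS_SELECTOR, "[data-test=order-total]")

        assert totals, (
            "Checkout summary broke: no [data-test=order-total] element rendered "
            "on /checkout. Likely a regression in the order-summary component."
        )
    finally:
        driver.quit()
```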

Speaking of what should or shouldn't be automated…

(Bad) 'Automate all the edge cases' - We would also write so many different test cases to flush out wildly obscure, one-off edge case scenarios that would affect 0.01% of clients, and not focus on the core workflow. These took so much time and were so flaky, as we weren't following the workflow our clients did. It wasted a lot of time and added a lot of overhead to our builds.

We learned that most of these edge cases were either covered by unit tests or could be covered by our QA doing exploratory testing. Now, if an edge case is discovered that is disastrous and takes the system down, we implement a fix and write automation around that piece. But we moved away from automating all the edge cases and focused on the core workflow.

4 Likes

A few bad ones that I've come across in my career:

  • Automation driving confidence in delivery, e.g. "our selenium/testcafe/playwright/cypress/nightwatch journey tests need to be in place so we have confidence in our release"
  • 100% coverage as an unrealistic SDLC rule that is never enforced
  • 'This one tool can do everything' approach.
  • Already mentioned in the OP, but applying an automated test to every feature/scenario/AC as a rule

Some good ones

  • Regularly audit/monitor test automation to gauge relevance, e.g. get rid of tests that cover removed functionality (see the sketch after this list)
  • Automation workshops - a way to propagate knowledge of automated tests (usually higher than the service layer) across the team, so writing/maintaining is not siloed with the tester.
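
On the audit/monitor bullet above, here is a rough, hypothetical sketch of what such an audit could look like: flag tests that are permanently skipped or that still reference functionality tracked as removed. The directory layout, the pytest marker, and the removed-features list are assumptions for illustration, not a description of any real setup.

```python
# Hypothetical audit sketch: flag tests that look stale so they can be
# reviewed and deleted. Assumes tests live under tests/ and that removed
# features are tracked in a simple set (both assumptions).
import pathlib
import re

REMOVED_FEATURES = {"legacy_checkout", "old_reporting"}  # hypothetical feature tags


def find_stale_tests(test_dir: str = "tests") -> list[str]:
    stale = []
    for path in pathlib.Path(test_dir).rglob("test_*.py"):
        text = path.read_text(encoding="utf-8")
        # Permanently skipped tests are a smell: either fix or delete them.
        if re.search(r"@pytest\.mark\.skip\b", text):
            stale.append(f"{path}: permanently skipped")
        # Tests that still exercise functionality we have removed.
        for feature in REMOVED_FEATURES:
            if feature in text:
                stale.append(f"{path}: references removed feature '{feature}'")
    return stale


if __name__ == "__main__":
    for finding in find_stale_tests():
        print(finding)
```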

I'm sure there are some more good ones… and way more bad… but the above have occurred many times across my career.

2 Likes

While preparing to write a chapter on the topic of automation goals for my book on AiT (this book is not complete yet), I gave this considerable thought. I remembered a workshop by Dot Graham at EuroSTAR 2018 in which she differentiated between goals that were 1) testing goals rather than automation goals, 2) bad automation goals, and 3) good automation goals. This preparation helped me realise that it is good to distinguish between high-level goals and more detailed goals (or objectives).

I find it important to know the goal of having a goal. And it seems to me that the two main attributes it should have are:

  1. It should make sense to the business, so they will not only support it better but you also know you are doing something that they (will) value.
  2. It should guide (strategic, tactical, and operational) decisions relating to automation. Like when you go on vacation by car (without navigation): you keep your destination in mind, at every intersection you look for the sign that features a city that is (or seems to be) in the right direction, and you take that exit. Picking random exits rarely works out…

So the first thing you want is a business-oriented goal. Just one, but always explicitly combined with sustainability, of course. I only know five of these:

  • Fast feedback: How fast can devs know whether they broke something or can pick up the next story?
  • Efficiency: How fast can a single change (feature or bugfix) get to production?
  • Productivity: How much value can you deliver in a fixed period of time?
  • Quality
  • Cost (which is rather less popular as the main goal than 20 years ago).

To my knowledge, all other (useful) goals mentioned in this thread are subgoals of these:

  • "Reduce the time to run automated checks by 20% over two months" must be meant to support one of the time-related goals. Which one, however, is not so clear, which suggests there is a goal that would be more helpful in making the right choices.
  • "Automate all test cases for features" is quite vague and indeed a bad goal.
  • "Automate everything" does not relate to value at all and is even worse.
  • "Make automation actionable" is a characteristic that all automated checks should have, but it does not tell me how it adds value to the development effort or the business.
  • "Automate all the edge cases" sounds like it aims for high quality, but does not suggest what to do with any of the other ways that automation can contribute to quality…
  • "Confidence in the release"? Why? What gives you that confidence? Lots of checks? High coverage? Lots of bugs found? Fast tests? Not very useful.
  • "100% coverage"? Besides the practical issues with it, again the question is: why?
  • "This one tool can do everything" does not relate to value at all.
  • "Monitor test automation to gauge relevance" is good practice, to be sure. But what is its value?
  • "Automation workshops / propagate knowledge" is a means to an end; like automation itself, it is not a goal in itself.

I am not sure why people so often end up with a goal that does not satisfy one or both of the two characteristics that I look for. Many can be related to a business-oriented goal when you think about it for just a second, and then become far more valuable and practical.

Or am I really as crazy as my mom believes I am? :grin:

3 Likes

We are all crazy, but that's another story.

Have 6-monthly or quarterly goals. Annual goals are anti-agile to the extreme. I have three automation goals for the quarter, and they reflect how having between three and five goals or objectives really boils down to good business.

  1. Ultimate Team goal: Deliver a release cadence of 4 releases per month. Becomes: deliver a release cadence of 2 releases per month in the short term. Requires some streamlining of processes and of CI/CD.
  2. Ultimate goal: Improve the quality perception of the team. Becomes: reduce post-release, customer-discovered bugs in the wild. With an action to grow test coverage intelligently to reduce bugs "that escaped". Sub-actions made for fixing and surfacing various test reports.
  3. Ultimate goal: Address tech debt. This is a general engineering goal. Becomes: upgrading test kit and expanding the amount of kit in ways that support the above goals.

Measurable actions here are all around some metrics we already track: how often we release, and how often we have to abort or hotfix. Unfortunately we have had a few releases we had to pull or patch in a hurry, and the cost of patching is something we want to avoid, not to mention the reputation damage. This is measured month by month, and really shows whether we can reach the first two goals. Objective #3 is a counterbalance goal that prevents us over-rotating and just releasing fast with no long-term sustainability.
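
As a rough illustration of that month-by-month measurement (release cadence plus how often a release had to be aborted or hotfixed), here is a small sketch. The release records, field names, and the idea of driving it from a plain list are my assumptions, not the team's actual tooling.

```python
# Hypothetical sketch of the month-by-month measures described above:
# release cadence, and how often a release had to be aborted or hotfixed.
# The release records and field names are made up for illustration.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date


@dataclass
class Release:
    shipped: date
    aborted: bool = False
    hotfixed: bool = False


def monthly_metrics(releases: list[Release]) -> dict[str, dict[str, float]]:
    by_month: dict[str, list[Release]] = defaultdict(list)
    for r in releases:
        by_month[r.shipped.strftime("%Y-%m")].append(r)

    metrics = {}
    for month, rs in sorted(by_month.items()):
        bad = sum(1 for r in rs if r.aborted or r.hotfixed)
        metrics[month] = {
            "releases": len(rs),                     # cadence goal: 2, later 4, per month
            "failure_rate": round(bad / len(rs), 2)  # rough change-failure rate
        }
    return metrics


print(monthly_metrics([
    Release(date(2023, 5, 3)),
    Release(date(2023, 5, 17), hotfixed=True),
    Release(date(2023, 6, 1)),
]))
```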

Our previous two quarters were actually spent working out how to make the first two goals measurable and agreeing on them; see the recent thread here: DORA any experiences - #13 by conrad.connected.

2 Likes

We develop a B2B product with a big and complex client with many parts to access.
In the past it happened that the customer found some parts to be broken (e.g. showing an exception instead of a rendered table).
Also, no human went through all parts of the client and checked them, at least not before a delivery.

Therefore, last year I set up this goal (and I still maintain it):

  • Check that most parts in our client are accessible at all.

This is NOT about checking any "test cases", functions, business logic, user journeys, etc.
The focus is on the client while it is connected to a server.
The checking of the server-side business logic is done with checks via the API, mostly developed by the developers.

Runtime of client automation: 5 minutes
Runtime of server automation: 2 hours
:slight_smile:

Call it a check of the integration of client and server (some errors were caused by server-side changes).
Call it smoke or sanity checks of the client…

You may have stumbled over "most" and may ask "Why not all?".
Some parts are very hard to reach by automation (e.g. having appropriate data, navigating (to) them). We agreed not to spend the effort on them.
As a rule of thumb, I would say we cover 98% of the views of our client.
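
For readers who want a concrete picture, here is a minimal sketch of this kind of "is each view reachable at all" check. I am assuming Playwright for Python purely for illustration; the base URL, the list of views, and the error marker are hypothetical placeholders, not the poster's actual client.

```python
# Hypothetical sketch of a "most views are accessible at all" smoke check
# (Playwright for Python assumed). The view list, base URL, and the way an
# error surfaces in the client are made-up placeholders.
from playwright.sync_api import sync_playwright

BASE_URL = "https://client.example.com"          # hypothetical
VIEWS = ["/dashboard", "/orders", "/reports"]    # hypothetical subset of views


def check_views_render() -> list[str]:
    broken = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for view in VIEWS:
            page.goto(BASE_URL + view)
            # No business logic, no journeys: only "does it render, or does it
            # show an exception/error instead of the expected content?"
            if page.locator(".error-panel").count() > 0:   # hypothetical marker
                broken.append(f"{view}: error panel shown instead of content")
        browser.close()
    return broken


if __name__ == "__main__":
    for problem in check_views_render():
        print(problem)
```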

2 Likes

My golden rule: there is no such thing as a flaky test, only flaky people who implement bad tests.
If I see one of these tests in the pipeline, I either refactor it or remove it and test that area manually. I do not wish to spend time looking at false positives.

2 Likes

QA will automate everything through the UI, no need for API or unit testing :stuck_out_tongue_winking_eye:

Good:

  • Automate 90% of all new features and bugfixes.
  • Flakiness of 0% (where, say, 0% = run the entire suite 5x and everything passes; see the sketch at the end of this post).
  • Reduce CI time by 20%.
  • Improve test hermeticism by mocking out external services with local ones (e.g. PayPal sandbox, Mailinator).
  • Build debugging tools X, Y, Z into testing framework.
  • Make it possible to test new types of currently untestable scenarios (emails, sending SMSes, calling REST APIs).

Bad:

  • Increase/maintain test coverage at x.
  • Automate all test cases for features.
  • Write N unit/integration/end to end tests.
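
To make the 0%-flakiness goal above measurable in practice, here is one possible sketch of the "run the entire suite 5x" check. It assumes the suite runs under pytest from a tests/ directory, which is an assumption for illustration rather than anything stated in the thread.

```python
# Hypothetical sketch of the "0% flakiness" check above: run the whole suite
# five times; any failing run means the goal is not met. Assumes the suite
# runs under pytest from tests/ (an assumption, not a prescribed setup).
import subprocess
import sys

RUNS = 5


def suite_is_stable() -> bool:
    for attempt in range(1, RUNS + 1):
        result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
        if result.returncode != 0:
            print(f"Run {attempt}/{RUNS} failed: suite is flaky (or genuinely broken).")
            return False
        print(f"Run {attempt}/{RUNS} passed.")
    return True


if __name__ == "__main__":
    sys.exit(0 if suite_is_stable() else 1)
```
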
1 Like

How deep should the 90% be automated? How many variants of the happy paths? Also error handling? Running all cases always for all different OSes/browsers/devices?

Why 90%? Why not 85% or 95%?
What happens when you achieve less than that?

To me this is always a case-by-case decision about effort and risk.

What is the difference to the 90%? I do not see a big one.

I like that one!

Hey

I might be a little late to this discussion, but here are some bad and good automation goals that I want to share:

Good Automation Goals

  • Increase the coverage of automated tests. This is a good goal because it will help to ensure that more of the software is tested automatically. This can lead to a reduction in the number of bugs that are found in the software after it is released.
  • Reduce the time it takes to run automated tests. This is a good goal because it will free up developersā€™ time so that they can focus on other tasks. It will also make it easier to get feedback on the software, as automated tests can be run more frequently.
  • Improve the quality of automated tests with testing tools. This is a good goal because it will help to ensure that the automated tests are accurate and reliable.

Bad Automation Goals

  • Automate all tests. This is a bad goal because it is not always possible or practical to automate all tests. Some tests may be too complex or time-consuming to automate, and some tests may simply not need to be automated.
  • Automate tests as quickly as possible. This is a bad goal because it can lead to the creation of low-quality automated tests. Automated tests should be created carefully and thoughtfully, with an eye to quality.
  • Automate tests without considering the business needs. This is a bad goal because it can lead to the creation of automated tests that are not aligned with the business needs. Automated tests should be created with the specific needs of the business in mind.

Overall, good automation goals are specific, measurable, achievable, relevant, and time-bound. They should also be aligned with the business needs. Bad automation goals are vague, unrealistic, or not aligned with the business needs.

Moderatorā€™s EDIT: Have removed promotional material of a testing service provider.

1 Like

How deep should the 90% be automated? How many variants of the happy paths? Also error handling? Running all cases always for all different OSes/browsers/devices?

90% of all code paths - happy and unhappy - using the default profiles.

Why 90%? Why not 85% or 95%?

90% was a number I plucked out of the air. I usually start out with 10% (i.e. mostly happy paths, not all tickets) and move up. Frequently I hit 100% of all new user stories. The point wasn't that you should aim for 90% in particular, but that aiming to automate a particular % of all new scenarios is a good goal.

It depends on the project and the skill of the person writing the automation test, but some edge cases are hard to replicate and not necessarily worth it. So, 100% may be a bad goal.

What is the difference to the 90%? I do not see a big one.

The difference is that using test coverage to make any kind of decision is always a bad idea, no matter what the number.

1 Like

Thank you for sharing your goal examples. We'll be selecting some of what has been shared and adding them as stories from the testing community into our learning journey to help underline points we're making within the lesson.

If for any reason you don't feel comfortable adding your story, or have any questions, don't hesitate to message me or @friendlytester.

2 Likes