Automation - What are the factors that affect your test automation strategy?

I wrote a short blog post yesterday about how your automation strategy needs to change with your product lifecycle, e.g. whether you’re building, maintaining, or deprecating. Maybe it’s obvious; however, in my experience I’ve seen a lot of automation that is not fit for purpose.

FYI here if you want more of a read… Your Automation Strategy needs to change alongside the product lifecycle | by Melissa Fisher | May, 2023 | Medium

It feels this may be a good topic of conversation…

  • Does your team currently change your automation strategy based on the life cycle of the product?
  • Are there other factors that affect your automation strategy?
  • How often do you review the automation you have in place?
  • What would trigger a review?
  • Add any other questions we might like to think about…
5 Likes

Interested to see what kind of responses you get Melissa.

2 Likes

@melissafisher Perhaps it would be good to clarify what you mean by automation strategy.

It can be (at least):

  • The approach to automation for a single team
  • The approach to automation for a project, also including things that are common between the teams
  • The enterprise approach to automation, also including things like hiring policy, outsourcing, and contract management

What do you have in mind?

1 Like

Yes for me here. I can think of 3 examples off the top of my head.

  1. When working a few years back on a project where we were building out our automation framework, there were certain areas of the system that the team was not planning to ever change. The product team had decided that we would not be adding any additional features to this module; it was good enough for our customers for the foreseeable future. For this area of the system we only added 1 or 2 happy path tests (see the sketch just after this list), and focused our tests on other areas of the system that we continued to update.
  2. On my current team, it was decided around 9 months ago that we would be sunsetting our current UI project in favor of rebuilding our front end from a template and creating everything via shared components. This meant we decided not to invest in adding more coverage to the old project. We only had around 18 checks; 6 or so of those were around login and the others were critical paths of the system.
  3. Now that the new UI has been built out, our strategy wasn’t to go immediately ‘automate all the things’. We decided to take a slower approach: rather than stay in step with the developers (where things were changing very often), we decided to lag 1-3+ months behind their development. As areas of the system became stable and we started onboarding customers to the new UI, we began to add automation. This was really helpful for us, as it allowed the devs to work out what was needed for different components without us going crazy trying to automate every little change. During this time we focused on adding more API automation coverage and exploring the changes the devs made, giving them quick feedback along the way.
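
To make example 1 concrete, here’s roughly what one of those happy path checks could look like - a simplified sketch in Python with Playwright’s sync API, where the module name, URL, and selectors are all made up rather than the real ones:

```python
# Hypothetical single happy-path check for a feature-frozen "reports" module.
# One broad end-to-end pass is kept; effort goes to the actively changing areas.
from playwright.sync_api import sync_playwright

def test_reports_happy_path():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://app.example.com/reports")  # hypothetical URL
        page.fill("#report-name", "Monthly summary")
        page.click("text=Generate")
        # Waits for the success message; raises (and fails the test) if it
        # never appears. One coarse assertion is enough for a frozen module.
        page.wait_for_selector("text=Report ready")
        browser.close()
```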

Following up from my examples, one big thing I try to understand is whether the product will have major or minor changes in the near future. If major, I may hold off on adding a lot of automation, as it will only deliver short-term value and will more than likely have to be heavily refactored or thrown away.

A followup question for others:

  • How often is the current or updated automation strategy shared with the developers/product team and how/where do you communicate this?
2 Likes

That alone is very interesting - the different levels of automation. I had nothing in particular in mind. Open to where the conversation takes us.

I’d be interested to hear a bit more detail about the first two from your perspective.

I’d be interested to hear a bit more detail about the first two from your perspective.

In my role, I consider all levels and areas of automation strategy. And writing a book about it has helped, and is still helping, me (not done yet!) to get things clearer for myself and to explain them more clearly to others than I could before.

Now certain topics (especially those of a more operational nature) will usually be addressed to some extent: service virtualization, test data needs, a suitable tool, and using design patterns such as page objects (a sketch of that pattern follows at the end of this post). But there are two things that I stress because they are not only fundamental but very often overlooked (applicable to all three levels in my previous post):

  1. The value of the automation is going to be mediocre at best without the right goal:
  • No (clear) goal gives the Nike approach to automation: Just do it. It will probably have some value, but not much: A team or individual cannot aim for high value from automation if it is unclear what constitutes high value.
  • A poor goal like 100% regression test coverage may seem useful but is not. You have something to aim for, that is helpful in a way, but again the value will be limited because it does not help make decisions well. There generally is little alignment with business objectives and thus little business value. (This becomes quite clear if you consider how the business would respond to the goal being achieved. Will they party deep into the night or just glaze over because they see no relation to what is important to them?)
  • A business-oriented goal ensures that the business understands what automation in testing is aiming for, and thus alignment with them. It also enables making good strategic, tactical and operational decisions regarding automation.

I know only five truly business-oriented main goals for automation. And each of them should explicitly be combined with (technical) sustainability as a goal, because leaving sustainability out often leads to automation that is not sustainable.

  2. Without an overview of the field of automation in testing, how will you know if all the important topics have been addressed? While it is not always necessarily a good idea to cover each topic in depth from the start, it is always good to determine what you want to consider, and when, without missing anything.

For this purpose, in my work I use the overview of the field that presented itself to me during my writing. If you do it all at the start, you effectively create the vision that will achieve the goal. The actions to get there can then be planned in the usual way: roadmap, backlog, …
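
As an aside, since page objects came up above: here is a minimal sketch of the pattern, assuming Python with the pytest-playwright plugin (which provides the page fixture); the class, URL, and selectors are illustrative only.

```python
# Minimal page-object sketch: the test talks to an object that models the
# page, so selectors live in one place and tests survive UI changes better.
from playwright.sync_api import Page

class LoginPage:
    def __init__(self, page: Page):
        self.page = page

    def open(self):
        self.page.goto("https://app.example.com/login")  # hypothetical URL

    def log_in(self, user: str, password: str):
        self.page.fill("#username", user)
        self.page.fill("#password", password)
        self.page.click("button[type=submit]")

def test_login(page: Page):  # `page` fixture comes from pytest-playwright
    login = LoginPage(page)
    login.open()
    login.log_in("demo", "not-a-real-password")
    page.wait_for_selector("text=Dashboard")  # hypothetical landing marker
```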

Your thoughts?

1 Like

I need to add a few points:

  1. Flaky automated test cases that drain the automation team’s time (one common mitigation is sketched below)
  2. Test environments not being ready
  3. In-depth test validations not being available
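
On the flaky point, one common mitigation is to quarantine known-flaky checks so they keep running without failing the build while the root cause is investigated. A minimal sketch with pytest (the test name and reason are made up):

```python
import pytest

# Quarantined checks still run, but an intermittent failure no longer
# fails the build; strict=False also lets them pass when they succeed.
flaky_quarantine = pytest.mark.xfail(
    reason="Quarantined: intermittent timeout, tracked for a proper fix",
    strict=False,
)

@flaky_quarantine
def test_search_results_load():
    ...  # the intermittently failing check goes here
```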
1 Like

Out of interest, what do you see these 5 to be?

2 Likes

@theology Thanks for sharing your points. How should these affect an automation strategy?

@testerfromliverpool

Out of interest, what do you see these 5 to be?

They are hardly rocket science, actually:

  • Cost (not so often any more)
  • Quality
  • Three flavours of speed:
    • Efficiency: Getting a single change (feature or fix) to production ASAP
    • Productivity: Getting the most work done in a fixed amount of time
    • Fast feedback: How quickly a dev knows they can pick up the next task/story (in Agile/DevOps)

Most of the goals that are usually mentioned can be objectives under one of these rather than the goal. But the above goals are much more meaningful to the business. They are also better guides when making decisions. This includes knowing where to stop or when to adjust some part of the strategy (including the goal itself).

Recently I have been thinking that, with the shortages on the job market, tester happiness, to keep testers from leaving or even to attract new ones, could be a sixth one. The business cannot deliver without staff to do the work, of course. Still, that seems more of a resource issue that would be a factor under one (or more) of the five above …

2 Likes

I’m glad you mentioned that “6th” - I agree with it, and it resonates with me.

Being honest, in my workplace, I’m not certain the automation (certainly the UI level automation anyway) could be fully justified. We have slower release cycles and regression testing doesn’t take a huge amount of time.

In 2 projects it is a core part of the testing approach, but on others we do it as a bit of a “side project”, partly to give the testers the opportunity. Also, regression testing is the boring stuff, so if we can cover some of it with the automated checks then great.

1 Like

@testerfromliverpool

I am thinking two things after reading your post:

  • Automation to give the testers something fun to do is nice. But I do hope (and expect) there is also business value to be had there. Keeping testers happy is rarely the main goal for automation, which is what my post was about, but there may be situations where it is. I just have not come across any so far; when I am brought in it is always because the automation needs to support a clear business objective - either quality or speed. Would love to hear from someone who has seen this!
  • Automation not being very effective because it is not done ‘properly’ is, unfortunately, rather common. (Its only advantage being that it provides consultants like me with plenty of work …) Making it effective (and sustainable) is what the automation strategy is for, if it is to serve business objectives. That is what the two steps I mentioned in an earlier post were about. And automation mainly through the UI is a great example of not following those steps - which happens A LOT. Few people would work that way for the SUT, but for automation it is fine even though it is also code? Not taking it seriously enough has caused plenty of headaches. Which is one reason why a community such as MoT and this forum are so important: to inform us better so we can do better.
1 Like

We review our automation status about once a month, and there is a bi-weekly meeting to discuss progress and blockers. As the product is going into maintenance mode there have been changes to the strategy, like shifting work from expanding the tests to making the existing ones stable.

I would like to add another factor that has affected our automation strategy. In the team there was a bias against automation due to a previous failed project, so our strategy had to take that into account (even though the team had no say in whether it was happening). We chose to start with a set of spikes to show that it would work. There was also a focus on easy wins over important tests. Once we had support, we could tackle the hard stuff.

3 Likes

@sles12 Hi Anna.

Reviewing the automation status regularly is very practical, also for cleaning up old stuff. That the automation strategy was adjusted with the stage in the SUT lifecycle is also great to read.

The human factor is important in many areas of the strategy. Previous experiences and biases of the team and other stakeholders are one side of it. Another is culture in general: A blame culture is not helpful for trying new things, for example. Neither is a PO who pushes an immature team to be a feature factory instead of working on technical debt (stories). And when choosing a tool and programming language, the available automation skill should be considered, including that of any developers who can either automate themselves or coach the automators. Working on the strategy is fun!

Taking small steps is useful in many situations. The effort for each step is small but there is always a tangible result and then a decision regarding the next step. I am a great fan of this approach.

I love your icon/avatar choice @sles12 . I’ve been reading some of your previous, equally insightful inputs - please keep these coming. I really like the idea of stepping back monthly and reviewing the status; not from a meetings-are-fun perspective, but from the perspective of getting a plan of record in place that actually delivers. What roles are typically in that meeting?

2 Likes