What to automate?

(Heather) #1

@paulmaxwellwalters posted his reply in Testers.chat over the weekend to the recent Twitter poll by Trish Khoo.

I responded to him by saying:
“Personally I think knowing when automation should/shouldn’t be applied is as if not more valuable than knowing how to automate. Many automators don’t know that which is also damaging”

On the train on the way home I realized that I may fall into my own group of not knowing what to automate.

I thought to myself: if I were asked in an interview what I would automate, I’d probably respond with:

  • Unit tests
  • Checks at the API layer
  • Performance
  • A small amount of basic UI checks

I’m not really happy with that. It’s not a strong answer. What do I mean by basic? I’m surely missing things. I’m aware of the pyramid and ice cream cone images for automation but I still always feel like I’m missing something from them.

In my new job I’ve started with an approach of automating nothing until I figure out the product and the risks involved. I don’t see automation as something to do for the sake of saying “hey look, we’ve an automation suite”. I think this is a good approach so far, but I’m annoyed at myself. I want a better answer for what I would automate and why.

Can you help me? What would you automate and why?

30 Days of Testing, Day 24 & 25: What can & can't be automated
(Daniel) #2

You are on the mark with how to approach automated checking - tackle it from different levels (the pyramid); focusing on only certain parts of the pyramid can let errors go undetected.

Deciding what to automate should also consider other aspects:

  • Risk to the business - look for tests/features that would cause the most pain if they failed.
  • Coverage - choose the features where automation would give the greatest increase in coverage. Sometimes automation is the only way to achieve a result (e.g. some performance tests).
  • Effort decrease - tests that require lots of manual effort due to complexity or repetition; targeting these can potentially result in a greater return on investment.
  • Re-use - choose carefully based on how frequently the automation scripts would need to be modified to keep them in alignment with the application under test.
  • Break frequency - target features that have failed frequently in the past or are most likely to break in the future (experimental technologies, custom code based on out-of-the-box classes).

In terms of knowing when to apply automation, be realistic about its ability to deliver value:

  • Is it a new tool (greenfield) that may need to be experimented with first to determine its suitability to the application under test?
  • Does the tool help achieve testing’s objectives?

(David Shute) #3

My biggest problem with “a small amount of UI checks” is that the things that are easy and obvious to automate end up being the things that are implicitly checked every time someone performs basic manual testing.

How many UI automation suites begin with a login check?

Common UI checks, in my experience, are the things that are frequently used and easily surfaced. These things end up being individual pieces that are typically integral to the standard work paths.

The valuable paths in UI automation always seem to be the corner cases, which then lose value due to the amount of work required to script out those functionalities and keep them up to date along with the product.

I have a lot of difficulty justifying any UI automation efforts. Perhaps if I had dedicated experts who were very effective at writing UI automation I might feel differently. I don’t have anyone working with me that has that level of technical skill.

(Chris) #4

The bulleted list is a good start for the types of tests to consider.

I’d also add load in addition to performance, and, where possible, integration.

I’m currently testing an order processing system which exposes a series of services. I primarily use SoapUI for functional and performance tests. I also have suites that test frequent scenarios, e.g. stock check, ordering, product from the right warehouse, etc.

The org I am working at uses an ESB, so consuming apps cannot directly call the API I work on. Instead, consuming apps go through the ESB, which acts like a thin wrapper around ‘our’ API. With that, I also run the same tests through the ESB, though the implementation is different.

With the tests at both layers, I can use performance tests to check the speed of our API and then the overhead that going through the ESB incurs. I can use the perf results as a baseline when moving on to load testing.
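
For illustration, a minimal sketch of running the same functional check against both layers - pytest/requests standing in for SoapUI here, with the URLs, endpoint, and fields invented rather than taken from the real suite:

```python
# The same functional check run against the direct API and the ESB wrapper.
# URLs, paths, and fields are hypothetical placeholders.
import pytest
import requests

BASE_URLS = {
    "direct": "https://orders-api.internal.example.com",   # 'our' API
    "esb": "https://esb.internal.example.com/orders-api",  # thin ESB wrapper
}

@pytest.mark.parametrize("layer", BASE_URLS)
def test_stock_check(layer):
    resp = requests.get(f"{BASE_URLS[layer]}/v1/stock",
                        params={"sku": "ABC-123"}, timeout=10)
    assert resp.status_code == 200
    assert resp.json()["sku"] == "ABC-123"
    # elapsed gives a crude per-layer timing; the difference between the
    # "direct" and "esb" runs approximates the overhead the ESB incurs.
    print(f"{layer}: {resp.elapsed.total_seconds():.3f}s")
```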

(conrad) #5

A bit late to this thread, but as an automator moving from system-level testing to the component level, I have found it the more satisfying. So which school of testing do I belong to? @heather_reid, I believe that by looking at risk when automating, you are going for the high-value stuff. I call that the risk-based strategy school. For me, UI automation is on the opposite side of the automation scales: the risk that I spend more time debugging UI automation than actually testing is so high that we leave it till the very last too.

  • Unit tests => the developers should write and run these anyway while coding; not normally the “testers’” job.
  • API checks => highest value for me here.
  • Performance => needs a specific mindset - try to separate it completely into a timebox or role of its own.
  • Basic UI checks => very basic is best; limit it to one or two checks - login and, more importantly, logout (you are doing data security, aren’t you?).
    Normally, if your app is broken it’s not going to get past the login/logout properly, while if the UI is broken, the more UI tests you write, the more pain you induce. Catch this at the API level, because in reality your application and your customers are consuming the API. More and more apps these days are API-based in reality. (A rough sketch follows.)
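
As a rough sketch of what “catch it at the API level” can look like - endpoints and field names invented, requests assumed as the HTTP client:

```python
# Login and, more importantly, logout checked at the API level instead of
# through the UI. All endpoints and fields are illustrative only.
import requests

BASE = "https://app.example.com/api"

def check_login_logout(username: str, password: str) -> None:
    session = requests.Session()

    # Login should succeed and establish a session.
    resp = session.post(f"{BASE}/login",
                        json={"username": username, "password": password},
                        timeout=10)
    assert resp.status_code == 200, "login failed"

    # Logout is the data-security half of the check.
    resp = session.post(f"{BASE}/logout", timeout=10)
    assert resp.status_code == 200, "logout failed"

    # A protected resource must now reject the stale session.
    resp = session.get(f"{BASE}/me", timeout=10)
    assert resp.status_code in (401, 403), "session still alive after logout"

if __name__ == "__main__":
    check_login_logout("test-user", "not-a-real-password")
```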

“Know what not to automate”.

Adding that to my quotes library.

(Chris) #6

“Know what not to automate” - that’s a really good point.

(Juan) #7

Nice list!

I would invest most effort in unit testing. It should be done by the developers, but we, the testers, should discuss with the development team what to automate and how to do it. Many, many times the development team has no idea about it, so we need to coach, teach, and evangelize the development team.

  • One example: I’m working in a Mesos environment where many applications (called frameworks) need to “speak” with Mesos (a computer cluster “OS”). Many of the unit tests to automate would need to mock the Mesos API calls, and this would be difficult and very, very costly (in time and effort). So I prefer to avoid the mocking and “convert” these unit tests into integration tests: we don’t mock the Mesos API calls and use a real Mesos environment instead. I know the coverage would be affected, but we can’t invest more time and effort in creating the tests than the code! (A sketch of this trade-off follows the list.)

  • Another example: in this environment we use the Akka framework. In some scenarios the Scala functions are “just” passing messages between themselves, so I choose not to automate these scenarios. Again the coverage would be affected, but why should we check functionality that is provided by a third-party product (in this case, the message-passing functionality provided by the Akka framework)?
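
To make the trade-off concrete, a small sketch with invented names throughout - the real Mesos API surface is far larger, which is exactly why hand-mocking it is so costly:

```python
# Mocked unit test vs. real-environment integration test for a hypothetical
# Mesos framework. Module, class, and function names are all invented.
import os
from unittest import mock

import pytest

from myframework import scheduler  # the framework under test (hypothetical)

def test_register_with_mocked_mesos():
    # Unit-test style: every Mesos call the scheduler makes must be mocked
    # by hand - cheap here, but costly across the full API surface.
    with mock.patch("myframework.scheduler.MesosClient") as MesosClient:
        MesosClient.return_value.register.return_value = "fw-1"
        assert scheduler.start() == "fw-1"

@pytest.mark.skipif("MESOS_MASTER" not in os.environ,
                    reason="needs a real Mesos master")
def test_register_against_real_mesos():
    # Integration style: no mocks - talk to a real Mesos environment.
    assert scheduler.start(master=os.environ["MESOS_MASTER"])
```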

There is no time to automate everything, so the testers should point out the functionality to automate, focusing on the product characteristics that bring the most value to the client.

  • The API layer is usually a perfect target to automate, especially “REST” APIs.

  • UI testing, as has been said, should be reduced to the bare minimum (the interfaces have a tendency to change with every build and break the automation).

  • And performance… it’s a completely different world. It should be automated, but it has nothing to do with the automation process used in unit/integration/acceptance testing. It’s a completely different beast!

What do you think?

(Steve) #8

I am also late to the thread but it’s an interesting point.

It’s not a black-and-white decision, as a lot of thinking needs to be applied to determine what is appropriate to automate in that particular instance. That in itself is a skill, which I believe is overlooked.
A great interview question would be to ask what tests a candidate has decided not to automate, and find out their thought processes.

The decision points should be:

  • Where to focus the effort - front-end UI, back-end API layer, etc.
  • The types of tests needed - functional, non-functional.
  • The coverage needed - smoke tests for the build, full regression for overnight runs, etc. (see the sketch after this list).
  • Defining the areas of greatest business risk.
  • Understanding the complexity of the tests and working out the ROI.
  • Deciding on the tools to use.
  • Reporting on the test results.
  • Ownership of the tests - is it just the tester or a whole team responsibility?
  • How much time is there to spend on automating?
  • And what about ongoing maintenance?
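
As one possible way to encode the coverage decision - my example, not part of Steve’s list, and assuming pytest - markers can tag each test with a tier:

```python
# Tag each test with a tier and let the pipeline choose what to run.
import pytest

@pytest.mark.smoke
def test_service_is_up():
    ...  # seconds-fast check, run on every build

@pytest.mark.regression
def test_bulk_discount_rules():
    ...  # slow, thorough check, reserved for the overnight run

# Register the markers in pytest.ini, then select a tier per run:
#   pytest -m smoke        -> per-build smoke pass
#   pytest -m regression   -> overnight regression pass
```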

We seem to be in such a hurry and under pressure to just automate everything, but that is damaging and wasteful. For example, why would you automate a manual test that takes 1 minute to do if it took 30 minutes to code and was a low-priority scenario that wouldn’t be executed that often?
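
Those numbers imply a simple break-even calculation, sketched here (the maintenance parameter is my addition, not from the post):

```python
# Back-of-envelope ROI: a 30-minute scripting cost against 1 minute saved
# per run breaks even only after 30 runs - before counting maintenance,
# which pushes break-even further out.
def breakeven_runs(scripting_minutes: float,
                   minutes_saved_per_run: float,
                   maintenance_minutes_per_run: float = 0.0) -> float:
    net = minutes_saved_per_run - maintenance_minutes_per_run
    if net <= 0:
        return float("inf")  # the script never pays for itself
    return scripting_minutes / net

print(breakeven_runs(30, 1))        # 30.0 runs to break even
print(breakeven_runs(30, 1, 0.5))   # 60.0 runs with light maintenance
```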

I don’t think we can come up with a defined list of what you should or should not automate, but we can come up with a set of useful questions and decision points to follow.

(conrad) #9

You can also look at it from a schools perspective: have different people look at three different angles - (1) risk-of-fire, (2) regression, and lastly (3) performance.
I would obviously look at regression strategies as the long-term solution, but also as a bucket to catch things when it all goes pear-shaped. (1) You want a person looking at the most risky areas and devising a test that covers only those in an integration test (a smoke test). It must only test the highest-priority functionality, while still bringing every single integration and interface into play. Do not put anything in the smoke test suite that a salesman cannot show you in the first 5 minutes. This will prevent fires breaking out by detecting integration faults early and decisively. It needs to be designed to save time, and to be the oracle for the health of your development process by being very quick to blow up after bad code gets dropped. Your fire-prevention tests need to be easy (low cost) to maintain, because they should almost never change; they are your baseline to go back to when in a hurry, or when looking for the long view over the year.

(2) Integration tests become very expensive the longer they run and the more mature they become, so steer back to something that mocks the interfaces so that you can build a component test suite to cover you for regressions. This will perhaps involve writing a generator that builds some of the mock layer. I would look at all the advice you have been given here, like Steve’s above, and fold that into a regression testing strategy that tries to test everything. This is where you can play and change approach often; regression testing has no silver-bullet recipe - change the tools often, change priorities. It will be your biggest time sink, so come up with a way to calculate ROI on every bit of automation. If you are regression testing only in integration, developers will find ways to blame other components for bugs. The trap of regression testing with a full stack not only slows testing and the feedback loop, but promotes bug tennis.
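
A minimal illustration of the mocked-interface idea, with an invented component and interface:

```python
# Stub the downstream interface so the component is regression-tested on its
# own, without the full stack - and without the bug tennis. All names are
# hypothetical.
from unittest import mock

from billing import invoicer  # hypothetical component under test

def test_invoice_total_without_real_pricing_service():
    pricing = mock.Mock()
    pricing.price_for.return_value = 10.0  # canned reply from the mock layer

    total = invoicer.total(["sku-1", "sku-2"], pricing_service=pricing)

    assert total == 20.0  # a failure here points at this component alone
```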

(3) Performance (and scale): this is not the same as regression testing, and probably needs a completely separate team or person on it.

Use these three as tiers in the assembly line.

(Dan) #10

Knowing what not to automate (as in a choice of what you won’t automate) and knowing what you can’t automate (as in it’s impossible to automate) are two different things. You should ask both of these questions…

Knowing what you can’t automate should be easy to answer: you can’t automatically assert anything that you don’t have an expectation for… For that, you need investigation through exploratory testing. This can cover risks, and other perspectives on properties and variables that are unknown or that we are unaware of (at the moment).
I’m still surprised how many people still struggle to understand this - and I’m talking about people who are skilled in writing automation scripts… It goes back to your point of knowing the theory behind automation being more valuable than knowing how to automate.

As for the other question, knowing what you shouldn’t automate, this is a much harder question to answer, as it completely depends on the context in which the question is being asked. The expectations around risks and quality criteria from the stakeholders all play a part in forming the context for being able to answer this question.

(Heather) #11

A recent tweet from @bas reminded me of this post:

(Alastair) #12

Bas is really good in this area - focusing on automation strategy rather than the tools etc.

I recently attended a Meetup where Lee Crossley spoke - he suggested automating the low-hanging fruit. If you’re wanting to start web service automation, start off with a ping/single-user test so you can easily identify when your APIs are down.

If you’re starting with performance testing, do a single-user load test to ensure your expected response times are met.
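
A low-hanging-fruit ping check in that spirit might look like this (URL and threshold are placeholders):

```python
# One request that tells you your API is up and answering within the
# expected time - the simplest possible availability/response-time check.
import requests

def ping(url: str, max_seconds: float = 2.0) -> None:
    resp = requests.get(url, timeout=max_seconds)
    resp.raise_for_status()  # fails fast if the API is down
    elapsed = resp.elapsed.total_seconds()
    assert elapsed <= max_seconds, f"too slow: {elapsed:.2f}s"

if __name__ == "__main__":
    ping("https://api.example.com/health")
```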

(Heather) #13

A recent blog post from @katrinaclokie on this very topic :slight_smile: