The bulleted list is a good start for the types of tests to consider.
I’d also add load testing in addition to performance and, where possible, integration testing.
I’m currently testing an order processing system which exposes a series of services. I primarily use SoapUI for functional and performance tests. I also have suites that test frequent scenarios, e.g. stock check, ordering, picking the product from the right warehouse, etc.
The org I am working at uses an ESB, so consuming apps cannot directly call the API I work on. Instead, consuming apps go through the ESB, which acts like a thin wrapper around ‘our’ API. With that, I also run the same tests through the ESB, though the implementation is different.
With the tests at both layers, I can use performance tests to check the speed of our API and the overhead that going through the ESB incurs. I can use the perf results as a baseline when moving on to load testing.
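To make the idea concrete, here is a rough Python sketch of that comparison. My actual suites are in SoapUI; the URLs and payload below are placeholders, and a real tool would take far more samples:

```python
import time

import requests

# Hypothetical endpoints and payload, for illustration only.
DIRECT_URL = "https://orders.internal.example.com/api/stock-check"
ESB_URL = "https://esb.example.com/orders/stock-check"
PAYLOAD = {"sku": "ABC-123", "quantity": 5}


def timed_call(url, payload, runs=20):
    """Return the average response time in seconds over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        response = requests.post(url, json=payload, timeout=10)
        response.raise_for_status()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


if __name__ == "__main__":
    direct = timed_call(DIRECT_URL, PAYLOAD)
    via_esb = timed_call(ESB_URL, PAYLOAD)
    print(f"Direct API  : {direct:.3f}s")
    print(f"Via ESB     : {via_esb:.3f}s")
    print(f"ESB overhead: {via_esb - direct:.3f}s")
```

The difference between the two averages is the baseline ESB overhead to keep an eye on before moving on to proper load testing.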
A bit late to this thread, but as an automator moving from system-level testing to the component level, I have found it more satisfying. So which school of testing do I belong to? @heather_reid, I believe that by looking at risk when automating, you are going for the high-value stuff. I call that the risk-based strategy school. For me, UI automation is on the opposite side of the automation scales: the risk that I spend more time debugging UI automation than actually testing is so high that we leave it till the very last too.
Unit tests => the developers should write and run these anyway while coding; not normally the testers’ job.
API Checks => Highest value for me here
Performance => Needs a specific mindset - try to separate it completely into a timebox or role of its own
Basic UI checks => Very basic is best; limit to one or two checks, login and, more importantly, logout (you are doing data security, aren’t you?)
Normally, if your app is broken, it’s not going to get past the logout properly; and if the UI is broken, the more UI tests you write, the more pain you induce. Catch this at the API level, because in reality your application and your customers are consuming the API. More and more apps these days are API-based.
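As one illustration of an API-level check along those lines, here is a minimal pytest sketch; the base URL and the /login, /orders and /logout routes are invented for the example and will differ in a real application:

```python
import requests

BASE_URL = "https://app.example.com/api"  # placeholder base URL


def test_login_and_logout_at_api_level():
    # Authenticate at the API layer rather than through the UI.
    session = requests.Session()
    login = session.post(
        f"{BASE_URL}/login",
        json={"user": "test", "password": "secret"},
        timeout=10,
    )
    assert login.status_code == 200

    # A protected resource should be reachable while logged in.
    assert session.get(f"{BASE_URL}/orders", timeout=10).status_code == 200

    # Log out, then confirm the session is actually invalidated --
    # the data-security point: logout must really log you out.
    assert session.post(f"{BASE_URL}/logout", timeout=10).status_code == 200
    assert session.get(f"{BASE_URL}/orders", timeout=10).status_code in (401, 403)
```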
I would invest most effort in unit testing. It should be done by the developers, but we, the testers, should discuss with the development team what to automate and how to do it. Many, many times the development team has no idea about it, so we need to coach, teach, evangelize, etc. the development team.
One example: I’m working in a Mesos environment where many applications (called frameworks) need to “speak” with Mesos (a computer cluster “OS”). Many of the unit tests to automate would need to mock the Mesos API calls, and this would be difficult and very, very costly (in time and effort). So I prefer to avoid the mocking and “convert” these unit tests to integration tests, so we don’t mock the Mesos API calls and use a real Mesos environment instead. I know the coverage would be affected, but we can’t invest more time and effort in creating the tests than the code!
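A minimal sketch of that trade-off, assuming a reachable Mesos master whose address comes from a MESOS_MASTER environment variable; the /master/state endpoint and the framework name are illustrative and may differ by setup and Mesos version:

```python
import os

import pytest
import requests

# Integration test against a real Mesos master rather than a mocked API.
# MESOS_MASTER is an assumed environment variable for this sketch.
MESOS_MASTER = os.environ.get("MESOS_MASTER", "http://mesos-master.example.com:5050")


@pytest.mark.integration
def test_framework_is_registered_with_real_mesos():
    # Ask the real master for its state instead of mocking the call.
    state = requests.get(f"{MESOS_MASTER}/master/state", timeout=10)
    assert state.status_code == 200

    # Hypothetical assertion: our framework should appear in the
    # master's list of registered frameworks.
    frameworks = [f["name"] for f in state.json().get("frameworks", [])]
    assert "our-framework" in frameworks
```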
Another example: in this environment we use the Akka framework. In some scenarios, the Scala functions are “just” passing messages between them, so I chose not to automate these scenarios. Again the coverage would be affected, but why should we check functionality that is provided by a third-party product (in this case, the message-passing functionality provided by the Akka framework)?
There is no time to automate everything, so the testers should point out the functionality to automate, focusing on the product characteristics that bring the most value to the client.
The API layer is usually a perfect target to automate, especially REST APIs.
UI testing, as has been said, should be reduced to the bare minimum (interfaces have a tendency to change with every build and break the automation).
And performance… it’s a completely different world. It should be automated, but it has nothing to do with the automation process used in unit/integration/acceptance testing. It’s a completely different beast!
I am also late to the thread but it’s an interesting point.
It’s not a black and white decision as a lot of thinking needs to be applied to determine what is appropriate to automate in that particular instance. That in itself is a skill, which I believe is overlooked.
A great interview question would be to ask what tests a candidate has decided not to automate, and find out their thought processes.
The decision points should be:
Where to focus the effort - front end UI, back end API layer etc.
The types of tests needed - functional, non-functional.
The coverage needed - smoke tests for build, full regression for overnight runs etc
Defining the areas of greatest business risk.
Understanding the complexity of the tests and working out the ROI.
Deciding on the tools to use.
Reporting on the test results.
Ownership of the tests - is it just the tester or a whole team responsibility?
How much time is there to spend on automating?
And what about ongoing maintenance?
We seem to be in such a hurry and under pressure to just automate everything, but that is damaging and wasteful. For example, why would you automate a manual test that took 1 minute to do if it took 30 minutes to code and was a low-priority scenario that wouldn’t be executed that often?
I don’t think we can come up with a defined list of what you should or should not automate, but we can come up with a set of useful questions and decision points to follow.
You can also look at it from a schools perspective: have different people look at three different angles, (1) risk-of-fire, (2) regression, and lastly (3) performance.
I would obviously look at regression strategies as the long-term solution, but as a bucket to catch things when it all goes pear-shaped, (1) you want a person looking at the most risky areas and devising a test that covers only those in an integration test (smoke test). It must only test the highest-priority functionality, while still bringing every single integration and interface into play. Do not test anything that a salesman cannot show you in the first 5 minutes for the smoke test suite. This will prevent fires breaking out by detecting integration faults early and decisively. It needs to be designed to save time, and to be the oracle for the health of your development process by being very quick to blow up after bad code gets dropped. Your fire-prevention test needs to be easy (low cost) to maintain, because it should almost never change; it’s your baseline to go back to when in a hurry, or when looking for the long view over the year.
(2) Integration tests become very expensive the longer they run and the more mature they become, so steer back to something to mock the interfaces so that you can build a component test suite to cover you for regressions. This will perhaps involve writing a generator that builds some of the mock layer. I would look at all the advice you have been given here, like Steve’s above, and fold that into a regression testing strategy that tries to test everything. This is where you can play and change approach often. Regression testing has no silver-bullet recipe; change the tools often, change priorities. This will be your biggest time sink, so come up with a way to calculate ROI on every bit of automation. If you regression test only in integration, developers will find ways to blame other components for bugs. The trap of regression testing with a full stack not only slows testing and the feedback loop, but promotes bug tennis.
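To illustrate mocking the interfaces at the component level, here is a small Python sketch using unittest.mock; the quote_total function and the pricing URL are invented purely to show a downstream dependency being stubbed out:

```python
from unittest.mock import patch

import requests


# A minimal component under test: it calls a downstream pricing service.
# The function and URL are invented for illustration.
def quote_total(sku, quantity):
    response = requests.get(f"https://pricing.example.com/price/{sku}", timeout=5)
    response.raise_for_status()
    return response.json()["unit_price"] * quantity


def test_quote_total_with_mocked_pricing_service():
    # Stub the downstream interface so the component can be regression
    # tested in isolation -- no full stack, no bug tennis.
    with patch("requests.get") as fake_get:
        fake_get.return_value.status_code = 200
        fake_get.return_value.json.return_value = {"unit_price": 2.50}

        assert quote_total("ABC-123", quantity=4) == 10.0
```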
(3) Performance (and scale): this is not the same as regression testing, and probably needs a completely separate team or person on it.
Knowing what not to automate (as in a choice of what you won’t automate), and knowing what you can’t automate (as in it’s impossible to automate) are two different things. You should ask both of these questions…
Knowing what you can’t automate should be easy to answer: you can’t automatically assert anything that you don’t have an expectation for. For that, you need investigation through exploratory testing. This can be risks, or other perspectives on properties and variables that are unknown or that we are unaware of (at the moment).
I’m still surprised how many people still struggle to understand this - and I’m talking about people who are skilled in writing automation scripts… It goes back to your point of knowing the theory behind automation being more valuable than knowing how to automate.
The other question, knowing what you shouldn’t automate, is much harder to answer, as it completely depends on the context in which the question is being asked. The expectations around risks and quality criteria from the stakeholders all play a part in forming the context for being able to answer this question.
Bas is really good in this area - focusing on automation strategy rather than the tools etc.
I recently attended a Meetup where Lee Crossley spoke - he suggested automating the low-hanging fruit. If you want to start with web service automation, start off with a ping/single-user test so you can easily identify when your APIs are down.
If you’re starting with performance testing, do a single-user load test to ensure your expected response times are met.
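A minimal Python sketch of both of those starting points, assuming a hypothetical /health endpoint and an arbitrary two-second response-time budget:

```python
import time

import requests

API_URL = "https://api.example.com/health"  # placeholder health endpoint


def test_api_is_up():
    # The "ping" check: one request quickly tells you when the API is down.
    assert requests.get(API_URL, timeout=5).status_code == 200


def test_single_user_response_time():
    # Single-user baseline: the 2-second budget is an assumption,
    # not a universal target.
    start = time.perf_counter()
    response = requests.get(API_URL, timeout=5)
    elapsed = time.perf_counter() - start

    assert response.status_code == 200
    assert elapsed < 2.0
```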
Last week, I watched a webinar called “Mastering Test Automation: How to Use Selenium Successfully” (also available on YouTube). It is an hour-long video, but I was taking notes. According to the video, the first step is to define a test strategy. There are four questions to ask:
How does your business make money? If not in $$, what features generate the most value for your users?
What features of your applications are actually being used?
What browsers are your users using to visit the application/web page under test?
What things have broken in the application before? How do you find out?
Defect tracking (issue tickets)
General conversation with the developer.
Software releases
The outcome would be a decision on which features will be tested and which browsers to care about.
Only test critical features, not everything.
Day 3 of #30daysoftesting
What to automate?
The automation requirements define what needs to be automated, looking at various aspects. The specific requirements can vary based on product, time and situation, but still I am trying to sum up a few generic tips.
Test cases to be automated
Tests that need to be run against every build/release of the application, such as smoke test, sanity test and regression test.
Tests that utilize the same workflow but different data for their inputs on each test run (data-driven and boundary tests); see the sketch after this list.
Tests that need to gather multiple information during run time, like SQL queries and low-level application attributes.
Tests that can be used for performance testing, like stress and load tests
Tests that take a long time to perform and may need to be run during breaks or overnight. Automating these tests maximizes the use of time.
Tests that involve inputting large volumes of data.
Tests that need to run against multiple configurations — different OS & Browser combinations.
Tests during which images must be captured to prove that the application behaved as expected.
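As an illustration of the data-driven case mentioned above, here is a small pytest sketch; the is_valid_order_quantity helper and its boundaries are invented for the example, and in practice the call would go into the application (e.g. via its API) rather than a local helper:

```python
import pytest


# Hypothetical function under test, standing in for a real application call.
def is_valid_order_quantity(quantity):
    return 1 <= quantity <= 100


@pytest.mark.parametrize(
    "quantity, expected",
    [
        (0, False),    # below the lower boundary
        (1, True),     # lower boundary
        (100, True),   # upper boundary
        (101, False),  # above the upper boundary
    ],
)
def test_order_quantity_boundaries(quantity, expected):
    # Same workflow, different data on each run -- the data-driven case.
    assert is_valid_order_quantity(quantity) == expected
```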
Since I’m new to this, I’m going to quote someone wiser than me who I agree with. I have been reading Alan Page’s book, which is more “automation philosophy” than a technical book. But I really like what he says here, because it jives with how I got interested in automation - not just automated testing, but all-around task automation:
Good testers test first – or at the very least they think of tests first. I think great testers (or at least the testers I consider great) first think about how they’re going to approach a testing problem, then figure out what’s suitable for automation, and what’s not suitable.
…
I have my own heuristic for figuring this out – I call it the “I’m Bored” heuristic. I don’t like to be bored, so when I get bored, I automate what I’m doing. When I’m designing tests, I try to be more proactive and predict where I’ll get bored and automate those tasks.
You’re a tester first, an automated tester second. This makes sense to me.
I also get bored easily, and I don’t like wasting time. I don’t even have to COMPLETELY automate anything. I’m working on an AutoIt script now for something that is connected to a physical device that requires user interaction, but which requires many, many repetitive keystrokes at the computer. I can’t automate the whole thing (without building some kind of robot… it’s possible), but the parts I can automate, I will automate, and I’ll get the job done a lot faster and with fewer mistakes.
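For illustration only, here is what that partial-automation idea might look like in Python with pyautogui rather than AutoIt; the serial numbers, pauses and keystrokes are invented, and the operator still handles the physical device between iterations:

```python
import time

import pyautogui

# Invented data for the sketch: one entry per device the operator connects.
SERIAL_NUMBERS = ["SN-0001", "SN-0002", "SN-0003"]

pyautogui.PAUSE = 0.2  # small delay between simulated keystrokes

for serial in SERIAL_NUMBERS:
    # The human part stays manual: plug in the device, then continue.
    input(f"Connect device {serial} and press Enter to continue...")

    # The repetitive keystrokes are the part worth automating.
    pyautogui.write(serial, interval=0.05)  # type into the focused field
    pyautogui.press("tab")
    pyautogui.press("enter")
    time.sleep(2)  # give the application time to register the device
```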
I’m new to testing, but I imagine it’s the same, and the Page quote was nice because here was an experienced person saying what I was already thinking.
I know this is not as specific as some of the above contributions, but I hope it is a useful contribution.
The folks who I work with are still in the “automate all the things” mindset, and we just don’t have the resources to accommodate that. We’re currently dealing with a large number of manual regression test cases that are slated to be converted to automation scripts, and we’re in need of a way to prioritize what to attack first, if at all.
In her talk, Angie walks through a formula for weighing the value of various test cases, across a set of criteria. The output of the formula is a score for each test case, which can be used to help you decide what to automate and where to start.
I’ve prepared a version of the worksheet that I presented to my managers, and I’ll be pressing to use it going forward in the hopes of preventing us from becoming mired in an unmanageable backlog. (A girl can dream, at least…)
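For anyone who wants to try something similar, here is a deliberately simplified scoring sketch in Python. The criteria, weights, threshold-free ranking and example test cases are all invented for illustration; this is not Angie’s actual formula or worksheet:

```python
# Invented weights: how much each criterion matters when ranking test cases.
WEIGHTS = {"business_impact": 3, "failure_frequency": 2, "ease_of_scripting": 1}

# Invented example test cases, each scored 1-5 against the criteria.
test_cases = [
    {"name": "checkout happy path", "business_impact": 5,
     "failure_frequency": 4, "ease_of_scripting": 3},
    {"name": "update avatar image", "business_impact": 1,
     "failure_frequency": 1, "ease_of_scripting": 5},
]


def automation_score(case):
    """Weighted sum of the criteria; higher means automate sooner."""
    return sum(case[criterion] * weight for criterion, weight in WEIGHTS.items())


# Rank the backlog so the highest-value candidates come first.
for case in sorted(test_cases, key=automation_score, reverse=True):
    print(f"{case['name']}: {automation_score(case)}")
```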
The automation requirements define what needs to be automated looking into various aspects. The specific requirements can vary based on product, time and situation, but still, I am trying to sum-up a few generic tips.
Test cases to be automated
Tests that need to be run with every build of the application (sanity check, regression)
Tests that use multiple data values for the same actions (data driven tests)
Complex and time-consuming tests
Tests requiring a great deal of precision
Tests involving many simple, repetitive steps
Testing needed on multiple combinations of OS, DBMS & Browsers
Creation of Data & Test Beds
Test cases not to be automated
Usability testing – “How easy is the application to use?”
One-time testing
“ASAP” testing – “We need to test NOW!”
Ad hoc/random testing – based on intuition and knowledge of application
Device Interface testing
Back-end testing
Prioritize the tests that are to be automated; weight them by risk and ease of automation. Also, make sure you keep the right balance of unit tests and automated functional tests. You can then place them in priority order and plan accordingly.
Nice list!
I would take the “complex TCs” out of the automated ones. Only easy TCs should be automated.
Complex automated TCs are really painful to debug and fix if they become flickering or brittle tests.