Automated test planning


(Ross) #1

Hi all,

I’m just at the beginning of planning automated testing for a website (Selenium C#). I’m planning to use a mind map to establish the structure.

I was wondering if anyone had a preferred method or useful resources for the planning phase. I have seen simplistic examples in online courses, but it would be good to hear practical experiences.

Thanks a lot!

Ross


(laurent) #2

Hi !

Can you elaborate on the question a bit?
What are you planning:

  • the development of the automated tests (i.e. who does what, and when…)?
  • the execution of the tests once they are written?
  • something else I didn’t catch?
    => what kind of elements would appear on the mind map?

Laurent


(Bob) #3

I suggest that you concentrate on a relatively small but high business value path through the site first, rather than trying to do everything. Just the normal risk-based stuff, but applied to the site.

For instance, if there’s a login form, include that, because if users can’t log on, or they see the wrong person’s account when they do log on, that’s generally bad news. (Similarly, if unregistered people can log on, that’s also bad news.)

Then what you do after that depends on the site. What is a small, well-defined but valuable / risky operation to do on the site? Write a test for that first and run it, as that will:

  • make you feel good that you’re achieving something
  • let you show other people, e.g. managers / developers, the value of your tests
  • let you make course corrections in your tests before you’ve implemented loads which would then need changing

Sorry to not give more than general points, but I hope that helps.


(Will) #4

In terms of planning the tests themselves, we’ve always based them off solid manual tests first. We go through the manual plan and identify:

  • What can be automated
  • The value and complexity of writing each automated test
  • What we shouldn’t or can’t automate

This allows the team to have an overview of what it is that needs doing, and where a good place to start may be (high value, low complexity).
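Will's value-versus-complexity triage can be sketched as a simple scoring pass. A toy example in Python (the test names and scores are invented for illustration; the thread's actual project is Selenium C#, where the same idea could live in a spreadsheet or a small script):

```python
# Candidate manual tests, scored 1-5 for business value and automation
# complexity. All names and scores here are hypothetical.
candidates = [
    {"name": "login", "value": 5, "complexity": 2, "automatable": True},
    {"name": "checkout", "value": 5, "complexity": 4, "automatable": True},
    {"name": "captcha flow", "value": 3, "complexity": 5, "automatable": False},
    {"name": "profile edit", "value": 2, "complexity": 2, "automatable": True},
]

# Keep only what we can and should automate, then rank:
# high value first, low complexity as the tie-breaker.
backlog = [c for c in candidates if c["automatable"]]
backlog.sort(key=lambda c: (-c["value"], c["complexity"]))

for c in backlog:
    print(c["name"])
# "login" ranks first: highest value, lowest complexity.
```

Even a crude ranking like this gives the team a shared, visible answer to "where do we start?".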

In terms of planning the code itself, we haven’t written detailed plans laying out all the classes or files we’ll need up front. We usually stop at general features, and then let the person writing the framework decide how best to achieve those (with peer review along the way).

Things like:

  • We need to be able to do X
  • It needs to be able to run across multiple environments (dev, staging) with minimal effort
  • It needs to be able to run across different white-labelled versions of the site

etc.
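Requirements like "run across multiple environments" usually come down to keeping environment-specific values out of the tests themselves. A minimal Python sketch of that idea (the environment names and URLs are invented; a Selenium C# framework would do the same thing with app settings or config files):

```python
# Map each (environment, white-label brand) pair to its base URL.
# All values here are made up for illustration.
ENVIRONMENTS = {
    ("dev", "brand-a"): "https://dev.brand-a.example.com",
    ("dev", "brand-b"): "https://dev.brand-b.example.com",
    ("staging", "brand-a"): "https://staging.brand-a.example.com",
}

def base_url(env: str, brand: str) -> str:
    """Resolve the base URL once, so individual tests never hard-code it."""
    try:
        return ENVIRONMENTS[(env, brand)]
    except KeyError:
        raise ValueError(f"No configuration for env={env!r}, brand={brand!r}")

# A test then builds its URLs from the resolved base:
print(base_url("staging", "brand-a") + "/login")
```

Switching environment or brand becomes a one-line change (or a command-line parameter) instead of an edit to every test.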

I agree with Bob about testing it early and often to avoid running into issues that are much more time consuming to solve the more code you have. Write some framework. Write a test on it. Write another test. Are you writing a lot of boilerplate stuff in your test? Maybe the design needs to be refined to remove those initial repeated steps.
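The "repeated boilerplate" smell mentioned above is typically cured by pulling the shared steps into a base class or fixture. A schematic Python sketch with a stubbed driver, just to show the shape of the refactoring (all names are hypothetical; in Selenium C# this would be a base test class or page object):

```python
# Stub standing in for a real WebDriver, so the shape of the
# refactoring is visible without needing a browser.
class FakeDriver:
    def __init__(self):
        self.visited = []

    def get(self, url):
        self.visited.append(url)

class BaseTest:
    """Shared setup: every test used to repeat these steps inline."""
    BASE_URL = "https://staging.example.com"  # invented URL

    def setup(self):
        self.driver = FakeDriver()
        self.driver.get(self.BASE_URL + "/login")
        # ...log in as a standard user, dismiss cookie banner, etc.

class AccountTest(BaseTest):
    def test_open_account_page(self):
        self.setup()  # all the login boilerplate lives in one place
        self.driver.get(self.BASE_URL + "/account")
        return self.driver.visited

print(AccountTest().test_open_account_page())
```

When the login flow changes, only `BaseTest.setup` needs updating, not every test that happens to start from a logged-in state.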

As Laurent says, you may be able to get better advice from us if you’re more specific about where you need guidance.

If this is your first automation project, then welcome :slight_smile:


(João Farias) #5

I have written about a planning ceremony for discussing testing (in general), which includes analysing what and how to automate.

In a nutshell, the idea is to go story by story, acceptance criterion by acceptance criterion, together with the development team, discussing the implementation of the features so that you can create tasks to automate the necessary aspects.

N.B.: don’t worry about the word Scrum… the idea fits any planning moment.


(Ross) #6

This is excellent! Thank you so much everyone. Really great advice all round.


(Ross) #7

Thanks a lot Joao! I’ll definitely look at this.


(Ross) #8

Hi Bob, thank you very much for your reply. I’ve started with the login page and I’m going to build on that. Great advice.


(Zeff) #9

Ross,

You’ve received great advice so far. Just thought I’d add a little (hopefully good) advice to tie things together.

I like Will’s approach to determining what to automate, but you first have to decide what to test, which seems to be handled by whoever takes care of his manual testing. For that end of things, I like to use Bach’s Heuristic Test Strategy Model. It helps me with the initial analysis of the project so I can determine where I need to focus and how I need to approach it. I walk through each piece to get a better understanding of what I’m up against, using mind maps to create things like a product coverage outline (PCO) that becomes the basis for my test focus areas. Those areas are then broken down into exploratory sessions, automated tests, or sometimes both. I also use the PCO to track my test ideas, whether for general automation, exploratory testing, or automated acceptance testing.

At the same time, I’m keeping up with stories as they move across the board, which sometimes requires additional conversation to really nail down the criteria, per João’s comments. This constant analysis feeds my PCO as well, which is updated regularly. We use a Scrum/Kanban process, so stories are tracked as cards, Trello-style, with the acceptance criteria I’m going to automate becoming one or more child stories that must be completed for the feature story to be accepted.

That doesn’t work for everything, so stories I can’t get to with automation are verified through hands-on acceptance testing, and a backlog item is created for writing the test automation; I keep a separate parent story for all test automation efforts. Everything is tagged so I can report on the actual development process, since test automation is both a dev project and a test project at the same time.

All of my test automation plus my exploratory testing feeds a simple test dashboard where I can roll up a somewhat subjective score on the amount of effort involved in testing an area, its overall coverage, overall quality, and how much of the automatable test case coverage has been completed. It paints a decent picture for management without having to give them pass/fail numbers which can be so easily misinterpreted.

We get tons of info on automation, but so much of it ignores actual testing as if thinking about it is of secondary importance. Don’t fall into that trap. A million automated test cases aren’t going to help you if you aren’t testing the right things.


(Vishal Dutt) #10

I’d like to answer based on practical experience from multiple projects. Each test case relates to a specific module and a specific phase, so it needs to be executed accordingly per release.

Agile Scrum process: I would prefer to go with this first. Assume each Sprint lasts 15 days. When you plan the Sprint, add the test cases that will be automated within it. As I mentioned, pick those test cases carefully: I suggest starting with the Sanity cases, followed by P1, P2, and so on. Here, “automated” means the test case is automated and running on a continuous integration tool such as Jenkins. At the end of every Sprint you will have healthy automation results. Most software testing companies follow this process, and it gives a good return on investment. Each engineer adds details to their Sprint tasks daily and keeps the ticket status up to date, for clarity on progress.
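The Sanity-first, then P1, then P2 ordering described above is easy to make mechanical when planning a Sprint. A small Python sketch (the case names and the capacity figure are invented; the same selection could be scripted against any test management tool's export):

```python
# Priority-ordered selection of test cases for a sprint, following
# the "Sanity first, then P1, then P2" rule. Data is hypothetical.
PRIORITY_ORDER = {"Sanity": 0, "P1": 1, "P2": 2}

backlog = [
    ("search filters", "P2"),
    ("login", "Sanity"),
    ("password reset", "P1"),
    ("checkout", "Sanity"),
]

def plan_sprint(cases, capacity):
    """Take up to `capacity` cases, Sanity before P1 before P2."""
    ordered = sorted(cases, key=lambda c: PRIORITY_ORDER[c[1]])
    return [name for name, _ in ordered[:capacity]]

print(plan_sprint(backlog, capacity=3))
# With room for 3 cases, both Sanity cases plus the P1 case are picked.
```

Python's `sorted` is stable, so cases at the same priority keep their backlog order.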

Agile Kanban process: Here, prepare short lists of Sanity/Smoke test cases and divide them into phases. Start with the phase containing the highest-priority test cases. Once all of its test cases are automated, execute them on the continuous integration tool; alternatively, if a CI tool isn’t available, run them nightly on your own or a dedicated machine with Maven or TestNG. Once the first phase is fully automated, prepare the list of test cases for the next phase. In this process there is no fixed duration for the planned test cases; each engineer logs their hours in the task management tool daily.

Long-term release process: Suppose your project has relatively few releases in its development cycle. Pick the Sanity/Smoke test cases, automate them one by one, execute them in a batch, and work on failure analysis. In this process there is a high chance that your scripts will fail due to batch execution issues, so you will have to spend time on failure analysis.

So, based on my observations, go with one of the first two processes to get results from your automation. I hope this information is helpful.