Should you create a manual test script before automating it?
I currently never produce a manual test script. But before I write the automation, I do:
- Remind myself of Acceptance Criteria
- Identify what I want to test
- Explore the system and do a manual validation of the behaviour
- Capture expected results, such as API responses
- Write out some comments in my code to remind me of the steps I want before I code them up
In some cases, as I write the automation, I realise my originally noted areas are incomplete or won’t work, so I adapt as I go.
I am using Jest so the tests end up reasonably descriptive if I get the naming right.
And it’s worth noting my Test Code is under source control and goes through Peer Review before being merged into main.
I can’t tell you if you should do this, but it is working for me.
If you are going to reuse the manual scripts for another reason later, then you might get value from writing and storing them. If it helps you do the Test Analysis (thinking), or you are more comfortable doing it, just write them. It isn’t a waste if it’s useful for you.
Where you write it down is not important; it can even be on a whiteboard that gets erased. But more than once I can recall automating something, becoming over-invested, and taking too long to get it reliable, only to realise that manually running the test once a week would eventually have shown that the test case was absolute rubbish to start with.
If it’s an area you have never automated before, or you need to write more than 50 lines of code to automate it, take a moment to run the manual test case a few times first, in as many different ways as possible. Then paste that process into your source code as a comment at the top to remind you what you are trying to emulate.
As @azza554 also just pointed out, the manual test script itself is useless to the automation code. Don’t try to duplicate the manual test; they are different things. See below:
I’ve never seen a manual test script that worked well as an automated script. Manual test cases tend to have test data, low-level steps, and UI information tightly coupled together, i.e. it’s hard to separate those pieces out. With automated tests you generally want those three pieces loosely coupled so you can change any of them with little impact on the other two. That’s where good programming practices and design patterns can help you out, e.g. the Page Object Model for UI tests.
You should always be thinking about what is appropriate to automate, and not fall into the “automate everything” trap, which we all do at times.
Similar to what has already been mentioned, I make some notes when I’m designing and writing my automated tests as a reference. But these are only useful while the work is in progress. Once my tests are written, the notes aren’t valuable anymore.