So the thing is about the documentation of FTS. Is it mandatory to use an Excel file? Are there any simple free tools? I checked TestPad; it is very user-friendly. Also, should we adopt the same boring and numerous columns and naming (test ID, test step, precondition, status, …)? What I find difficult is running the same test on multiple browsers. Any ideas?
Some thoughts from me:
- I prefer the right mix of test cases, checklists, and exploratory testing. If I can prove what I tested, then it is good enough. I used exploratory testing in a highly regulated environment, so it is possible.
- Excel or any other tool can be used for test cases, as long as the test cases are structured in a proper and maintainable way. It also depends on the context.
- The name of the test case could be extended with a descriptive text for the browser. E.g. login_chrome90 means that login must be tested on a device with Chrome 90.
The precondition should be extended for each test case.
Some drawbacks are that the operating system, like Windows 10, is not taken into account, and that a lot of copy and paste might lead to errors.
- The name of the test case could be extended with descriptive text for both the browser and the operating system. E.g. login_chrome90_windows10 means that login must be tested on a Windows 10 device with Chrome 90. Also in this case, test cases may not be updated properly when things change.
- The test case has two parameters for the browser and operating system. This approach is also called keyword-driven testing.
E.g. login(chrome90, windows10) means that login must be tested on a Windows 10 device with Chrome 90.
A test case would have a name like login(<used_browser>, <operating_system>). The precondition would be like:
the user uses a device with <operating_system> and <used_browser>.
A separate table could contain the combinations of browser and operating system with the most product risk.
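The parameterized approach above can be sketched in code. The sketch below is a hypothetical illustration (the function name, the table contents, and the precondition wording are assumptions, not from any real tool): a separate table lists the risk-based browser/OS combinations, and a single keyword-driven test case is run once per combination, so updating the supported platforms only means editing the table.

```python
# Hypothetical sketch of keyword-driven testing for the login case.
# The combination table is kept separate from the test logic, so it can
# be maintained independently (e.g. by whoever owns the product risks).
RISK_BASED_COMBINATIONS = [
    ("chrome90", "windows10"),
    ("firefox88", "windows10"),
    ("safari14", "macos11"),
]


def login(used_browser, operating_system):
    """One test case, login(<used_browser>, <operating_system>).

    The parameters fill in the precondition:
    'the user uses a device with <operating_system> and <used_browser>'.
    """
    precondition = (
        f"the user uses a device with {operating_system} and {used_browser}"
    )
    # ... the actual test steps for login would go here (placeholder) ...
    return precondition


# Run the same test case against every combination in the table.
for browser, os_name in RISK_BASED_COMBINATIONS:
    print(f"login({browser}, {os_name}): {login(browser, os_name)}")
```

In a real setup the same idea maps directly onto a test framework's parameterization feature (for example pytest's `@pytest.mark.parametrize`), with the table supplied as the parameter list.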