Hi there, I checked a few different topics about writing test cases but none of them really answered my question. I'm struggling with how to write test cases for an application that is configured differently for each client. I think it will be easiest to explain with an example - it is a made-up example, but I hope it illustrates the problem:
Imagine you have an application that is used by different companies. The application contains many different parts, and each part has a few different ways to behave. Each company selects which parts are visible in the app and how each should behave. Whenever the application changes, we have to verify that the configuration a company selected still works as desired - that all selected parts (and no others) are visible and that they behave as required.
When we started working on the application and writing test cases we had only one client, so all test cases were specific to that client's configuration. We now have a second client, and for that one we still use the same list of test cases. Because it is a new client, everyone (still) knows their requirements, so we simply ignore the first client's written "specifics" in the test cases and replace them "in our minds" with the current client's specifics. However, this will not work in the long run.
I see two options:
- Duplicate the existing test cases and modify them according to each new client's specifics; keep the test cases for each client in their own "client suite".
  - Pros:
    - it is easy to do
    - all client data is contained within the test case
    - only the client data relevant to the test case is included in it
  - Cons:
    - when a part of the application changes we have to modify or add the same test cases in every suite
    - client specifics are spread across multiple test cases (poor overview)
    - if we want to test a sub-set of cases across multiple clients, we have to select that sub-set in each suite
- Keep one set of test cases, make them generic, and somehow link specific client configurations to them.
  - Pros:
    - no duplication, easier maintenance
    - all client specifics are in the same place (good overview)
    - easy to select a sub-set of cases for testing
  - Cons:
    - I don't quite know how to do this without ending up with a list of requirements in one place and a generic set of test cases where it is unclear which test case refers to which part/configuration
    - more time will be spent looking up expected results in the list of requirements
    - test cases could become too generic/vague
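For what it's worth, option 2 is essentially data-driven testing: one generic check driven by a per-client configuration table that acts as the single source of truth. A minimal sketch in Python, where all names (clients, parts, behaviours, and the two stand-in application functions) are invented purely for illustration:

```python
# Single source of truth: which parts each client enables and how
# each enabled part should behave. (All names are hypothetical.)
CLIENT_CONFIGS = {
    "client_a": {"search": "basic", "reports": "weekly"},
    "client_b": {"search": "advanced"},
}


def visible_parts(config):
    """Stand-in for querying the application under test."""
    return set(config)


def part_behaviour(config, part):
    """Stand-in for exercising one part of the application."""
    return config[part]


def run_generic_suite(client):
    """The same generic test case, executed against one client's config."""
    config = CLIENT_CONFIGS[client]
    # All selected parts are visible, and no others.
    assert visible_parts(config) == set(config)
    # Each visible part behaves the way this client configured it.
    for part, expected in config.items():
        assert part_behaviour(config, part) == expected


for client in CLIENT_CONFIGS:
    run_generic_suite(client)
print("all client suites passed")
```

The test cases stay generic ("all selected parts are visible", "each part behaves as configured"), while the expected results live in the configuration table rather than in the test text, so adding a client means adding one table entry, not a duplicated suite.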
In my mind option 2 looks better in the long run, if there is a good way to do it. Any opinion or advice from someone who has experienced this would be very welcome. Tool-wise: we used Excel for test case tracking but are transitioning to TestLink.