How to organise test cases for a configurable application

Hi there, I checked a few different topics about writing test cases but none of them really answered my question. I’m struggling with how to write test cases for an application which is configured differently for each client. I think it will be easiest to explain with an example - it is a made-up example, but I hope it illustrates the problem:

Imagine you have an application that is used by different companies. The application contains many different parts, and each part has a few different ways it can behave. Each company selects which parts are visible in the app and how they should behave. Whenever the application is changed we have to verify that the configuration a company selected still works as desired - that all selected (and no other) parts are visible and that they behave as required.

When we started working on the application and writing test cases we had only one client, and all test cases were specific to that client’s configuration. We now have a second client, and for them we still use the same list of test cases. Because it is a new client, everyone (still) knows their requirements, so we simply ignore the first client’s written “specifics” in the test cases and replace them “in our minds” with the current client’s specifics. However, this will not work in the long run.

I see two options:

  1. Duplicate existing test cases and modify them according to the new client’s specifics. Do that for each new client. Keep the test cases for each client in their own “client suite”.

PLUS:

  • it is easy to do
  • all client data is contained within the test case
  • only the client data relevant to the test case is included

MINUS:

  • when a part of the application changes we have to modify or add the same test cases in all suites
  • client specifics are divided between multiple test cases (bad overview)
  • if we want to test a sub-set across multiple clients, we have to select that sub-set in each suite
  2. Keep one set of test cases, make them generic, and somehow link specific client configurations to them

PLUS:

  • no duplication, easier maintenance
  • all client specifics are in the same place (good overview)
  • easy to select a sub-set of cases for testing

MINUS:

  • I don’t quite know how to do it so that we don’t end up with a list of requirements in one place and a generic set of test cases where it is not clear which test case refers to which part/configuration.
  • More time will be spent looking up expected results in the list of requirements.
  • Test cases could become too generic/vague.

In my mind, option 2 looks better in the long run, if there is a good way to do it. Any opinion or advice from someone who has experienced this would be very welcome. Tool-wise: we have used Excel for test case tracking, but we are transitioning to TestLink.
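
To make option 2 a bit more concrete, here is how I currently picture it: one generic test case driven by a per-client configuration table. This is only a rough pytest-style sketch, and every name and configuration in it is invented for illustration:

    import pytest

    # Invented per-client configuration: which parts are enabled
    # and how each enabled part should behave.
    CLIENT_CONFIGS = {
        "client_a": {"search": "basic", "reports": "weekly"},
        "client_b": {"search": "advanced"},  # no reports part at all
    }

    def get_visible_parts(client):
        # Stand-in for asking the deployed application which parts it shows.
        return set(CLIENT_CONFIGS[client])

    @pytest.mark.parametrize("client", list(CLIENT_CONFIGS))
    def test_only_configured_parts_are_visible(client):
        # All selected parts (and no others) must be visible for this client.
        assert get_visible_parts(client) == set(CLIENT_CONFIGS[client])

The point of the sketch is only the shape: the check is written once, and the client specifics live in one table next to it.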

Thank you!

Maybe test cases aren’t the best way forward. By using checklists, risk catalogues, charters and so on you could cover the necessities without having to rely on test cases, which are, as you’ve found here, very specific and hard to manage. You’ll gain other advantages too: more engagement with the product, less tedium, and more variety in your testing (which helps find other problems), while still getting the coverage you’re looking for. You can very easily create checklists and charters that can be combined or adjusted, and they’ll always be accurate if the charters are made at the time the testing takes place - which means much less maintenance and less risk of incorrect or out-of-date test cases. If testers take notes (you can determine ahead of time what you need in the notes) then you don’t have to decide on test case steps ahead of time, because everything that should be written down will be - every time! Worth a go, I think.


We are in a similar situation, as we have a product that is multi-versioned and multi-tenant.

Without divulging too much information: we look to separate functionality from configuration. How different clients are configured is separate from what the functionality covers. So, for example, a flow that covers all potential variables against the default model tests, through equivalence, all the client variables.

I’m not sure that is clear enough to be helpful, but I’m happy to try to answer any questions.

Thank you both for sharing. You gave me a lot of good ideas but I’m still struggling. It would help me to see your approach on a concrete example (I made it up; I hope it will serve the purpose well).

You have a web shop.

  1. One client has an instant purchase option. If the user clicks on an item, they directly trigger the payment process.
  2. Another client has a cart option. In this case, if the user clicks on an item, the item is added to a cart. The user has to click the pay option to trigger the payment process.
  3. A third client has a preview option. If the user clicks on an item, they open a page with more details. From there, whether the purchase goes through instant purchase or the cart depends on whether option 1, 2 or 4 also applies.
  4. A fourth client has a mix of the cart and instant options: the user triggers an instant purchase if the item is worth more than 100, otherwise the item goes to the cart.
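
In code terms I picture the differences as a small per-client purchase policy. Purely illustrative Python - the client names, keys and values are all invented:

    # Hypothetical per-client purchase behaviour for this example.
    CLIENTS = {
        "client_1": {"mode": "instant"},
        "client_2": {"mode": "cart"},
        "client_3": {"mode": "preview", "then": "cart"},  # "then" could also be "instant" or "mixed"
        "client_4": {"mode": "mixed", "instant_above": 100},
    }

    def on_item_click(client, price):
        # Returns which flow clicking an item triggers for this client.
        cfg = CLIENTS[client]
        if cfg["mode"] == "mixed":
            return "instant" if price > cfg["instant_above"] else "cart"
        if cfg["mode"] == "preview":
            return "preview, then " + cfg["then"]
        return cfg["mode"]

    print(on_item_click("client_4", 150))  # -> instant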

How would you approach this?


@kinofrost I’m not familiar with the term “charter” in the QA field. Could you briefly describe it or share a relevant link explaining how it works? I searched for it but apparently Google thinks I need a vacation, because all I got were flights and sailing charters :wink:

Sure! A charter is, essentially, a mission statement for some testing. They can be vague or specific. It might be “Explore the upload feature” or it might be “Explore the upload feature, looking especially at different file formats. JPEG and PNG files must be accepted, but we must try non-image files with valid image file extensions”. It might indicate what kind of testing is to be done, what areas of the product are to be tested, what risks you are testing for or even what tools to use.

What you are looking to do, in short, is identify what your test cases are doing, then write charters that cover the same ideas. A tester then tests to that charter, ensuring the charter has been covered, but can make notes on new test ideas for more charters to be made, or explore things that are found but aren’t in the charter. It’s very flexible, it allows looking beyond the limitations of a test case, it lets people explore in different ways to find different problems, and it’s easier and more fun to engage with. Charters are also WAY easier to write, and they communicate much higher-level ideas, so they’re WAY easier and cheaper to maintain.

They are usually associated with test sessions and session-based test management, for which I have this link: https://www.satisfice.com/download/session-based-test-management.

If you’re going sailing take me with you :).


I’ve always worked in a tester-driven-testing environment. That means that if we work with test cases it’s because we’ve written them. The problem with testing is that it is exploratory - what we learn now will change how we behave later. I might find a feature I didn’t know existed and discover that we need to test it. I might find that a feature works but isn’t fit for purpose, so I have to talk to the team about whether we’re going to change it.

I’d start with what I’d call a recon session with the charter “Explore the product, covering the instant purchase, cart, preview and cart-instant options. Use different clients to see how they affect the process”. This will let me explore the options, understand how the system works, and spot any problems I might have later (such as a need for good test data). Obviously a seasoned expert will already know this and might not need any recon sessions.

During my recon session I will note how things work, such as how clients’ access to options is controlled. I’ll note down any questions I have, such as “is there a way for clients to cancel accidental purchases, especially from instant purchase? When is this cancellation no longer permitted?”. I will note future testing ideas, like clicking on instant purchase when the client has no payment information entered. I will note concerns about the testing - for example, if I cannot create clients in the system I cannot make test data, so I’ll need it made for me or access to make it myself.

Then I’ll have a better idea of what I want to test. I might then create a set of charters such as:

  • Explore instant purchase, ensuring clients of various types can successfully make a purchase.
  • Explore instant purchase for customers who cannot make a purchase (frozen account, no payment info, invalid payment info).
  • Explore ways in which instant purchase may fail (such as dropping the internet connection before purchase and reconnecting, or clicking the button very rapidly).
  • Explore cart-instant clients, ensuring the value limit causes the purchase to go to the cart. Include boundary testing, and try different currencies.

Obviously this can go on for a while. I might, during my exploration, find a set of checkable values that pretty much always need checking and are boring to look at, and decide to automate them.

This is just one way of approaching the problem, and the charters need not be written by the testers. They should communicate what you want from the testing, but not exactly how to do it step by step. You can use them to explore the features of your product, and they can even all be client independent. You could have specific client configurations you must test with (e.g. a big customer). You could have charters for each part of the product, some of which specify testing different configurations - maybe a checklist of pairwise combos of configuration settings (see the sketch below). You are no longer forced to do things a certain way because of how your test cases are written; you can explore freely based on what the project is doing and how it’s changing, or on newly discovered risks.
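
As an aside on those pairwise combos: even the list of pairs to cover can come from a few lines of code rather than a hand-maintained sheet. A minimal sketch, assuming some invented configuration settings (a tool such as allpairspy could then pick a small set of tests that covers all the pairs):

    from itertools import combinations, product

    # Invented configuration settings and their possible values.
    SETTINGS = {
        "purchase_mode": ["instant", "cart", "mixed"],
        "preview": ["on", "off"],
        "cancellation": ["none", "5min", "30min"],
    }

    # Every full combination (grows fast: 3 * 2 * 3 = 18 here).
    all_combos = list(product(*SETTINGS.values()))

    # The value pairs a pairwise suite must cover - far fewer tests needed.
    names = list(SETTINGS)
    pairs_to_cover = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in SETTINGS[a]
        for vb in SETTINGS[b]
    }
    print(len(all_combos), "full combos;", len(pairs_to_cover), "pairs to cover")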

Hopefully this helps!


Thank you! It helps a lot. It confirms that our testing process during development of new requirements, or during customisation for a certain new “configuration”, is fine. I know the principles you describe as charters simply as exploratory testing, and we’ve been following them (maybe not in such a structured way).

However, your answer also made me realize that my question can be understood in a very vague sense. So let me try to improve upon that. :slight_smile:

We resort to test cases for regression testing before new releases. The idea is that the test cases are a granular collection of “all possible” features and/or (positive) flows. In a way it works like a checklist, so we don’t forget to check something. Why the test case format? Because it contains steps, so you don’t have to think about “how do I trigger this?”, and expected results, so you don’t have to go searching for “how should this work?”.

Because releases are frequent, it helps if these “checklists” are concise, straightforward, easy to understand and self-contained. They are meant to set management’s (and testers’) minds at rest, because someone overviews the whole app before going live. For example, we might be working on feature X. We assess that this feature affects existing feature A. So we perform the kind of testing that you describe on X and A. No one will look at unrelated feature B at this point. But during regression testing we will also check feature B, and we might discover that, oh wait, this is affected by the work that was done on X. After all these years it still takes me by surprise how feature B can be broken all of a sudden when no one worked on anything even remotely connected to it. :wink:

I hope this clarifies a bit where I’m coming from. Regressions are a huge fear for our management, so in our case this part is important. And my problem now is: because we have more than one “configuration”, how should we write down these “checks” so that we still cover “all” as we did before, but keep the whole system manageable? I included the example in the previous post to show what I mean by “configurations”. It can be everything from different limits that trigger different results, to different behaviour of the same “feature” depending on which other features are included.
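
For what it’s worth, the way I currently picture keeping this manageable is to tag each generic check with the features it needs, so that each client’s regression suite is just a filter over one shared list. A rough Python sketch with invented names:

    # Each generic check declares which configured features it needs.
    CHECKS = [
        {"name": "instant purchase happy path", "needs": {"instant"}},
        {"name": "cart checkout happy path", "needs": {"cart"}},
        {"name": "preview, then purchase", "needs": {"preview"}},
    ]

    def suite_for(client_features):
        # Pick the checks that apply to one client's configuration.
        return [c["name"] for c in CHECKS if c["needs"] <= client_features]

    print(suite_for({"cart", "preview"}))
    # -> ['cart checkout happy path', 'preview, then purchase']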

Maybe the process needs updating, not the cases. I’m open to new ideas if the discussion should go from “how to organise test cases for a configurable application” to “how to change regression testing for a configurable application”.

Thanks :slight_smile:

Very agile, very much build-on-success, and very much learn-as-you-go while making observations, @kinofrost! Everything I believe testing can and should be!

Joe


Hello @gabrijela!

I’ve been following this thread from a couple of points of view. Testing your product is one of those interests. I wonder if you might step away from thinking about test cases. Most testing has an information objective (as described in the BBST Foundations course). When I think of information objectives, I ask what I want to learn (rather than confirm) from testing. This might open new testing ideas for regression.

The other point of view is as a client. I may be a client of such a product, one that defines configurations by client. As a client, I’ve been considering testing approaches for such a product. So far, I know I have to verify a configuration we define. While I might have used unit tests, database inspections, and code exercising for an application our team created, I can’t do that for a vendor product that we can only configure. One approach is to attempt to discover evidence of the configuration through the UI. Just starting this journey!

Joe


Another interesting perspective, thank you.

Let me ask something different then. On a long-term project for a complex product, how do you store the knowledge about the product, and how do you share it with new testers?

Great question, @gabrijela!

Most of our platforms maintain a library of application guides that anyone may review. The guides are updated during projects so the information in them is usually reliable. A typical guide contains information on architecture, tools, technology, development, testing, and operations.

Joe


@kinofrost (others are of course welcome to answer as well): How do you keep the notes and the answers to your questions for future sessions and other team members? Where/how do you document charters for later reuse (if you do that)?

I have these things in the form of cases, and this has worked fine so far.

  • Some cases are very vague (more like charters), e.g.: “steps: check localization” - “expected results: text, numbers, dates are in the correct format”
  • And others are very specific, e.g.: “steps: add item to cart, wait 5 minutes, try to cancel” - “expected result: cancel is not possible anymore”.

But with multiple clients I’m now stuck on how to document the different varieties:

  • Take cancellation, for example: one client has cancellation period x, another has period y, and a third does not have a cancellation option at all.
  • Situations would be, e.g.: under setup X cancellation shouldn’t be possible; under setup Y, cancellation is possible depending on the parameter Z.
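
To show the shape of what I’m after, this is roughly how I imagine writing the cancellation rules down once and checking them per client. A pytest-style sketch; the clients and periods are invented:

    import pytest

    # Invented per-client cancellation periods in minutes; None = no option.
    CANCELLATION_PERIOD_MIN = {"client_x": 5, "client_y": 30, "client_z": None}

    def can_cancel(client, minutes_since_order):
        # Stand-in for the real application behaviour.
        period = CANCELLATION_PERIOD_MIN[client]
        return period is not None and minutes_since_order <= period

    @pytest.mark.parametrize("client, period",
                             list(CANCELLATION_PERIOD_MIN.items()))
    def test_cancellation_boundary(client, period):
        if period is None:
            assert not can_cancel(client, 0)  # never possible
        else:
            assert can_cancel(client, period)          # at the limit
            assert not can_cancel(client, period + 1)  # just past it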

How do you keep the notes and the answers to the questions for future sessions and other team members?

I use OneNote for nearly everything. Each charter is a note. When I’m done (session completed and hopefully debriefed) I export it as a PDF and attach it to a JIRA story so anyone knows where to look for it.

If you don’t need notes or a specific tool then don’t demand them, because writing things down costs time and takes attention away from testing. So decide what you need, demand only that, and then let people decide how to achieve their work within the limits you’ve set. You don’t need one golden template; you just need to state what must be written down, and whether it needs to be in a certain format or put in a certain place. Then people can use what’s best for them.

Where/How do you document charters for later reuse (if you do that)?

Because everything is in OneNote nothing ever gets deleted, so I can go back to any session I like. I have templates for sessions I tend to run a lot, like one with a lot of stuff already set up for recon sessions.

  • Some cases are very vague (more like charters), e.g.: “steps: check localization” - “expected results: text, numbers, dates are in the correct format”

That’s pretty much a charter, except that a test case expects only the contents of the test case to be executed, while a charter permits things that are not in the charter. So while I might write this charter as “Check localisation, ensuring text, numbers and dates are in the correct format for users of different languages”, the point would be to explore localisation. I have to determine what correct format means, but I could also look into how localisation works and see what might break it. I could find that putting big text into a localisation file breaks the UI because it doesn’t fit, for example. I’d consider that part of exploring localisation. I might make (or refer to) a mindmap (or some other model) of things I want to check when I explore localisation, such as putting big text in, checking the list of languages we support, checking dates and times including timezones and any functions or operations that rely on those dates/times, looking for any legal statements such as copyright or licencing that must obey local laws, text formatting, differences in alphabets and using non-English characters like é in input fields (and flow testing those to their output), any phone numbers or addresses or postcodes, and so on. I’d also look into how localisation is triggered, such as by Windows or through the browser, what the system does if it can’t find the localisation information, and if it gets stuck in English, can the user change it?

  • And others are very specific, e.g.: “steps: add item to cart, wait 5 minutes, try to cancel” - “expected result: cancel is not possible anymore”.

Without understanding what cancelling is here, I’d write that as “Ensure that cancelling an item after adding it to the cart cannot be performed after 5 minutes, as <reason why it shouldn’t be allowed>. Include boundary testing”. Again, the difference here is that one way of doing that is to add it to the cart, wait 5 minutes, then cancel it, but another might be to determine what decides that 5 minutes is up and whether I can alter it (such as the system time, or some time in the database which I might be able to mess up by changing my timezone).

But with multiple clients I’m now stuck on how to document the different varieties

It depends on how and why you want to order your testing. A charter might say “Explore cart cancellation for various cancellation periods including 0 (no cancellation period)”. That would also cover whatever mechanism sets the cancellation period upstream and the validation on that input, in case we end up with a really long time because someone added an extra 0 or thought it was seconds, not minutes. I’d also want to know why clients have different times, and what mechanisms are in place to cancel if someone wants to allow it in a particular case. I still have to cover the original equivalence partitions in the charter (try it with 0 and a few others and make sure it works for those), but I also find new problems and questions and ideas.

If you want something simple, like trying without cancellation and with 5 minutes, and you want it to happen again and again, you can write a test case or write a check in automation. If you want it properly explored you can use a charter. If you want something in between, then you should begin to ask yourself why you’re checking all of those numbers. What’s the actual difference between 0 and 1 minutes? How about 1 and 2 minutes? Every test, especially as part of running a test case, is a cost, and it must pay for itself by being part of a test strategy. It’s possible to think of a reason to do anything, but given limited resources (such as time) you have to pick and choose what to do, so choosing the right thing becomes important - while someone could check 0, 1, 3, 5, 7, 10, 30 and 60, it’d be easier to look into how the underlying mechanism works, work out what might go wrong and investigate those risks, or maybe have a rig that helps us simulate time passing, or maybe cover it with unit tests plus a charter.

Hope that’s helpful!

It is, thank you.

How does this work for regression testing? Or is this something that you don’t do?

I don’t do it, but when I used to do it we started with a long checklist of things to check. They weren’t test cases exactly, just a list of scenarios we should do some basic capability testing on.

We then had a project to reduce regression, and after asking a whole lot of people it turned out that some of it didn’t actually need to be done, some had enough in automation to not need such shallow exploration, some was out of date and never got removed, and some was very important. We cut everything that wasn’t needed and ended up needing two people for a few hours instead of half the team for two days. I know some people have regression suites that last weeks, but I’ve not experienced that. I’d encourage those people to try the same thing to reduce that as much as possible, and probably look into where the fear is coming from that makes the system need reviewing like that - maybe look into better recovery/patching systems to save money up front, or something.


When I do regression testing it is generally from a risk perspective. There are the product risks, using the common likelihood / impact analysis. Then the “what’s changed” risks: dependency, integration, and so on.
When we have identified the risks, we rank our possible regression tests in priority order.
We find it works well for us.
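
A minimal sketch of that ranking step, assuming a simple likelihood × impact score (the tests and numbers below are invented):

    # Invented regression candidates scored by likelihood and impact (1-5).
    candidates = [
        {"test": "checkout flow", "likelihood": 4, "impact": 5},
        {"test": "report export", "likelihood": 2, "impact": 3},
        {"test": "login", "likelihood": 1, "impact": 5},
    ]

    # Rank by likelihood * impact, highest risk first.
    ranked = sorted(candidates,
                    key=lambda c: c["likelihood"] * c["impact"],
                    reverse=True)
    for c in ranked:
        print(c["test"], "->", c["likelihood"] * c["impact"])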
