How do you cover too many branching scenarios? How do you deal with a combinatorial explosion?

Hi all,

First question here, how exciting!

My company provides a SaaS that is highly customizable for its clients. They can all ask to configure their logo, page titles, or even the presence/absence/position of some widgets on pages. These changes are made by company employees themselves in some parts of the code.

Moreover, sometimes when updating a feature, we don’t migrate all our clients; they choose when to do it. So some (other) features must be compatible with both the old and new versions of the updated feature at the same time. This multiplies the number of combinations and test cases.

As a result, it is obviously impossible to test “all cases”. But, most importantly, I don’t see how I could test even just the most-critical scenarios on a few real clients. Hence my question to you today.

  • How/Where do you execute your end-to-end (e2e) tests?
  • Is it on real data (client data), in a real environment (production), in a real instance (not a fake/setup one)?
  • And what’s your setup process for that?

I’m hesitating between using real stuff (but then I don’t know how to find the proper client for the proper test) and using generated stuff (but then I feel I’m not testing the real deal).

EDIT: Some more context:

  • I worked ~5 years on the client/delivery side of the company. Two years ago, they created a QA manager position, which I took. There are just two of us in the QA team: me and a manual tester. I have a technical background but haven’t really coded in years.
  • The company is about 400 people in total, including ~100 technical & product people, of whom ~40 are developers. There is only one SaaS, but several dev teams work on various parts of it. Clean documentation or requirements are pretty sparse, to say the least.
  • We already have unit tests and integration tests that run automatically on each commit, MR/PR, and release, so very frequently. What we don’t have is proper e2e tests; we’re only just starting with those.
  • Since we’re starting on our e2e journey, we plan on e2e-testing (i) brand new features and (ii) feature evolutions, but also (iii) existing/legacy features.
  • We will be e2e-testing them mostly to make sure the value they provide to the clients and users is protected from regressions.
  • Finally, for whatever historical or cultural reasons, we don’t have much product documentation or past requirements. So I can ask for requirements for new features and new evolutions, but not for the existing/legacy ones. Quality is a very new state of mind, not really adopted here yet :cry:

Any help would be greatly appreciated!
Thanks

5 Likes

Ideally your e2e tests should cover the core functionality.
But, as per the testing pyramid, you need to push more testing closer to the code itself, i.e. unit tests and integration tests.
Every feature is made to work by functions, and those functions need their own tests to check that they work as intended.
Unit tests are generally written by devs, but you will find folks here who write them as software testers.
So whenever your system needs regression and smoke tests, your test scripts at all levels would be the first to run. As for the combinations, those should be limited to the use cases defined in the new user story you’re working on.
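One way to keep those combination counts manageable, beyond the user story’s use cases, is pairwise (“all-pairs”) generation: you only guarantee that every pair of parameter values appears together in at least one case. A minimal sketch, assuming the third-party allpairspy package and invented configuration parameters:

```python
from allpairspy import AllPairs  # pip install allpairspy

# Invented customization parameters; substitute your real ones.
parameters = [
    ["default logo", "custom logo"],
    ["widget shown", "widget hidden"],
    ["widget left", "widget right"],
    ["feature v1", "feature v2"],
]

# The full cartesian product here is 2*2*2*2 = 16 cases; the
# pairwise set below is smaller, yet every pair of values still
# appears together in at least one generated case.
for i, case in enumerate(AllPairs(parameters), start=1):
    print(f"case {i}: {case}")
```

With dozens of parameters the savings become dramatic: the full product grows exponentially, while the pairwise set grows far more slowly.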

4 Likes

Thanks for your answer. I’ve added a bit more context to my initial question, mainly that our unit and integration tests run systematically.

But they cannot cover the real user experience or the complete happy-path scenarios. For those, we need the e2e tests.

However, I don’t know how to approach testing even one single feature when it can look or behave differently depending on many parameters…

1 Like

When you test that single feature, the first question to ask is: why are you testing it?

  • Did something change in that feature?
    – If yes, test what was changed.
  • Is it a new feature?
    – Test as many scenarios as possible. You can’t cover 100%, and the devs should make sure they have covered all the use cases written down in the documentation.

Other thoughts: the unit tests, the integration tests, and the e2e tests should come together to cover the “real user experience”.

We work on a CMMS, and we have those tests too. But I don’t re-test all combinations when we work on part of a module, because everything else that wasn’t touched is expected to keep working, as a matter of basic code quality.

2 Likes

Thanks again @hananurrehman! I (re-)completed my initial question: we have a mix of new/evolved/legacy features, we test to prevent regressions, and we don’t have a lot of docs/requirements.

I agree with your “coming together” of unit, integration, and e2e tests. I usually tell the PM/PO/devs that once “all cases” are listed, we should first cover as many as possible with unit tests, then as many of the remaining ones as possible with integration tests, and finally, as a last resort, the few left over with e2e tests.

However, starting from scratch on a SaaS app that has been many years in the making and has hundreds of clients representing tens of thousands of users is pretty daunting, and the branching/combinatorial possibilities appear limitless.

We need to scale, but I’m not sure how we can.

I mean, if we used fake setups and data factories, it’d be much easier, but would it be close enough to the real thing? I feel I need to test as close to reality as possible, but maybe I’m wrong. That’s why I’m asking the community, I guess :slight_smile:
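To make the “data factories” option concrete, here is the kind of thing I mean: a minimal hand-rolled sketch with entirely made-up field names, where the defaults are copied from a representative real client so the generated data stays close to production shapes:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ClientConfig:
    # Defaults copied from a representative real client, so generated
    # data stays close to production shapes. All field names made up.
    name: str = "typical-client"
    logo_url: str = "/assets/default-logo.png"
    page_title: str = "Dashboard"
    widgets: tuple = ("search", "reports")
    feature_x_version: int = 2  # some clients are still on v1

def a_client(**overrides) -> ClientConfig:
    """Factory: a realistic default client, overridable per test."""
    return replace(ClientConfig(), **overrides)

# In a test, state only what matters for the scenario:
legacy_client = a_client(feature_x_version=1, widgets=("search",))
```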

1 Like

I’m glad I helped you shape up your question, at least :sweat_smile:

You can set up test data that is close to the real thing.

Frankly speaking, I have not come across a very large-scale system for testing until now, so I guess I may not be the right person here. I hope someone else in The Club can answer you better. I’m now as interested as you are; I really want to see how this goes :sweat_smile:

1 Like

From my experience, in such a situation a discussion with someone who has detailed knowledge of the project can be really helpful.

Instead of trying to cover all the combinations, the focus should be on finding the high-priority/high-risk modules and their test cases.
Since testing doesn’t have indefinite time, you need to prioritize risk analysis, pick the high-priority modules, and let the stakeholders know what you can’t pick.
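To make that prioritization concrete, even a crude score of likelihood times impact is enough to rank modules and show stakeholders what falls below the cut-off line. A minimal sketch with invented module names and scores:

```python
# Crude risk score: likelihood of failure times business impact,
# both on a 1-5 scale. Module names and numbers are invented.
modules = {
    "billing":       {"likelihood": 3, "impact": 5},
    "widget-layout": {"likelihood": 4, "impact": 2},
    "login":         {"likelihood": 2, "impact": 5},
    "report-export": {"likelihood": 2, "impact": 2},
}

ranked = sorted(
    modules.items(),
    key=lambda item: item[1]["likelihood"] * item[1]["impact"],
    reverse=True,
)
for name, r in ranked:
    print(f"{name}: {r['likelihood'] * r['impact']}")
```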

We have several projects on the same codebase, so a single change in one project usually impacts the others. So we discuss with developers and PMs to understand the impacts, then we decide on the test coverage and plan our test strategies.

We don’t jump into testing tickets without discussion, because a ticket may say one thing while, due to technical constraints, developers implement the feature in a different way, which may impact other projects differently.

As far as test data is concerned, we use real data, because we have integrated some third parties that offer a sandbox as a testing environment, and in that sandbox only real data works.

3 Likes

I went through the same problem last year. I tried various solutions: automating at the unit level, automating e2e, managing through manual testing. TBH, nothing really worked. I ended up failing three times to get that product onto automated testing because of this level of complexity, wasting many months on it.

Here’s what eventually worked and is working now:

You have to make the product team understand that it’s a people problem, and developers have to make it work together with you. Testers alone will fail and become a bottleneck.

This is what I did to get everyone aligned:

  1. Scoping: when a feature is new or in evolution mode, I don’t automate tests for it until it has had no changes for at least 2 weeks.
  2. E2E: once a feature has gone about 2 weeks without a major change, I write its e2e test. Production tip: don’t write unit or integration tests for such products; it’s extra time spent. Only e2e saves you, because the code keeps changing.
  3. Move it to the developers: once I have made the initial set and pushed the test cases, every time I find something in that feature, a developer has to update the test case.

This flow took some months, but it slowly helped reduce bugs overall.

Alongside this, I started doing the following:

  1. Weekly manual testing: every week I do exploratory testing on the tool manually; the bugs found are shared with the devs to be fixed by the next week, and test cases are pushed as well.
  2. Monthly reporting: every month I meet with the whole product team and share a spreadsheet of how many new bugs were found, how many branches didn’t have test cases, etc.

It’s a very long-term approach, but a product of that complexity could only be handled this way.

I’m still on my way to getting the bugs in it under control, but these are some approaches that have worked for me.

3 Likes

Hi Christophe,

Congrats on your first question in The Club. I think you have a very interesting challenge, although I feel for you when it looks more daunting than interesting. Let me share a few insights and observations.

I think your challenge is multi-angled, and breaking it down might help you see how to approach it. If I were you, I would write down what problem I want to solve. You might know it in your head, but writing it down and sharing it with your team, PM, etc. will make it clear and real. Maybe you’ve already done it; that’s great.

I wouldn’t wait for the perfect scenario where I have all possible use cases, and I wouldn’t aim to test all possible use cases right now either. I would start unpacking the system, writing down all my findings and scenarios, prioritising them, and sharing them with the PM and team.

Understand the problem you are solving. Align with others.

  • Are you trying to prevent regression?
  • Are you trying to figure out how to replicate prod data into your testing environment for better testing?
  • Are you trying to figure out an effective way of testing all possible scenarios?

Investigate core user journeys. Implement the basics and expand if needed.
You mentioned you don’t know where to start, and also that you don’t know how to scale. In order to scale, you need a solid base, your testing MVP: your core user journeys. You don’t need to know all possible use cases for those journeys right now. What you need is to ensure that people can achieve their goals when they are on the site. What are those goals? Document them. Test them. Automate them (a small sketch follows below). Integrate them into CI/CD to see if you can get any feedback. Reiterate. Implement another core user journey. Observe. Are they helpful? Did they tell you anything useful when they failed?
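To show how small a first automated core journey can start, here is a minimal sketch using Playwright’s Python API, with a made-up URL and selectors:

```python
# pip install pytest playwright && playwright install
from playwright.sync_api import sync_playwright

def test_core_journey_login_to_dashboard():
    """One core journey: a user logs in and reaches their dashboard.
    The URL and selectors are hypothetical placeholders."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://staging.example.com/login")
        page.fill("#email", "test-user@example.com")
        page.fill("#password", "not-a-real-password")
        page.click("button[type=submit]")
        # Assert the goal of the journey, not implementation details.
        assert "Dashboard" in page.locator("h1").inner_text()
        browser.close()
```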

Drive testing down the stack.
Usually e2e tests are not the ones that will be most helpful in testing such a big system. If you build too many of them, they will enslave you and slow you down. Testing should happen at all levels, like you mentioned in one of your replies.

Look at testing from a holistic point of view.
I assume you might need a testing strategy for how testing should happen for new and existing features. Teams can test logic and behaviour (not implementation) with unit tests, and test how different components come together with integration tests; version compatibility can also be tested at the integration level, as sketched below. You can also test component behaviour depending on user states or some unusual inputs.
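Version compatibility in particular lends itself to parametrisation: run the same contract test against every feature version a client might still be on. A minimal sketch assuming pytest and a hypothetical export_report function:

```python
import pytest

# Hypothetical system under test: a report export that must keep
# working whether a client is on the old or new billing feature.
def export_report(billing_version: int) -> dict:
    return {"status": "ok", "billing_version": billing_version}

@pytest.mark.parametrize("billing_version", [1, 2])
def test_export_works_on_both_billing_versions(billing_version):
    assert export_report(billing_version)["status"] == "ok"
```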

I share all of this from my own experience of joining a company with no testing, no documentation, and limited access to information or customers. No one wrote any tests, and firefighting production issues was a usual weekly activity. I took a similar approach.

Let me know if you have any questions. Happy to chat.

6 Likes

Random initial babbling, one may skip to the answer directly
Woah! You have made a stunning debut in the MOTverse. A very good question!!
I have so much to write that I can’t fit it all in; thanks for asking this question. I think I would like to write a thesis on this topic!
I am smiling. I am happy. Not because I have answers to your questions, but because I have finally found someone with a similar problem statement :smiley:. The only difference is that I work at a place which buys such a SaaS. (I don’t think the SaaS folks test it, so we have to :slight_smile:!) I feel the product I work on is even more complex than the one mentioned in the question here (imagine a year- or two-year-long development cycle for the first prototype), or at least that is what I gathered from your description, but I could be wrong in my assumption. To give you an idea of what I am referring to (in my context): the product has about 100 modules, tens of functions for each module, and then the same things implemented for different ‘clients’, each of which has its own features, combinations, and scenarios. Also, there are different types of users, and each feature has to have different roles/permissions implemented! On top of all this, there are 15+ external systems connected to this SaaS product.
<I started writing the day this question was posted and could not complete it. Quite a few things have happened since, and more context has been added to the question. All the inputs here are valuable, and I particularly resonate with @Nat’s answer.>

My answer:
I will divide my answer into two parts: first the overall testing strategy, and then the specific questions you mention.

PART 1 - For a system as complex as you have explained, the testing strategy is important. I recommend the points below. (Disclaimer: a strategy that works at one workplace might not work at another.)

  1. With so much branching going on in the product, it may make sense to adopt ‘model-based testing’. (Disclaimer: I haven’t done it myself.) Models bring structure to a complex system, making testing structured too, and thus manageable and a little easier. Drawback: it takes a lot of time and effort to implement. It is tedious to set up the models, and then someone has to keep them up to date. It is not an easy task. (But then, what is easy in such a complex product?) Nevertheless, it’s a mindset shift and a way of working. If it feels right for you, your context, and the people you work with, try it. (A small sketch of the idea follows this list.)
  2. Do you have mission/vision statements for testing in the org? A written vision helps people come together to work towards a goal: a vision of how testing is (or will be) done, what testing means, at what levels it should happen, and how it can be used to improve the speed and quality of product delivery. It should not come from only the testers or test manager, but be a combined effort of the whole product and tech teams. (But how? The next point can help.)

The intention of this activity: not only does it create a feeling of working towards a goal, it helps in understanding the present challenges and narrowing down which ones to prioritise now. This leads to the present state of affairs being written down, which helps you observe over time what needs to change, and eventually how the vision changed and what improvements were made. If it sounds like too much, start with a single line as a vision, as simple as: ‘Strive to deliver faster and with better quality than 6 months ago.’ Overall, it will also help in tracking the metrics that matter, like time to market for a feature.

  3. Start a community of practice (CoP) for testing: probably a bi-weekly meeting for everything around quality, from setting the vision to making it become a reality. <Fun recommendation: send the invite to everyone in the tech team as optional!> This point is an extension of the previous one. A strong test strategy should be backed by leadership that trusts you with your job, not steered by middle/product/engineering management, which might or might not give importance to quality initiatives. A test strategy planned out collaboratively, with the testers and developers on board with it. A test strategy with a clear scope and goal, and a mission and vision which are written down. But it is a live document: you must align every quarter or two to assess whether there are leaks in the ship. (I mean, noticeable issues in the speed, performance, and quality of the product.)
    Ideally, there should be a list of items slated for discussion at each of these meetings, and anyone should be able to add a point. After discussion, people can be made owners of a topic to implement, or a decision is recorded as having been made in front of the right audience. Anyone can bring ideas and topics: expect devs to talk about tech debt and unit-test coverage, and testers to present a proof of concept for a new tool or a new way of testing a feature or client.
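Picking up the model-based testing idea from point 1: a property-based state machine is one lightweight way to start. It generates random sequences of actions against a model and checks invariants after each step. A minimal sketch, assuming the hypothesis library and an invented widget-configuration model:

```python
# pip install hypothesis
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

WIDGETS = ["search", "reports", "chat"]  # invented widget names

class WidgetConfigMachine(RuleBasedStateMachine):
    """A tiny model of a page's widget configuration. Hypothesis
    explores random sequences of these actions and checks the
    invariant after every step."""

    def __init__(self):
        super().__init__()
        self.widgets = set()

    @rule(name=st.sampled_from(WIDGETS))
    def add_widget(self, name):
        self.widgets.add(name)

    @rule(name=st.sampled_from(WIDGETS))
    def remove_widget(self, name):
        self.widgets.discard(name)

    @invariant()
    def page_still_valid(self):
        # In a real model this would exercise the system under test,
        # e.g. render the page and assert it responds successfully.
        assert self.widgets.issubset(WIDGETS)

# pytest/unittest will pick this up and run the state machine:
TestWidgetConfig = WidgetConfigMachine.TestCase
```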

Apart from this, some overall major topics on testing need to be addressed too:

Vertical 1: Requirements. Are requirements clear before development starts? Are user stories written out clearly? Are testers part of requirements gathering and solution architecture discussions? Clear requirements lead to clear test plans and clear testing.

  • Classifying requirements as core or non-core (client-specific) helps. How does one define core features? In simple terms: the requirements which are given as base features to a new client. Thus, if anything new comes up for a client, you can compare it with the core first and then with other clients. <<Ideally, build tests for the core first. Once a test is created for a client-specific feature, mark it non-core; it can then be reused for some other client too.>>
  • Classifying all requirements into 4-8 major lifecycle processes (or based on codebase/modules) helps. The major challenge when something new comes up is: what is it going to affect? What needs to be retested? If feature ‘AlienX’ arrives, which modules do we have to test? This activity will help you streamline that. The end goal could be to run only those tests which cover the parts of the product affected by the change. All of this will help reduce rework.

Vertical 2: Test environments. A stable test environment to run tests on makes all the difference. A smaller environment with a little less compute and a smaller database, accommodating a few users, should be good enough. Be the owner of these environments, and use them for running test automation too. Create and delete data for tests on every run; easier said than done, but DevOps and K8s can help achieve it. The scope and usage of these environments should be defined as well (discussed and agreed in the CoP).
Exploratory test ground (dev/test): an environment which has the core features (the features which are given as a bundle to new clients). Here, exploring and testing new features on the core product should be the goal. Building a set of regression tests here also establishes what is to be automated e2e; these core-feature tests should be automated first. (Then, if anyone wants to know the core product, they can read my tests in the repository and see their pass/fail status at a glance.)
Running daily smoke tests is helpful for checking the environment and the new application build. It should be a very small set of tests (in my opinion, fewer than 10, running for no more than 2-5 minutes).
Running a daily core regression: features getting merged into the core should be tested to gauge system stability. These should be the tests which cover the boring parts of the system that make or break your product, so that if a test fails here on an automated run, something is really wrong and it is not a false alarm. (They should be low-maintenance tests; API tests, probably, depending on the context.)

Vertical 3: Test data. A smaller dump of real data from production (with a few fields anonymised, if there is sensitive data) can always be used. And for tests, new data should be used (or at least refreshed after a time interval).
Test data creation: utilise APIs, or ask product to create endpoints which can help in creating test data. Golden data: data dumps can be created to make it easier to set up static data for automated test runs. Is there an API gateway exposed to clients? If yes, it can help in creating data. (The API gateway needs to be tested and managed for quality separately too.)
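As a minimal sketch of the “anonymised dump” idea above, with invented field names: hash the sensitive columns so records stay internally consistent (the same real value always maps to the same fake one, so joins keep working) but are no longer readable:

```python
import hashlib

def anonymise(record: dict, sensitive=("email", "name")) -> dict:
    """Replace sensitive fields with stable hashes so joins between
    tables keep working. Field names here are hypothetical; in
    practice, add a secret salt so hashes can't be brute-forced."""
    out = dict(record)
    for key in sensitive:
        if key in out:
            digest = hashlib.sha256(str(out[key]).encode()).hexdigest()[:12]
            out[key] = f"{key}-{digest}"
    return out

row = {"name": "Jane Doe", "email": "jane@client.example", "plan": "premium"}
print(anonymise(row))  # "plan" passes through unchanged
```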

Vertical 4: Test reusability. Structuring and writing tests in a modular way will help in reusing them; for example, different clients can mostly share the same smoke and core tests.

  • The same tests written for a feature can be reused for another client’s implementation.

PART 2 - Coming to the points that you mentioned:

  1. E2E tests: where are new features tested at present? Does the client have its own dev environment which you can access? (I assume not!) You mentioned you are struggling to identify which happy paths or e2e flows can be tested.

The core and basic functionality of the product makes a good candidate for this. Also, you can take the most profitable clients and, to start with, run at least the 10 most basic features for them, then enhance the tests in quality and quantity when and where needed.

  • But where? If there is no such e2e environment, then collaborate with the DevOps team to build a virtual environment (with Docker, probably?). It can be a small environment with few resources, and it can be made to run only for the time it is needed, to save infrastructure costs.
  • Using which data? I would only recommend real data in a real environment. There should be a couple of test environments onto which a client’s specific version of the product can be deployed separately. Test environment management should be done to schedule and reserve environments so as to simulate the real user experience. Here, the goal should not be to automate everything but at least something, adding tests and automation over time as new features land and issues get discovered.
  2. Scaling the tests: the reusable tests created for the core product’s modules or features come to the rescue. Core tests can be run for all clients, along with their added features.
    Identify when to run the tests: you can run a client’s tests whenever a change is introduced for that client.

A single test automation repository utilising tags (core/non-core, modules, clients, regression, smoke) lets tests with particular tags be run at will. Eventually this will all help in scaling the tests; see the sketch below.
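A sketch of how such tags can look with pytest markers (the marker names are just examples; in practice they would be registered in pytest.ini):

```python
import pytest

@pytest.mark.core
@pytest.mark.smoke
def test_login_page_loads():
    ...  # a core, smoke-level check

@pytest.mark.regression
@pytest.mark.client_acme  # hypothetical client-specific tag
def test_acme_custom_widget_layout():
    ...  # a client-specific regression check
```

Selection then becomes a command-line expression, e.g. `pytest -m "core and smoke"` for the daily smoke run, or `pytest -m client_acme` when a change lands for that client.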

  3. Not enough documentation: you could start requesting requirements written in a user-behaviour manner, as in BDD (though I would never use BDD for the testing itself). Also, every new test created can be made to follow a standard way of defining its objective (does it affect any of the core features or processes? If not, how does it branch away from the core feature’s usability?), with steps written in very simple, easy-to-read language, so that the tests can function as documentation in the future when there is a feature enhancement or a similar implementation for another client.

Whoa! Did I write too much? Thanks for reading!

1 Like

Most of all, do not panic.
I myself have just started as the only QA at a small concern. I’m starting with a new test framework and trying to resurrect as many of the old test tools as possible, all while learning what all the apps do in a very, very bespoke product portfolio. So yes, I have a similar combinatorial branching problem. For now I’m going with only what is most common and what is new.

It may well be cheaper not to test, but rather to fix old integrations when they break. Trying to test things that are older, and probably unlikely to break in ways that are costly to fix, feels like too much guesswork. So, don’t panic; look forward mainly.

1 Like

I’m at a smallish company and I have a similar challenge. I haven’t solved it, but one thing that helps is gathering any information available from other teams. For instance, our head of training has to prepare training materials before each release. These materials are based on her extensive knowledge of how users spend most of their time in the application. When she prepares them, she often runs into bugs. I stay in contact with her while she is doing this, and she finds all kinds of things that my team hasn’t found yet via our scripted, exploratory, and automated tests. We help each other by reproducing issues to confirm failures, asking whether we have seen things before, etc. She also pays attention to our test results. It is a helpful reciprocity.

1 Like

In my first job (I was still a developer at that stage), the training manager was a great source of bugs, mainly because they demo the entire product over the course of a week and tend to see course attendees trip over any places where you do have workflow-ordering bugs.
This really brought back a blast from the past for me :slight_smile:

1 Like