Random initial babbling first; one may skip to the answer directly.
Woah! You have made a stunning debut in the MOTverse. A very good question!!
I have so much to write that I can't fit it all in; thanks for asking this question. I think I would like to write a thesis on this topic!
I am smiling. I am happy. Not because I have an answer to your questions, but because I finally found someone with a similar problem statement.
The only difference is that I work at a place which buys such a SaaS. (I don't think the SaaS folks test it, so we have to!) I feel the product I work on is much more complex than the one mentioned in the question here (imagine a year- or two-long development cycle for the first prototype), or at least that is what I gathered from your description, but I could be wrong in my assumption. To give you an idea of what I am referring to (in my context): the product has about 100 modules, tens of functions per module, and then the same thing implemented for different "clients", each of which has its own features, combinations and scenarios. Also, there are different types of users, and each feature has to have different roles/permissions implemented! On top of all this, there are 15+ external systems connected to this SaaS product.
<I started writing this the day the question was posted and could not complete it. Since then, quite a few things and more context have been added to the question. All the inputs are valuable, and I particularly resonate with @Nat's answer.>
My answer -
I will divide my answer into two parts: first the overall testing strategy, and then the specific questions you mention.
PART 1 - For a system as complex as you have explained, the testing strategy is important. I recommend the points below. (Disclaimer: a strategy that works at one workplace might not work at another.)
- With so much branching going on in the product, it may make sense to use Model-Based Testing (disclaimer: I haven't done it myself; there is a toy sketch of the idea after this list). Models bring structure to a complex system, making testing structured too, and thus manageable and a little easier. Drawback: it takes a lot of time and effort to implement. Setting up the models is tedious, and then someone has to keep them up to date. It is not an easy task. (But then, what is easy in such a complex product?) Nevertheless, it's a mindset shift and a way of working. If it feels right for you, your context and the people you work with, try it.
- Do you have mission/vision statements for testing in the org? A written vision helps people come together and work towards a goal: how testing is (or will be) done, what testing means, at what levels it should be done, and how it can be used to improve the speed and quality of product delivery. It should not come from the testers/test manager alone but be a combined effort of the whole product and tech teams. (But how? The next point can help.)
The intention of this activity: not only does it give a feeling of working towards a goal, it helps in understanding the present challenges and narrowing down which ones to prioritise now. It leads to the present state of affairs being written down, which lets you observe over time what needs to change, how the vision evolved and what improvements were made. If that sounds like too much, start with a single line as a vision, as simple as "Strive to deliver faster, with better quality, than 6 months ago". It also helps in tracking the metrics which matter, like time to market for a feature.
- Start a Community of Practice (CoP) for testing: perhaps a bi-weekly meeting for everything around quality, from setting the vision to making it a reality. <Fun recommendation: send the invite to everyone in the tech team as optional!> This is an extension of the previous point. A strong test strategy should be backed by leadership which trusts you with your job, not steered by middle/product/engineering management which may or may not give importance to quality initiatives. A test strategy planned out collaboratively, with the testers and developers on board with it. A test strategy with a clear scope and goal, and a mission and vision which are written down. But it is a live document: one must align every quarter or two to assess whether there are leaks in the ship (I mean noticeable issues in the speed, performance and quality of the product).
Ideally, there should be an agenda for each of these meetings, and anyone can add a point to be discussed. After discussion, people can be made owners of a topic to implement it, or a decision is recorded as made in the presence of that audience. Anyone can bring ideas and topics: expect devs to talk about tech debt and unit test coverage, and testers to present a proof of concept for a new tool or a way of testing a new feature or client.
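Back to the Model-Based Testing point above: I haven't used MBT in anger, so treat this as a toy sketch of the idea only - a model of states and transitions, walked randomly, with the app checked against the model at every step. The states, actions and the fake app below are all invented for illustration:

```python
import random

# Toy model of a login flow: each state maps allowed actions to next states.
# In real MBT a tool (GraphWalker, AltWalker, etc.) manages the model;
# this dict is invented purely to show the shape of the idea.
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"open_settings": "settings", "logout": "logged_out"},
    "settings": {"close_settings": "logged_in"},
}

class FakeApp:
    """Stand-in for the system under test; replace with real driver calls.
    Because it mirrors the model exactly it never diverges - a real app could."""
    def __init__(self):
        self.state = "logged_out"

    def do(self, action):
        self.state = MODEL[self.state][action]

def random_walk(steps=50, seed=42):
    """Walk the model randomly; assert the app agrees with the model each step."""
    rng = random.Random(seed)
    app, model_state = FakeApp(), "logged_out"
    for _ in range(steps):
        action = rng.choice(sorted(MODEL[model_state]))
        app.do(action)
        model_state = MODEL[model_state][action]
        assert app.state == model_state, f"app diverged from model after '{action}'"

random_walk()
```

The win is that adding one new state or transition to the model automatically grows the paths being exercised, instead of hand-writing every combination.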
Apart from this, the major topics around testing need to be addressed too:
Vertical 1: Requirements - Are requirements clear before development starts? Are user stories written out clearly? Are testers part of requirements gathering and solution architecture discussions? Clear requirements lead to clear test plans and clear testing.
- Classifying requirements as core or non-core (customer-specific) helps. How does one define core features? In simple terms, they are the requirements which are given as base features to every new client. Thus, if anything new comes up for a client, you can compare it with the core first and then with the other clients. <<Ideally, build tests for the core first. Once a test is created for a client-specific feature, mark it non-core; it can then be reused for another client too.>>
- Classifying all requirements into 4-8 major lifecycle processes (or by codebase/modules) helps too. The major challenge when something new comes up is: what is it going to affect, and what needs to be retested? If feature "AlienX" arrives, which modules do we have to test? This activity helps you streamline that. The end goal could be to run only those tests which cover the parts of the product affected by the change. All this will help reduce rework. (A sketch of how tags can make this classification executable follows below.)
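Tags/markers are the cheapest way I know to make the core/non-core and module classification executable. A pytest-flavoured sketch (the marker names, module names and tests below are all my own invention):

```python
# conftest.py - register the classification markers so pytest does not warn
def pytest_configure(config):
    for marker in ("core", "non_core", "billing", "onboarding", "client_acme"):
        config.addinivalue_line("markers", f"{marker}: classification tag")
```

```python
# test_invoices.py - every test carries its classification
import pytest

@pytest.mark.core
@pytest.mark.billing
def test_invoice_generated_for_base_plan():
    ...  # core billing behaviour, shipped to every client

@pytest.mark.non_core
@pytest.mark.billing
@pytest.mark.client_acme
def test_invoice_carries_acme_custom_footer():
    ...  # one client's branch away from the core
```

"What do we retest when AlienX touches billing?" then becomes a command-line filter: `pytest -m "core and billing"` for the core of that module, `pytest -m billing` for everything it could affect, `pytest -m client_acme` for one client's additions.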
Vertical 2: Test Environments - A stable test environment to run tests on makes all the difference. Smaller environments with a little less compute and a smaller database, sized for a few users, should be good enough. Be the owner of these environments and use them for running test automation too. Create and delete data for tests on every run; easier said than done, but DevOps and K8s can help achieve it. The scope and usage of these environments needs defining too (discussed and agreed in the CoP).
Exploratory test ground (dev/test) - An environment which has the core features (the features which are given as a bundle to new clients). Here, exploring and testing new features on the core product should be the goal. Building a set of regression tests here also feeds into what is to be automated for e2e; these core feature tests should be automated first. (So, if anyone wants to know the core product, they can read my tests in the repository and see their pass/fail status at a glance.)
Running daily smoke tests - helpful for checking the environment and the new application build. This should be a very small set of tests (in my opinion, fewer than 10, running for no more than 2-5 minutes).
Running daily core regression - Features getting merged into the core should be tested to gauge system stability. These should be the tests which cover the boring parts of the system that make or break your product, so if a test fails here on an automated run, something is really wrong and it is not a false alarm. (They should be low-maintenance tests.) <API tests probably, depending on the context; a sketch follows below.>
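To make the smoke vs. core-regression split concrete, here is a sketch using Python's `requests`; the base URL, endpoints and response fields are assumptions, not anyone's real API:

```python
import requests

BASE = "https://test-env.example.com/api"  # hypothetical test environment

def test_smoke_service_is_up():
    # Smoke: the new build is deployed and answering - nothing more.
    assert requests.get(f"{BASE}/health", timeout=5).status_code == 200

def test_core_regression_order_lifecycle():
    # Core regression: a boring-but-vital flow, exercised via the API only,
    # so a failure means the product broke, not that a UI locator went stale.
    created = requests.post(f"{BASE}/orders", json={"item": "base-plan"}, timeout=10)
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = requests.get(f"{BASE}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["status"] == "created"
```

Keeping these at the API level is what keeps them low-maintenance: no selectors to rot when the UI changes.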
Vertical 3: Test Data - A smaller dump of real data from production (with a few fields anonymised, if there is sensitive data) can always be used. And for tests, fresh data should be used (or at least refreshed after a time interval).
Test data creation - Utilise APIs, or ask product to create endpoints which can help in creating test data (a sketch follows below). Golden data: data dumps can be created to make it easier to set up static data for automated test runs. Is there an API gateway exposed to clients? If yes, it can help in creating data. (The API gateway needs to be tested and managed for quality separately too.)
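Here is the create-and-delete-per-run idea as a pytest fixture; the `/customers` endpoint is exactly the kind of thing to ask the product team for, and is invented here:

```python
import pytest
import requests

BASE = "https://test-env.example.com/api"  # hypothetical environment

@pytest.fixture
def fresh_customer():
    # Create the data the test needs through the product's own API...
    resp = requests.post(f"{BASE}/customers", json={"name": "qa-temp"}, timeout=10)
    resp.raise_for_status()
    customer = resp.json()
    yield customer
    # ...and clean it up after every run, pass or fail.
    requests.delete(f"{BASE}/customers/{customer['id']}", timeout=10)

def test_customer_can_be_suspended(fresh_customer):
    r = requests.post(f"{BASE}/customers/{fresh_customer['id']}/suspend", timeout=10)
    assert r.status_code == 200
```

Every run starts from known data and leaves nothing behind, so the environment stays stable for the next run.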
Vertical 4: Test Reusability - Structuring and writing tests in a modular way will help in reusing them. For example, different clients can mostly share the same smoke and core tests.
- The same tests written for a feature can be reused for another client's implementation (a parametrisation sketch follows below).
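Parametrising by client is one simple way to get that reuse; the client names and config shape below are invented:

```python
import pytest

# One core test body, executed once per client configuration.
CLIENTS = {
    "acme":   {"base_url": "https://acme.example.com",   "has_sso": True},
    "globex": {"base_url": "https://globex.example.com", "has_sso": False},
}

@pytest.fixture(params=sorted(CLIENTS))
def client_config(request):
    return CLIENTS[request.param]

def test_core_login_page_loads(client_config):
    ...  # same core check, reused for every client

def test_sso_login(client_config):
    if not client_config["has_sso"]:
        pytest.skip("client does not ship the SSO feature")
    ...  # non-core check, only runs where the feature exists
```

A new client is then one new dictionary entry, and the whole core suite runs against it for free.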
PART 2 - Coming to the points that you have mentioned:
- e2e tests - Where are the new features tested presently? Does the client have its own dev environment which you can access? (I assume not!) You mentioned you are struggling to identify which happy paths or e2e flows can be tested.
The core and basic functionality of the product makes a good candidate for this. Also, one can take the most profitable clients and cover at least their 10 most basic features to start with, then enhance the tests in quality and quantity when and where needed.
- But where? If there is no such e2e environment, then collaborate with the DevOps team to build a virtual environment (with Docker, probably?). It can be a small environment with fewer resources, and it can be made to run only for the time it is needed, to save infrastructure costs. (There is a sketch of this after this list.)
- Using which data? I would only recommend real data on a real environment. There should be a couple of test environments onto which a client's specific version of the product can be deployed separately, with test environment management to schedule and reserve them, so as to simulate the real user experience. Here, the goal should be not to automate everything but at least something, adding tests and automation over time as new features ship and issues get discovered.
- Scaling the tests - The reusable tests created for the core product's modules and features come to the rescue here. Core tests can be run for all clients, together with their added features.
Identify when to run the tests: one can run these tests whenever a change is introduced for that client.
A single test automation repository utilising tags - core/non-core, modules, clients, regression, smoke - so that tests with particular tags can be run at will (like the marker sketch in Part 1). Eventually it will all help in scaling the tests.
- Not enough documentation - Probably start requesting requirements in a user-behaviour manner, as in BDD (though I would never use BDD in the testing itself). Also, every new test which is created can follow a standard way of defining its objective (does it affect any of the core features or processes? If not, how does it branch away from the core feature's behaviour?), with steps written in very simple, easy-to-read language, so the tests can function as documentation in the future when there is a feature enhancement or a similar implementation for another client. (An example of such a test follows below.)
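On the "small virtual environment that only lives as long as it is needed" point above: one possible shape, using the testcontainers library with Postgres as a stand-in for whatever the product actually needs (the image and wiring are assumptions):

```python
import pytest
from testcontainers.postgres import PostgresContainer

@pytest.fixture(scope="session")
def ephemeral_db():
    # The container exists only for this test session and is torn down
    # afterwards, so infrastructure cost is limited to the run itself.
    with PostgresContainer("postgres:15") as pg:
        yield pg.get_connection_url()

def test_migrations_apply_cleanly(ephemeral_db):
    ...  # point the app / migration tool at ephemeral_db and assert success
```

The same pattern scales up to docker-compose or a short-lived K8s namespace for a whole client-specific stack.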
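And for tests doubling as documentation, a strict docstring convention gets you most of the way even without BDD tooling; the format below is just one I would try:

```python
def test_export_invoice_as_pdf():
    """
    Objective : Core. Verifies the base invoice export every client receives;
                client-specific export tests branch away from this one.
    Affects   : billing module, document service.
    Steps     : 1. Create an invoice for the default plan.
                2. Request the PDF export.
                3. Check the response is a non-empty PDF.
    """
    ...
```

Anyone picking up a feature enhancement, or the same feature for another client, reads the docstrings first and knows what the core already guarantees.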
Whoa! Did I write too much? Thanks for reading!