A few years ago I was trying to come up with a toolkit (in addition to all the other toolkits that exist) to help solve complex test problems.
I define a complex test problem as a problem in which cause and effect cannot be deduced in advance, only in retrospect (based on the Cynefin framework). For example, if you make changes to some core system, those changes could potentially affect the entire application, and it is very hard to know where, even if you have some information about dependencies and possible impact. Since many of the systems we work with are more or less complex, I think we face these kinds of complex test problems more often than not, to varying degrees.
I used the power of AI to create nice pictures for each approach, which is why I only finished it recently.
How do you usually approach these types of complex test problems? Sometimes you have more or less information when you start, and that of course affects your approaches as well. But any thoughts or ideas would be interesting.
I’m a tailgater/ambulance chaser. I suspect a misconception with complicated software, or any system, is that it is complex for complexity’s sake. It’s too easy to think, as a tester, that the API integration test suite you have been asked to write HAS to be as complex as the problem the API solves. It often does not; sometimes all it has to solve is the ‘history’ problem. Just the integration part, nothing else, not the security part, not the functionality part, just the part that snarls up your SDLC. I prefer to look at Cynefin as ending its ‘loop’ at an imaginary non-chaotic point (or at least a less chaotic one), and then working backwards from there towards the business goals that get us to that final point.
The tailgater (or ambulance chaser if you like) basically runs with whatever is most interesting and energetic. That is often going to be the source of the most change in the system anyway, but also the source of knowledge, because it will visit all of the most volatile spots in the system. Parts of the system it does not visit are probably not experiencing churn and are thus lower in defect density anyway. By tailgating the lead energy in a system, you allow it to move more quickly because it does not have to keep looking back; your job becomes detecting regressions. Stop following at some point, when you have enough valuable areas to test; you will have covered a lot of ground and added test value if you have used the journey as a way to grasp the business requirements better. I don’t use this often, but I have found it’s a great experiment when you think your tests are just not finding really ‘useful’ bugs.
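For me, “the source of the most change” is usually visible straight from version control. Here is a rough, purely illustrative sketch (assuming a git repository and Python; the time window and counting are my own simplification) of ranking files by recent churn to decide where to tailgate:

```python
# Rough churn ranking: count how often each file was touched in recent commits.
# Run from inside a git working copy; illustrative only, not a defect predictor.
import subprocess
from collections import Counter

def churn_hotspots(since="3 months ago", top=20):
    # --name-only lists the files touched by each commit; the empty
    # --pretty format suppresses the commit headers themselves.
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in out.splitlines() if line.strip())
    return counts.most_common(top)

if __name__ == "__main__":
    for path, touches in churn_hotspots():
        print(f"{touches:4d}  {path}")
```

The files at the top of that list are usually the “lead energy” worth following; the long tail is where I stop tailgating.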
Very thoughtful. Something like a Myers-Briggs for QA.
I like to explore. I’m nosy. I want to know what happens when I do a thing, and I want to follow that action as it travels through the system, noting other branches to explore later. I’m not above injecting logging statements into the code to get what I want.
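As a hypothetical example of the kind of throwaway logging I mean (the function and field names are made up), a temporary trace line dropped at a branch point lets me watch which path an action actually takes:

```python
import logging

log = logging.getLogger("trace")

# apply_discount and order.total are invented for illustration; the point is
# the temporary debug line at each branch, not the business logic.
def apply_discount(order):
    if order.total > 100:
        log.debug("apply_discount: large-order branch, total=%s", order.total)
        return order.total * 0.9
    log.debug("apply_discount: default branch, total=%s", order.total)
    return order.total
```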
But I don’t just dive in. Experience has taught me to survey things first: gather information, break the application down into more bite-sized pieces.
By building up knowledge of the system I am testing. I try not to take the easiest route while learning it, and I contact the people involved. The better I know the systems, the easier it is to sort out what needs to be done when someone wants to mess with them big time. And the systems include all the key people involved in them.