It’s hard to define systems thinking because it’s absolutely everywhere, and each definition usually aligns with the purpose of the person defining it. So if you ask a stock trader you’ll get a lot of talk about financial systems and monetary feedback loops. Talk to an ecologist and you’ll hear about systems thinking errors in fixing agricultural issues because of unforeseen consequences in food webs. It’s everywhere because nearly everything can be thought of as a system: a set of elements that are in some way connected.
Generally speaking, I see systems thinking operating in two ways. The first is the habit of seeing the whole as something more than an assembly of parts, rather than seeing the parts as fully defining the whole. Synthesis over analysis, holism over reductionism; the combination and connection of things as a concept, above the definition of things as a collection of isolated parts.
The second is an attempt to derive knowledge about systems themselves, and about the approaches we can use to do so. How to deal with medium-number systems in a pragmatic way. For me, that’s general systems thinking. Metalaws. Being wrong in order to find out that you are.
I also think of it in terms of irritating complexity: a dynamic web of interconnected parts, where a change to one part can affect many others simultaneously. Adjustments to a system based on linear, functional thinking can have all sorts of consequences elsewhere.
Systems thinking is also about considering all the relationships between the modelled elements of a system, and therefore all the ones excluded from that model. As we suppress the complexity of systems so that we can map them and apply sensible calculations, we exclude factors; and as those factors become detrimental to the accuracy of our results, we accept the error bars as they come.
It’s also about having the perspective to understand what a system could be. To have a broad enough scope to see competing systems, or human-made paradigm categorisations, and treat them with neutrality. Or at least not be so invested in one system as to blind ourselves to the realities of another. It’s the removal of the self in service of the possibility of things that are not connected to us. It’s removing faith from our thinking.
Indeed, one point of application may just be a fuller understanding of the limits of our own understanding. We cannot hope to know everything there is to know about the systems in which our product is built or operates, so we use fallible, varied approaches. A good tester should be wrong many times a day.
Speaking as a generalist, that is.
As testers we often consider the dynamics of systems in order to broaden our observations. Testing is often concerned with inductive reasoning: we make observations and draw inferences based on our understanding of the models we have of the systems we test. We go by what is, as best we can. To make those observations, and to have models available for weighing the risk of a potential failure, we have to study the systems themselves - interrogate the assumptions behind their creation, and question the things that have been excluded from them or simply missed. It’s the part where you look at a flow diagram and ask “okay, what if I don’t take either of these two paths? What if I go and turn off the box this bit runs on?”.
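That flow-diagram question can be sketched as a negative-path test. This is a minimal, hypothetical example: the `process_request` handler, its actions and its return values are all invented for illustration, standing in for a flow diagram that only draws two paths.

```python
def process_request(action: str) -> str:
    """Hypothetical handler modelled on a flow diagram with exactly
    two documented paths: 'approve' and 'reject'."""
    if action == "approve":
        return "approved"
    if action == "reject":
        return "rejected"
    # The branch the diagram never drew.
    raise ValueError(f"unmodelled action: {action!r}")

# The two paths the diagram documents:
assert process_request("approve") == "approved"
assert process_request("reject") == "rejected"

# The tester's question: what if I take neither path?
try:
    process_request("cancel")
except ValueError as err:
    print(err)  # the model's gap surfaces as an explicit failure
```

The interesting test is the last one: it exercises an input the model excluded, which is exactly where the diagram stops describing the system.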
I think that one of the most important things that systems thinking does for testing, and honestly for building software in general, is to remind us that we are not dealing with the pure methodological reductionism of classical mechanics, and we should not use a purely mathematical, algebraic mindset. The idea that if the axioms are in the right place, the software must work. The thinking that leads to egoistic assumptions of correctness by failing to consider the wider context and social elements. Assuming determinism by breaking software down into formal logical parts while ignoring synergism, emergence, most of the actual parts, and every other system they interact with. The reason why people think testing can be automated. The failure that leads to phrases like “100% coverage”. Why BDD oversteps its boundaries.
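A tiny, contrived Python sketch of why part-wise correctness doesn’t compose: both functions below are invented for illustration, and each passes its own unit check, yet their composition trips over a property of the substrate (binary floating point) that neither check exercised.

```python
def apply_discount(price: float, pct: float) -> float:
    """Passes its own unit check in isolation."""
    return round(price * (1 - pct), 2)

def to_cents(price: float) -> int:
    """Passes its own unit check in isolation."""
    return int(price * 100)

# Each part is 'correct' on its own:
assert apply_discount(100.0, 0.10) == 90.0
assert to_cents(90.0) == 9000

# Composed, an unmodelled interaction emerges: 0.29 is not exactly
# representable in binary, 0.29 * 100 lands just below 29, and int()
# truncates it.
assert to_cents(apply_discount(2.90, 0.90)) == 28  # a cent goes missing
```

Nothing here is broken by the reductionist account - every axiom is “in the right place” - and yet the whole misbehaves, which is the point.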
Systems thinking isn’t applied in steps when it comes to software testing. It just helps us to improve our thinking and avoid big mistakes, and influences our tools and approaches as a result.