Does anyone have experience with residuality theory?

I’d been vaguely aware of residuality theory for a while, and a video recently helped it click. However, there’s often a gap between finding a video interesting and actually changing how you work day-to-day. So I wondered if anyone here has any experience of using it.

In case you haven’t come across it, it’s an approach to doing software architecture. It’s grounded in academic research from complexity science, applied to software.

You think of as many stressors on the system as you can. These can be things like: a competitor drops their price, a server crashes, a giant fire-breathing lizard destroys a city, and so on. Then you look at how the system would cope or break under each one.

You then choose which stressors you want the system to cope with, and change the architecture accordingly. The pattern of which parts cope and which break helps expose hidden coupling between them, in a way that’s strongly related to non-functional requirements, i.e. it helps with identifying NFRs and the system’s response to them.
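To make that concrete, here’s a toy sketch (my own illustration, not something from the video) of what that stressor bookkeeping might look like in code. The component names and stressors are all made up; the idea is just to record which parts break under which stressors and then look for parts that keep breaking together, which hints at hidden coupling worth investigating.

```python
# Toy sketch of a stressor-vs-component matrix (hypothetical names throughout).
# Record which components break under each stressor, then flag components
# that repeatedly fail together - a possible sign of hidden coupling.

from itertools import combinations

components = ["web_ui", "pricing_service", "order_db", "notifier"]

# 1 = component breaks (or badly degrades) under that stressor, 0 = copes.
impact = {
    "competitor drops price":     {"web_ui": 0, "pricing_service": 1, "order_db": 0, "notifier": 0},
    "order_db server crashes":    {"web_ui": 1, "pricing_service": 0, "order_db": 1, "notifier": 1},
    "10x traffic spike":          {"web_ui": 1, "pricing_service": 1, "order_db": 1, "notifier": 0},
    "third-party email API down": {"web_ui": 0, "pricing_service": 0, "order_db": 0, "notifier": 1},
}

# Count how often each pair of components fails under the same stressor.
co_failures = {pair: 0 for pair in combinations(components, 2)}
for broken in impact.values():
    for a, b in co_failures:
        if broken[a] and broken[b]:
            co_failures[(a, b)] += 1

# Pairs that fail together more than once are candidates for hidden coupling,
# and for the refactoring that responding to the stressors should lead to.
for (a, b), count in sorted(co_failures.items(), key=lambda kv: -kv[1]):
    if count > 1:
        print(f"{a} and {b} break together under {count} stressors - possible hidden coupling")
```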

This all reminded me of the shift-left testing, risk storming, what-if analysis, etc. that testers often do. So I thought MoT would be a good place to ask about this.

The video, in case you’re interested: https://youtu.be/0wcUG2EV-7E?si=SDSlMiQi-Isuldw9


This got me thinking about the testing we do before the software gets to the customer, like trying to find bugs early (shift-left testing) and brainstorming risks (risk storming). For example, we often test our software to see how it performs under heavy traffic or if it keeps working even when parts of it fail. Residuality theory seems to suggest a similar but broader approach, where we think up as many challenges as possible to test our software against, helping us make it even more reliable and ready for real-world use.
While I kinda got the theory behind this approach and saw its potential value in making apps more resilient and reliable, I haven’t had the chance to apply it in practice yet :sweat_smile: :smiley: It’s one thing to understand the concept and another to integrate it into our day-to-day testing. I’m curious to explore how we can use this approach to enhance current testing methodologies :slight_smile:

I’m in a similar position. It sounds like a good idea, but unless I can make it change my day-to-day work it will remain just a good idea.

I sent the link to someone I know online who’s a resilience expert at a Big Tech company in the USA. He likes the emphasis on what could go wrong, but thinks the overall approach is simplistic and won’t work. It might be that it doesn’t work at, say, Facebook scale, but few organisations operate at that scale.

I also had a conversation with Barry O’Reilly online, which included me asking whether he’d had much interest from the testing community, because they seemed like a good audience. His response was that testers are good sources of stressors, but you also need people with a code focus (programmers, architects, etc.) who can take things further, to the refactoring and design changes that the stressors should lead to.

I also asked him about the stressor generation and selection process (you don’t automatically respond to all stressors - you need to pick which ones you’ll deal with). Regarding the Facebook-scale point from earlier: it’s easy to imagine someone proposing a stressor like “we get 1,000 times the current number of users”. That might happen, but it might not, and if it doesn’t you could have over-engineered the system (which costs in terms of complexity, development speed, etc.). His comment on this was that you can add a counter-balancing stressor: “we don’t grow our user base at all”.

I also commented that you could get people trying to slip a pet project or feature into the process in disguise, via a set of stressors that just happen to deliver that feature as the response. His reply was that people acting in self-interest rather than the wider interest is a general problem in software development, and this is just another example of it (i.e. we should already be on guard against it).

I think it’s something to chew over, but haven’t got anywhere concrete yet.
