Whatever shape your Software Development Lifecycle takes, it’s a cycle that keeps on going round. Features, bug fixes, paying down tech debt, whatever the change to the system is, there is always some new code to introduce.
As a product grows, be it a monolith or a collection of interconnected systems, changes keep being introduced, and the list of expected behaviours grows.
All these behaviours need to be developed, tested, monitored, maintained, and no doubt tested again and again. Over time, this can erode the capacity of your teams, as the burden of carrying an ever more complex system builds on top of what came before.
So, how do you make sure features become dependable enough to be left out there in production, minding their own business, without needing to keep adding more and more capacity to look after them? How do you ever find capacity to add new features without neglecting the careful maintenance of existing high-value features? After all, those are the ones making you money!
While I’ve got some ideas, and I’ve seen a few different approaches, I really want to hear about your experiences on this one. Not theory, not what a book says, but what you’ve personally lived through.
Has regression testing or monitoring become overwhelming?
Maybe you have a novel method for removing cruft from your test suite as you go, or you even mercilessly remove underused features?
Whatever your answer, let me know! Let’s talk more about software after it’s first put into prod.