I'm not suggesting having the developers do all forms of testing. I'm also not suggesting that they learn how to tell if something is of value to the customer (although that is something they should be concerned with, as mentioned as the fifth ideal in The Unicorn Project). They don't need to know how to do UI/UX design, or set up a k8s cluster (although Spotify has a culture of making it easier for developers to do things like deploy an environment themselves).
I'm just suggesting that they write the automated checks for the acceptance criteria of each ticket they take on, whether that's done at the unit, component, integration, end-to-end, or UI level. I'm suggesting that they use automated checks to figure out whether they built what was asked of them. That is not all there is to testing, though (but that's another conversation).
Writing those checks themselves, at all relevant levels, is essential for maintaining internal software quality, and is itself a very effective way of designing the code and systems. If they don't do it, the internal quality will suffer, which will inevitably slow down development by creating even more headaches from messy code or problematic system design.
(I'd put a link to Martin Fowler's post on "Is Quality Worth It?" here, but can only include 2 links)
The developers are already thinking about how to verify their work as they're writing it anyway. They ask most of the same questions testers might be asking to verify a ticket's requirements (at least at face value). They need to be asking those questions in order to figure out what the code and systems should be. They would only need to write those questions down as automated checks. The difference, though, is that this would be very easy and quick to do with well-designed code and systems, as most of those questions can be broken down into very atomic questions that can be handled at the unit/component level, with very few needing to be done at higher-level scopes.
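To make that concrete, here is a minimal sketch in Python of what "writing the questions down as atomic checks" can look like. The discount rule, the function name, and the threshold are all hypothetical, invented purely for illustration; they don't come from any real codebase.

```python
# Hypothetical acceptance criterion for a ticket:
# "Orders of $100 or more receive a 10% discount; smaller orders receive none."

def apply_discount(total: float) -> float:
    """Return the order total after any discount (illustrative code under test)."""
    return total * 0.9 if total >= 100 else total

# Each question a developer (or tester) would ask, such as "what happens exactly
# at the threshold?" and "what about just below it?", becomes one atomic check.
def test_discount_applied_at_threshold():
    assert apply_discount(100.0) == 90.0

def test_no_discount_just_below_threshold():
    assert apply_discount(99.0) == 99.0
```

A runner like pytest would pick these up by name. Because each question is atomic, nothing here needs an integration or UI-level scope; only the few questions that genuinely span components would.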
When the developers are responsible for writing those checks, they will find the motivation to write them, and most of the headaches go away. They will invest in making the code and systems better designed so that writing the checks becomes easier (at least, if given the time by management). When they do this, velocity improves because working with the code and systems becomes easier.
The testers don't become unnecessary when this happens. They're just freed up to focus on more hidden bugs, or other threats to the value of the product (or even opportunities for value to be added).
This is by no means a fantasy, and if you're curious to see an example of this in action, Atlassian has some nice write-ups about why they operate this way, how they transitioned, and what role testers play in their current development practices.
I'm also not trying to say that this is the one true cause of burnout. But because testers have little or no power in a waterfall system, and have to do work that is usually more time-consuming than either the development or the planning, they will inevitably be forced to make hard decisions about what they have time to verify or how thoroughly they verify it. They will also be highly encouraged to work overtime, or through meetings, or on their time off, because if something gets past them and into production, the question asked is often "how did this get past QA?", which implies that the testers are at fault.
Of course, if management doesn't want to allow this to happen, then it won't. But if it doesn't happen, it's unlikely burnout will ever be avoided, because there will always be an unfair imbalance in power that discourages those downstream from telling those upstream "no".