I’m looking to understand what ‘Release Testing’ looks like for you:
Is it a ‘big’ team event?
How long does it take?
– Is it limited?
What are you testing?
– Stories that were implemented?
– Critical workflows?
How often does it happen?
Is it a distraction to ongoing work?
How do you measure success?
We have a couple of processes that are similar in some ways and different in others; each is a mixture of automation and manual/exploratory testing. We look to time-box this to a 2-hour window, allowing enough time to run the automation suite and look through any possible issues whilst carrying out manual testing on high-value features that currently cannot be automated.
The focus is on regression and critical workflows, with some exploratory testing, while trying to avoid a complete retest of the new features that have been added.
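For what it's worth, here's a minimal sketch of how the automated half of a time-boxed run like that could be wrapped, so an overrun never eats into the manual/exploratory part of the window. The suite command, marker name, and time budget are assumptions for illustration, not our actual setup.

```python
# Sketch: run the automated regression/critical-workflow suite inside a fixed
# time budget, leaving the rest of the 2-hour window for manual testing.
# The pytest marker, report path, and budget are hypothetical.
import subprocess
import sys

TIME_BUDGET_SECONDS = 45 * 60  # e.g. 45 minutes of the 2-hour window for automation


def run_suite() -> int:
    try:
        result = subprocess.run(
            ["pytest", "-m", "critical_workflow", "--junitxml=release-report.xml"],
            timeout=TIME_BUDGET_SECONDS,
        )
        return result.returncode
    except subprocess.TimeoutExpired:
        print("Automation overran its time box; treat as a failed run and investigate.")
        return 1


if __name__ == "__main__":
    sys.exit(run_suite())
```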
At a previous employer, release testing consisted of the critical workflows plus any new features that either would not be automated or had not been automated yet.
Is it a ‘big’ team event? - We had a very large system and a tight window to complete it in, so the only way we could get it done was with a whole-team effort. However, as time went on and this became more automated, we got it down to one FTE for the day plus automation.
How long does it take? - We had a week (5 days) to complete testing and investigate issues (like I said, large system).
What are you testing? - We tested both new features and critical workflows; on occasion something else would be thrown into the mix depending on the release requirements.
How often does it happen? - Every release. We also had smoke tests for minor maintenance updates or hotfixes.
Is it a distraction to ongoing work? - It can be, but I found that with plenty of notice and the whole feature team made aware, it was not generally an issue. The release took higher priority than WIP. We also took the saying "it's a team effort" seriously: if the team's tester was on release testing, the devs would execute the testing, with the tester reviewing it after release testing was complete.
How do you measure success? - Every company is slightly different, but we had a set of entry and exit criteria; a breach of any of these would result in a NO-GO decision for the release. These consisted of things like a >95% pass rate, zero high-priority issues, etc. You would need to work out what success looks like for you.
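To make that concrete, here's a tiny sketch of what such a GO/NO-GO gate can look like in code. The thresholds mirror the examples above, but the result fields and values are made up for illustration.

```python
# Sketch of an exit-criteria gate: every criterion must hold or the release is NO-GO.
from dataclasses import dataclass


@dataclass
class ReleaseResults:
    tests_passed: int
    tests_run: int
    open_high_priority_issues: int


def release_decision(results: ReleaseResults) -> str:
    pass_rate = results.tests_passed / results.tests_run if results.tests_run else 0.0
    criteria = [
        pass_rate > 0.95,                        # e.g. >95% pass rate
        results.open_high_priority_issues == 0,  # e.g. zero high-priority issues
    ]
    return "GO" if all(criteria) else "NO-GO"


# Example: 480/500 passed (96%) and no open high-priority issues -> "GO"
print(release_decision(ReleaseResults(tests_passed=480, tests_run=500, open_high_priority_issues=0)))
```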
I just died a little inside, as I despise our release testing process. In fact, despise is probably too gentle a term.
My work, in theory, has releases every 3 months. For each release we have a period known as hardening, which is effectively release testing. There's a team that runs the same few hundred test cases every single release for every feature released prior to, I think, 2023 (sigh). Development teams are then responsible for testing all features added after that. Some teams will do lots of testing. We do enough to ensure our previous features aren't borked by another team and that the features we've heavily tested still exist. The whole process tends to take anything from 6 to 18 weeks.
It's a pain in the backside and a distraction for 3 of us (out of ~25 folk) in my group of teams. We only spend about 2-3 days per release cycle, but there are so many bugs across the wider org that whilst we'll test Release Candidate 1, it's usually a few more RCs later before we can release. Amazingly, we can have two of these release cycles on the go at once, so bad are we at delivering with quality.
In terms of assessing the value: well, if you're finding showstopper issues, it's worthwhile.
Side note: our industry is “different”… Customers don’t want frequent releases.
As a better example, my previous company (before we were bought over) had a 4-month release cadence (same industry), but we'd be slightly more "all hands on deck". A team of 4 would get everything done within a few days. It took a while to get there, as we kept refining and cutting the fat from our release testing. I was quite proud of cutting our testing from 2-3 weeks for 6+ folk down to a smaller team for a couple of days.
Basically, our goal and measure of success was proving there were no showstoppers, validated by a lack of escalations.
We usually time-box it to about 1.5–2 hours. It's not meant to block the whole team, but it's enough time to:
Manually check the critical workflows - login, payments, dashboards, etc.
Do a bit of targeted exploratory testing, especially on areas that recently changed or can't be automated.
We don't try to retest every new feature again; that's mostly handled earlier. The focus here is: "Is everything still working together?" It's more of a final "sanity + confidence" check than a deep dive.
It's not a big team event, but anyone can jump in. Usually it's just the QA, the dev involved in the change, and sometimes the PM if they want to give it a last look.
We do this for every release, and try to keep it lightweight so it doesn’t feel like a drag.
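If it helps to picture the shape of that pass, below is a minimal sketch of how a few of those critical-workflow checks could be scripted as a quick sanity supplement to the manual look. The base URL, paths, and expected statuses are hypothetical placeholders, not a real system.

```python
# Minimal sketch of a scripted "sanity + confidence" pass over critical workflows.
# All endpoints and expected status codes here are made up for illustration.
import requests

BASE_URL = "https://staging.example.com"  # assumed staging environment

CRITICAL_CHECKS = {
    "login page": ("/login", 200),
    "payments":   ("/api/payments/health", 200),
    "dashboards": ("/dashboard", 200),
}


def run_sanity_pass() -> bool:
    all_ok = True
    for name, (path, expected_status) in CRITICAL_CHECKS.items():
        response = requests.get(BASE_URL + path, timeout=10)
        ok = response.status_code == expected_status
        status = "OK" if ok else f"unexpected status {response.status_code}"
        print(f"{name}: {status}")
        all_ok = all_ok and ok
    return all_ok


if __name__ == "__main__":
    raise SystemExit(0 if run_sanity_pass() else 1)
```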
We're on that same journey. All our products have some link, so the biggest test problem is validating that there is no knock-on effect from changes. We were also in the 2-3 week bracket and have managed to get most products down to a week with a smaller team. However, one product at the beginning of our data flow is proving more of a challenge.
A bit of context: in my current role, I have managed to establish a "Quality Roadmap" which clearly states what needs to be done, who will be doing it, and when it needs to be done in the Dev environment, in Staging, and in Production once the release is deployed.