How do you define a release blocker?

Hi, this is my first question here and I know the title is very broad so allow me to give some context.

I'm currently trying to establish a clear list of criteria that define a release blocker bug.

I'm working at a company that creates mobile apps, so releases tend to take some time: getting a build to our users goes through all the review and roll-out processes that are typical for mobile app stores. This means we don't want to slow things down even more when pushing out new features.

We do a full release test as part of a release cycle, and we have those tests prioritised from low, medium and high to critical. The critical tests are also our smoke tests, which we run for hotfixes or whenever we think it's needed.
If a critical test breaks, obviously this halts a release.
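
(If some of these tests are automated, here is a minimal sketch of how such a priority scheme might be encoded, using pytest markers purely as an example; our real suite and the marker names are invented for illustration.)

    # pytest.ini (register markers so pytest doesn't warn):
    # [pytest]
    # markers =
    #     critical: smoke test, runs on hotfixes and every release
    #     low: full-release pass only
    import pytest

    @pytest.mark.critical
    def test_login_succeeds_with_valid_credentials():
        ...  # placeholder for the real check

    @pytest.mark.low
    def test_settings_screen_shows_app_version():
        ...

    # Hotfix smoke run:   pytest -m critical
    # Full release run:   pytest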

  • How would you go about labeling bugs that are found during manual release testing but don't break a scripted test scenario?

  • How strict does a release blocker process need to be?

  • What if a medium or low priority test breaks in an area in the app that has nothing to do with the feature you want to get out?

  • What about new defects that are not regressions… but have a low severity?

I know these questions can have several answers depending on the set-up or organization, but I'm looking for best practices here that I can tweak or fully adopt into our process. I look forward to hearing your thoughts on this!

5 Likes

Always? What if the automation code is wrong and a human observes the product to be good enough?

Answer for both:
A bug is a bug, no matter how it was discovered.

I cannot really think of a hard process beyond talking to people.
Finally, it's up to the project manager, or people with similar responsibility, to decide if the bugs are blocking the release.
I would show them a list of the bugs, telling them about what you found and what you think the impact is. Then they should decide on the business impact.
Also, don't do it at the very last minute, but somewhat frequently.

A process is only a tool for people to communicate. It should not hinder, but enable them.
If the process hinders communication, you should drop it (or not install it at all).
Talk to each other. Report frequently about the status of the product and testing in general.

4 Likes

Personally, I do not completely understand what you are looking for; it seems like some very general stuff that can be easily googled, am I right?

1 - These are bugs that stop the app from working right, affect security, or break important rules. Blockers are anything that stops main features/users' actions, breaks big app parts, or risks user data.
2 - If you find a bug that's not in the test scripts but makes key parts of the app unusable or really bad, it's a blocker, especially if there's no easy fix.
3 - If these tests are not about the new feature, check if they still matter. If they make the app look bad or confuse users, they might still need fixing before release as blockers.
4 - If these bugs don't really impact users much, they can wait. But if they might affect how many users see and use the app, it's better to fix them first.

Defining a release blocker is different for each team and situation; it depends on the app's goals, the users affected, quality standards and approaches, release policies, timing, etc. Even in the same company, the same bug could be a blocker in one release but not in another, depending on many things.

1 Like

It sounds like you're in a situation that requires balancing quality with the need to get releases out on time, which is pretty common in mobile app development. Here are some thoughts based on best practices I've seen:

  1. Labeling Bugs That Don't Break Scripted Tests: Bugs that aren't covered by scripted tests but could impact the user experience should be evaluated based on their potential impact. Even if they don't break a critical test, they could still warrant a deeper look. I'd recommend defining a criteria checklist – think about the affected area, its user visibility, and the likelihood of occurrence. This can help you decide if a bug rises to the level of a release blocker (see the sketch after this list).

  2. Strictness of Release Blocker Criteria: The process for defining a release blocker should be clear but flexible. Generally, a bug is a release blocker if it significantly impacts core functionality, user experience, or results in data loss/security risks. Establishing clear impact categories (e.g., showstopper vs. degradation) and thresholds for each can help the team make consistent decisions while being adaptable as needed.

  3. Medium/Low Priority Failures in Unrelated Areas: If a medium or low priority test fails in an area unrelated to the new feature, consider its user impact. If the issue affects core app functionality or creates a bad user experience, it might still be a blocker. If it's more of a minor inconvenience and unlikely to impact many users, documenting it for a future fix might suffice. It's about weighing risk versus release value.

  4. New Low Severity Defects (Non-Regression): Low severity defects that are not regressions are often better handled on a case-by-case basis. If it's something minor that won't impact most users or core functionality, then it's probably fine to proceed. However, if there's a cumulative effect of multiple low-severity issues, it could damage overall user perception. Having a tolerance threshold can help decide when enough is enough for these kinds of defects.
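
To make the checklist idea in points 1 and 2 concrete, here is a minimal sketch of how such criteria could be encoded so that triage stays consistent. Every name, weight and threshold below is invented for illustration; a real team would calibrate them together.

    from dataclasses import dataclass

    @dataclass
    class Bug:
        area: str            # e.g. "checkout", "settings"
        user_visible: bool   # will real users notice it?
        likelihood: float    # 0.0-1.0: how often it is likely to be hit
        data_loss: bool      # loses or corrupts user data?
        security: bool       # exposes users or the company?

    CORE_AREAS = {"checkout", "login", "sync"}   # invented for this sketch

    def is_release_blocker(bug: Bug) -> bool:
        """First-pass triage only; humans still make the final call."""
        if bug.data_loss or bug.security:
            return True                      # showstopper category
        if bug.area in CORE_AREAS and bug.user_visible:
            return bug.likelihood >= 0.1     # degradation threshold (invented)
        return False

    # A visible checkout bug hit by roughly 30% of users -> blocker:
    print(is_release_blocker(Bug("checkout", True, 0.3, False, False)))

The value of writing it down like this is less the code itself and more that it forces the team to agree on the categories and thresholds explicitly.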

Ultimately, a good practice is to define these blockers collaboratively with stakeholders from QA, Product, and Dev teams. It's about ensuring everyone is aligned on what constitutes a "must fix" versus a "can wait." Also, revisiting your criteria after each release and tweaking based on user feedback can help keep the process effective and lean.

Hope this helps give you a starting point!

4 Likes

Thanks for the quick answers everyone!

@shad0wpuppet asks a valid question, and I agree my title and question could have been clearer.

Personally, I do not completely understand what you are looking for; it seems like some very general stuff that can be easily googled, am I right?

Google, ChatGPT, or a Ctrl+F through some old ISTQB course docs could indeed give me a list of dry and formal best practices (although a Google search did not really give me anything useful yet), but what I'm really looking for, I just realised, is opinions from people in the field, outside of my company.

I more or less know the opinions of the stakeholders in the company and on a high level it boils down to:

  • PMs want the feature released ASAP
  • devs want to start the next new feature ASAP instead of fixing bugs
  • QAs want the app to be stable and high quality all the time

So, how do you create a list of criteria that will serve as a go-to 'rule' in a process with stakeholders who all have different expectations, and how do you do it in your company/client?

Also, I'm new here, so I don't know how to interpret the emoji game or any inside jokes. Please elaborate if you mock someone's answer with an emoji. I'm here to learn.

How do you define a release blocker?
Something or someone that denies the release manager the right to release.

I'm currently trying to establish a clear list of criteria that define a release blocker bug.
Why? Are you a release manager? Or a product manager who can enforce the release of a product?
As a tester, I wouldn't do such a thing. I'd report and highlight bugs that are relevant to the release goal. As it's been said, communication is key here.

How would you go about labeling bugs that are found during manual release testing but don't break a scripted test scenario?
I wouldn't. The bug has other more interesting properties that could be considered: generalization, replication, externalization, isolation, maximization, advocacy.

How strict does a release blocker process need to be?
This is up to the business needs, management involvement in the decisions, marketing and sales, timing of releases, preferences of the product manager, and many more. And people rarely can stay objective about such things.

What if a medium or low priority test breaks in an area in the app that has nothing to do with the feature you want to get out?
If a 'test breaks', then it's a bad test, no? Or did you mean something else here?
A test to me is a question/experiment I ask of a product to find something new.

What about new defects that are not regressions… but have a low severity?
The tester does a first evaluation of the bug, then the product manager is informed. They make the decision about what happens with it.

I would first ask for a promotion to manager (release, product, quality), with power over product decisions, budget, resources, etc.
Then I'd get involved in managing the product quality.

What position are you in at the moment? What are your responsibilities based on the job description/contract?

2 Likes

First of all, welcome!

If you're making a release decision, you have to make it with all the information you have to hand. Formal best practices may inform how you design your release process, but they don't work all the time, because release depends on so much that they cannot predict. So I feel you're right in that it depends on the organisation, but I feel that goes so far as to make best practices low-to-zero value.

It'll depend on how the business makes money, what promises it makes to clients, who the users are, what the users value, how the teams work, what your tooling does, what your product does, and so on. Software development and testing are both social activities, and we look at not just the functionality of the software but who we're going to annoy, what fines we may have to pay, or what damage we'll do to the perception of our brand. A safety-critical system has different rules to an early-access game. The process changes between corporate clients paying licences and free releases relying on selling user data. If you have a premium product, it'll take more of a hit from cosmetic issues than a cheap one. You may need to delay or release based on contractual obligations. We might release knowing that we can patch quickly. We might be running system versions in parallel for uptime reasons, and that gives us confidence to revert in an emergency. The list is so long that it's impossible to say without living inside that system and learning about it.

Testing is really there to help us make an informed decision about release, but we can't formalise the release because it doesn't just depend on the problems we find in the product, but on everything around the project and the company and the users too. Although some of it might be open to formalisation if you have a formal requirement, like a contract, or a law to adhere to.

If your tools tell you that you have a low-priority problem, and on investigation there is actually a problem, then the assumption is that that problem won't prevent release. That being said, it might be unusual in a way that causes further investigation. Perhaps it has never failed before, or it's a symptom of a more systemic problem, or a symptom of a change in third-party software that introduces unexpected change risk. The automated checks are specific spot-checks of certain facts, but we can't overstretch our assumptions on what that means in terms of coverage. They act like an engine light in a car, something we have to investigate to find out more, and then the result of that testing can help to better inform release. The inverse may be true, in that a critical failure we may suddenly decide doesn't matter for some reason, perhaps because we are soon replacing functionality. I guess I'm saying that testing is about gained and communicated knowledge. To blame a release decision on a computer report means that we're trusting that report to be full and accurate (which is very rarely plausible), or we're shifting blame onto it on purpose.

Additionally, the thing that prevents a release is sometimes not one bug. Sometimes it's one big problem, sometimes it's a collection of smaller problems. The decision not to release might simply be that there are so many non-critical but prominent issues that it would make the business look incompetent or arrogant or untrustworthy to release. Games often have this problem, where a vat of small issues makes the game look awful, and eager journalists love to write about games that release full of bugs.

So, unfortunately, you're stuck with a qualitative evaluation. Exploring the risks and feelings and impact until we can decide one way or another. We can use numbers to aid this feeling, though, and look at functionality by how often it is used, or by what number of users. We can weigh up the cost of delay versus a guess at the cost of lost users and support team calls and triaging reports.
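
As a toy illustration of that weighing-up, with every number invented purely to show the shape of the trade-off, it might look like:

    # All figures invented; the point is the comparison, not the values.
    daily_cost_of_delay = 2_000              # lost revenue per day the release slips
    days_to_fix = 3

    affected_users = 50_000 * 0.04           # 4% of 50k users hit the bug
    ticket_rate = 0.10                       # 1 in 10 affected users contacts support
    support_cost_per_ticket = 8
    churn_cost = affected_users * 0.02 * 25  # 2% churn at $25 lifetime value each

    cost_of_delay = daily_cost_of_delay * days_to_fix       # 6000
    cost_of_shipping = (affected_users * ticket_rate * support_cost_per_ticket
                        + churn_cost)                       # 2600

    print(cost_of_delay, cost_of_shipping)   # here, shipping looks cheaper

None of these numbers exist in reality; the exercise only turns the qualitative argument into something stakeholders can disagree with productively.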

Hopefully that was at least helpful in guiding towards some ideas about release; I know it's not exactly what you were hoping for.

5 Likes

Some great answers, but @kinofrost's input pretty much points to one more criterion for classing a single discovery as a blocker: does it have a security implication? And that can mean two things: customer security and company security.

  1. Does the defect open your customers up in any way to revenue impacts or losses? Hackers getting in, etc. (breaking a customer promise).
  2. Does the defect compromise company data to hackers or let them directly steal revenue? And by this I'm also including things like: does it allow customers unintended free access to resources?
2 Likes

Nicely developed blog post you've added.

This reminded me of the times I was under pressure with a fixed release date and testing 2-3 days before it. I was so anxious during my searches for risks and problems, trying to support the release decision by showing how much the critical problems I found actually mattered, based on analytics data, API traffic logs and application purchases from the past 2-4 weeks.
This point is very often underrated/overlooked.
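
A sketch of the kind of weighting I mean, assuming you can already get session counts per screen out of your analytics (all names and numbers here are invented):

    # Invented analytics counts for the past 4 weeks.
    sessions_per_screen = {
        "home": 120_000,
        "checkout": 45_000,
        "settings": 3_000,
    }

    def affected_share(screen: str) -> float:
        """Fraction of all sessions that touch the buggy screen."""
        total = sum(sessions_per_screen.values())
        return sessions_per_screen.get(screen, 0) / total

    # The same 'critical-looking' bug matters very differently by location:
    print(f"{affected_share('settings'):.1%}")   # ~1.8% of sessions
    print(f"{affected_share('checkout'):.1%}")   # ~26.8% of sessions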

1 Like