First of all, welcome!
If you’re making a release decision, you have to make it with all the information you have to hand. Formal best practices may inform how you design your release process, but they don’t work all the time, because the release depends on so much that they cannot predict. So I feel you’re right that it depends on the organisation, and I’d go further: that dependence leaves generic best practices with low-to-zero value.
It’ll depend on how the business makes money, what promises it makes to clients, who the users are, what the users value, how the teams work, what your tooling does, what your product does, and so on. Software development and testing are both social activities, and we look not just at the functionality of the software but at who we’re going to annoy, what fines we may have to pay, or what damage we’ll do to the perception of our brand. A safety-critical system has different rules to an early-access game. The process changes between corporate clients paying for licences and free releases relying on selling user data. A premium product will take more of a hit from cosmetic issues than a cheap one. You may need to delay or release based on contractual obligations. We might release knowing that we can patch quickly. We might be running system versions in parallel for uptime reasons, which gives us the confidence to revert in an emergency. The list is so long that it’s impossible to say without living inside that system and learning about it.
Testing is really there to help us make an informed decision about release, but we can’t formalise that decision, because it doesn’t just depend on the problems we find in the product but on everything around the project, the company, and the users too. Some of it might be open to formalisation, though, if you have a formal requirement like a contract or a law to adhere to.
If your tools tell you that you have a low-priority problem, and on investigation there really is a problem, then the assumption is that it won’t prevent release. That said, it might be unusual in a way that prompts further investigation: perhaps that check has never failed before, or it’s a symptom of a more systemic problem, or of a change in third-party software that introduces unexpected risk. Automated checks are specific spot-checks of certain facts, and we can’t overstretch our assumptions about what that means in terms of coverage. They act like an engine light in a car: something we have to investigate to find out more, and the result of that investigation can then better inform the release. The inverse may also be true: we may suddenly decide a critical failure doesn’t matter for some reason, perhaps because we’re about to replace that functionality. I guess I’m saying that testing is about gained and communicated knowledge. To blame a release decision on a computer report means either that we’re trusting that report to be full and accurate (which is very rarely plausible), or that we’re deliberately shifting the blame onto it.
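To make that “engine light” idea concrete, here’s a minimal sketch in Python (all the names, like `CheckResult` and `escalate`, are made up for illustration) of the kind of triage I mean: the tool’s severity label alone never decides anything, it just helps decide whether a human goes and looks.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    tool_severity: str        # label the tooling assigned: "low", "high", ...
    first_ever_failure: bool  # has this specific check ever failed before?
    dependency_changed: bool  # did third-party software change underneath us?

def escalate(result: CheckResult) -> bool:
    """Decide whether a human should dig further. The tool's own severity
    label is NOT the deciding factor: a 'low' failure that has never
    happened before, or that coincides with a third-party change, is a
    symptom worth chasing."""
    if result.passed:
        return False
    if result.first_ever_failure or result.dependency_changed:
        return True                       # unusual context: investigate
    return result.tool_severity != "low"  # otherwise trust the label, cautiously

# Example: a "low" failure that still warrants a human look.
light = CheckResult("login-smoke-check", passed=False,
                    tool_severity="low",
                    first_ever_failure=True,
                    dependency_changed=False)
assert escalate(light)  # the engine light is on: go and find out more
```

The point is that the check result is an input to an investigation, not a verdict.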
Additionally, the thing that prevents a release is sometimes not one bug. Sometimes it’s one big problem; sometimes it’s a collection of smaller ones. The decision not to release might simply be that there are so many non-critical but prominent issues that releasing would make the business look incompetent, arrogant, or untrustworthy. Games often have this problem, where a vat of small issues makes the game look awful, and eager journalists love to write about games that release full of bugs.
So, unfortunately, you’re stuck with a qualitative evaluation: exploring the risks, feelings, and impact until you can decide one way or another. You can use numbers to aid this feeling, though, looking at functionality by how often it’s used or by how many users it touches. You can weigh up the cost of delay against a guess at the cost of lost users, support calls, and triaging reports, something like the sketch below.
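As a purely illustrative sketch (Python again, with made-up numbers and hypothetical function names; the real inputs would be guesses you’d refine with your own usage and support data), that weighing might look like:

```python
# A back-of-the-envelope comparison of releasing now with known issues
# versus delaying to fix them. Every number here is a guess you would
# argue about with your team; the point is to make the guesses explicit.

def expected_cost_of_releasing(affected_users: int,
                               churn_rate: float,
                               revenue_per_user: float,
                               support_tickets: int,
                               cost_per_ticket: float) -> float:
    """Rough expected cost of shipping with the known issues."""
    lost_revenue = affected_users * churn_rate * revenue_per_user
    support_cost = support_tickets * cost_per_ticket
    return lost_revenue + support_cost

def cost_of_delay(days: int, daily_revenue_at_risk: float) -> float:
    """Rough cost of holding the release while the issues are fixed."""
    return days * daily_revenue_at_risk

release_cost = expected_cost_of_releasing(
    affected_users=2_000,   # how many users hit the issue (from usage data)
    churn_rate=0.02,        # fraction we guess will leave over it
    revenue_per_user=50.0,
    support_tickets=150,    # guessed from past, similar issues
    cost_per_ticket=12.0,
)
delay_cost = cost_of_delay(days=5, daily_revenue_at_risk=1_000.0)

# The numbers don't make the decision; they feed the conversation.
print(f"release now: ~${release_cost:,.0f}  vs  delay: ~${delay_cost:,.0f}")
```

None of those numbers make the decision for you; they just make the guesses explicit enough to argue about.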
Hopefully that was at least helpful in guiding you towards some ideas about release; I know it’s not exactly what you were hoping for.