"All tests are green" is damend!

Continuing the discussion from Being bold and sharing an opinion:

What about when you deliver with known bugs?

Especially when we talk about test code and automated checks (it is unclear to me whether you are referring to those):
How do you make sure that coded expectations are not outdated?
What about when you know that the test code checks the wrong things, but you as a human assess the product to be right?
I see false positives happen as well as false negatives.

I agree that core functions should work.
In my understanding, “all tests” does not apply only to core functions / these tests can also include checks of details which you do not mind being broken for that release.

As I said, anything non-critical: you deliver with no critical bugs, but lesser bugs are “known issues” and your team is (hopefully) committed to fixing them as time permits.

> Especially when we talk about test code and automated checks (it is unclear to me whether you are referring to those):
> How do you make sure that coded expectations are not outdated?

The way I do it is to check whenever there is a conflict. If the tests are failing but the devs and project management agree that the way the code is working is what should be happening, it’s time to update the tests.
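
As a minimal sketch of what that update can look like (assuming a Python/pytest-style suite; the function name and the discount rule are invented for illustration):

```python
# Hypothetical example of updating an outdated coded expectation.
# The old test encoded last year's business rule (10% discount); once devs
# and project management confirm the new 15% rule is the intended behavior,
# the test is updated rather than the code being "fixed" to satisfy it.

def apply_discount(price: float) -> float:
    """Toy stand-in for the real production code."""
    return round(price * 0.85, 2)  # 15% discount is the agreed current rule


def test_discount_matches_current_business_rule():
    # Outdated expectation (would now fail): assert apply_discount(100.0) == 90.0
    assert apply_discount(100.0) == 85.0  # updated to the agreed behavior
```

The point is that the test, not the code, is what was wrong, so the coded expectation is brought back in line with the agreed behavior.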

> What about when you know that the test code checks the wrong things, but you as a human assess the product to be right?

That would be a case of a false-negative, possibly because the chosen proxy for the actual functioning isn’t working as well as old-fashioned (yes, that’s meant to be ironic) manual testing.
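
For instance (a hypothetical Python sketch with invented names), a check that asserts on a proxy such as exact output formatting can go red even though the behavior a user cares about is correct:

```python
# Sketch of a brittle proxy assertion: the check targets exact string
# formatting instead of the behavior users care about (the right amount).

def render_total(amount: float) -> str:
    """Toy stand-in; the format was changed from '$10.00' to 'USD 10.00'."""
    return f"USD {amount:.2f}"


def test_total_is_shown():
    # Brittle proxy assertion that would fail after the harmless format
    # change, even though a human assessing the output would call it correct:
    #   assert render_total(10.0) == "$10.00"
    # A more robust assertion targets the expectation that actually matters:
    assert "10.00" in render_total(10.0)
```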

> I see false positives happen as well as false negatives.

Absolutely. Automation isn’t and can’t be a replacement for actually using the software the way users will (and users are infinitely capable of doing things developers would never think to do, so the software is in turn infinitely capable of doing things users will think of as bugs).

> I agree that core functions should work.
> In my understanding, “all tests” does not apply only to core functions / these tests can also include checks of details which you do not mind being broken for that release.

Exactly! “All tests” means All tests, including the ones that cover nasty edge cases which might not be important for this particular release, and the flaky ones that someone is still trying to fix but which uncover useful information regardless, and that obscure feature that only one customer ever uses, and then only once in six months but if you break it you’ll hear all about it, and… You get the idea.

That’s why I like to have tests prioritized so that the 20% of the software which gets 80% or more of the use is solid. I will admit to wishing I could get to a position where I can have tests to check for cosmetic issues and prioritize them as low priority - but getting that 20% is my first priority for automation.
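
One way that prioritization can be expressed, as a sketch assuming pytest (the marker names are invented), is to tag tests so the heavily used core gates the release while cosmetic checks stay low priority:

```python
# Hypothetical sketch assuming pytest: tag tests by priority so the critical
# 20% can gate the release, while cosmetic checks are non-blocking.
# Custom markers would be registered in pytest.ini / pyproject.toml, e.g.:
#   [tool.pytest.ini_options]
#   markers = ["critical: release-gating core tests",
#              "cosmetic: low-priority checks"]
import pytest


def checkout(cart: list) -> str:
    """Toy stand-in for real production code."""
    return "confirmed" if cart else "empty"


@pytest.mark.critical  # core path: must be green before shipping
def test_checkout_confirms_non_empty_cart():
    assert checkout(["book"]) == "confirmed"


@pytest.mark.cosmetic  # low priority: a failure here becomes a "known issue"
def test_empty_cart_message_wording():
    assert checkout([]) == "empty"
```

With markers like these, `pytest -m critical` runs only the release-gating core, while the cosmetic checks can run in a slower, non-blocking job.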