To Fail or To Pass, That is the Question

When validating a work item, one of its acceptance criteria fails. After you discuss this failed status with your Product Owner, she decides to overlook this one acceptance criterion for now (because it is a “rare use case, after all”). What test status would you give this “failed but now dropped” test scenario / step: Failed, Blocked, or Untested?

I do not want to create a test report that has any red / failed statuses, given this particular circumstance. (Note that I have fully documented the test run with these details for the sake of posterity, in the event this use case returns to us later as a feature enhancement request or customer complaint.)


My solution was to mark the test step as “Failed” but the test run as “Passed”, while including the PO’s commentary in the test run’s Actual Result section.
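That per-step vs per-run split can be sketched as a tiny data model. This is a hypothetical illustration, not any real test-management tool’s schema: the `Step`, `TestRun`, `override`, and `actual_result` names are all made up. The idea is that the run’s overall status is derived from its steps unless a deliberate override (with the PO’s commentary) is recorded, so the step-level “Failed” is never lost:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Step:
    name: str
    status: str  # "passed" / "failed" / "blocked" / "untested"

@dataclass
class TestRun:
    steps: List[Step]
    override: Optional[str] = None  # e.g. "passed" when the PO accepts a failure
    actual_result: str = ""         # the PO's commentary lives here

    @property
    def status(self) -> str:
        # Derived from the steps, unless explicitly overridden.
        if self.override:
            return self.override
        return "failed" if any(s.status == "failed" for s in self.steps) else "passed"

# One step failed, but the PO accepted that failure:
run = TestRun(
    steps=[Step("login", "passed"), Step("rare-locale checkout", "failed")],
    override="passed",
    actual_result="PO accepted failure: rare use case; revisit on complaint/enhancement",
)
print(run.status)  # overall "passed", while the step-level "failed" is preserved
```

The point of the `override` field is that the decision is explicit and documented, rather than the failed step being silently flipped to a pass.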


How about “Accepted as failed”?

I try to avoid any test reporting with hard-wired statuses.
It’s a limitation which hides important information.
There are more statuses than any developer could predict.

e.g. “I found 3 bugs, had to work around 2, still have to investigate 1, and currently pause because of an interruption” :wink:


I think a lot depends on your corporate culture and the nature of the product under test.

If you are a small company and you have a good professional relationship with the PO, and/or if your product is not public-facing and its functions are not critical to life, limb or prosperity, then by all means mark the test run as Passed but with reservations, or with non-material failures. (And, of course, record your findings and, if they’re not recorded anywhere else, the conversations with the PO, their decision not to address the failure before release, and any reasons they gave.)

If, however:

  • you are a large organisation and these decisions are a long way down the food chain from senior management; or
  • your product is public-facing, and failures would impact public health, safety, livelihoods or savings,

then you must mark the test prominently as FAILED, and make it clear what recommendations you gave the PO and that the decision to release was not based on your opinion. I know we’re not, as a testing community, in favour of testers “signing off” on products before release, but that’s not the way a lot of the public and the judiciary see it, and if the product is not, in terms of the testing, “fit for release”, then any blame (and I’m sorry to make it into a blame game) that may accrue from the consequences of that release shouldn’t come to rest on your shoulders. Talking to a hostile media corps should not be in any tester’s job description.


A chestnut of a question, and after a few years I have to agree with @sebastian_solidwork: it’s not binary. Also with @robertday: for me, the only interesting tests are the ones that failed. I’ve been trying to read a non-fiction horror story about a poorly executed IT project; it’s a big book, and curiously most of the failures, and the reasons the system cost so much, came down to humans not paying attention to testing. I’m only about halfway through, because each page reveals how important communication is, but also that testers never, ever want their test reports entered into a court case. That just should never happen.

There is no fail; there are only unknown product risks. A certain large company I worked at for many years used a “passed with issues” overall status. That was bizarre, and we gradually changed the messaging to “it failed, let’s triage the big ones and release once they are fixed”. Failing tests are noise in a system and generate unnecessary interest; they often do need removing if they cannot be manually reproduced. One evil way of doing this is “ignoring” the test, and that brings in a fourth kind of test result: skipped. Skipped tests can live in your system for ages, and are even more benign than failed tests. Yes, skipped tests are part of your “unknown risk”!
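The four outcomomes being discussed can be sketched as a toy result model. Everything here is made up for illustration (`Skip`, `run_checks`, the check names); real frameworks offer the same idea natively, e.g. pytest’s skip and xfail markers. A PO-accepted failure is classified as “xfail” (expected failure) rather than red, and an ignored test as “skipped”:

```python
class Skip(Exception):
    """Raised by a check we deliberately ignore for now."""

def run_checks(checks, expected_failures=()):
    """Classify each check's outcome instead of forcing a binary pass/fail."""
    results = {}
    for name, check in checks.items():
        try:
            check()
        except Skip:
            results[name] = "skipped"  # benign-looking, but still unknown risk
        except AssertionError:
            # A known, accepted failure reports as "xfail", not as red.
            results[name] = "xfail" if name in expected_failures else "failed"
        else:
            results[name] = "passed"
    return results

def login():
    pass  # works fine

def flaky_report():
    raise Skip("cannot reproduce manually")

def rare_use_case():
    assert 1 == 2  # the PO-accepted failure

results = run_checks(
    {"login": login, "flaky_report": flaky_report, "rare_use_case": rare_use_case},
    expected_failures={"rare_use_case"},
)
print(results)
# {'login': 'passed', 'flaky_report': 'skipped', 'rare_use_case': 'xfail'}
```

Note that neither “skipped” nor “xfail” turns the report red, which is exactly why both belong in the “unknown risk” column rather than being forgotten.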


Somehow I can agree on that.

If we found failures, the status is Failed. The test result is as it is.
It’s something different when the PO decides to deliver despite that.

Your company should not mix test results with management / PO decisions.
Don’t be afraid to create test reports with red / failed results.
Maybe your company needs an additional column (or a new report) to show the PO’s decision about your result.


I absolutely love “accepted as failed”, @sebastian_solidwork. Thanks for this suggestion.


Never forget its counterpart “not accepted as passed”. :wink:
False positives and negatives.


If this scenario happened to me, I’d remove or amend the acceptance criterion, as it’s clearly not valid; otherwise we’d be fixing it immediately. But I’d make sure that the PO understands this: if they want it to be addressed, they need to add a ticket for that to happen. I’d be cautious of “overlook this for now”.

I’d be sure to share my narrative with the PO, explain anything I could foresee happening, and any knock-on effects, and if they are still happy not to address it, I’m happy with that.

If you have to, I’d do what you’ve done and make sure this decision is documented on the ticket somewhere.

I probably wouldn’t keep a test case/scenario for it, but I could maybe automate the scenario so I’m told if this behaviour ever changes, on the basis that it was accepted behaviour and, if it changes, it may no longer be. But I’d need more context to make that decision.
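That “tell me if this behaviour ever changes” idea can be sketched as a sentinel check. This is a hypothetical example, assuming a made-up `checkout_total` function and a made-up rare locale; the sentinel asserts the *current*, accepted-as-failed behaviour, so it passes while the known bug persists and fails loudly the moment the behaviour changes, prompting a re-triage of the PO’s decision:

```python
def checkout_total(items, locale="en"):
    """Hypothetical stand-in for the feature under test."""
    if locale == "kl":              # the "rare use case" the PO chose to overlook
        return sum(items)           # known bug: tax is never applied here
    return round(sum(items) * 1.2, 2)

def test_rare_locale_still_broken():
    # Asserts the accepted (buggy) behaviour. If someone ever "fixes" or
    # otherwise changes it, this check fails and the decision gets revisited.
    assert checkout_total([10.0], locale="kl") == 10.0, (
        "rare-locale behaviour changed -- revisit the PO's accepted-as-failed decision"
    )

test_rare_locale_still_broken()
print("sentinel ok: accepted behaviour unchanged")
```

pytest users could get the same effect with `@pytest.mark.xfail(strict=True)` on a test of the *correct* behaviour: it reports as xfail while broken, and strict mode turns an unexpected pass into a visible failure.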


I’d mark it as failed, and keep the notes, and have it be red as that’s what happened. If a PM or someone else decides to go on because it’s not important, that’s fine, but the test results are that it failed.

This has the extra advantage of making sure this doesn’t get lost in the noise, and someone can triage in the future whether this is an acceptance criterion that needs updating, or whether there are bugs that need to be reported/fixed. Having it visible as a fail and then later flipping to a pass is also useful, as it would highlight which version/release fixed it.


I agree with:

At the company where I work, a fail is a fail and is expected to be marked as such. Is it the test that is in error or out of date? Is the acceptance criterion too strict? Is the failure a trivial matter that should not be a blocker? Could be, but if the test does not pass as written then fail it – and document your decision in a way that will be helpful to engineers, fellow QAers, and the product manager. Most of our failed tests do not block the release, nor should they. YMMV of course.


Or, “accepted as blocked” per PO.
