Test case statuses: Passed, Skipped, Blocked, Retest, Failed, Not run

I have a test run, and for some test cases that failed it is difficult to decide whether the test case should be labelled with status Failed, Blocked, or Retest.

Another difficult case is a test case that is part of a test run but should not be (the feature was deleted). Should I choose Not Run, Skipped, or delete it from the test run?

Another one is when a feature in the app changed but the test case did not, yet it is still included in a test run. Should I remove it or update the test case?

I would like to know your strategy for choosing a good status for test cases, especially those that are more complex to decide.

3 Likes

Hi @adobes

I am assuming that you are working with a test case management tool.
Are you constrained to using the listed status labels, or can you create new ones? If the meaning isn’t clear, you could try changing them.

Looking at the label names I am assuming you could use these and not need any more labels:

  • Passed - test passed (obviously)
  • Failed - test failed for some reason
  • Blocked - You could not run that test case; something has prevented it, e.g. a test region / data issue
  • Not run - It’s a test you want to run, but it has not been run yet
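The four labels above could be modelled as a small enum. This is a hypothetical sketch in Python; the names come from the list in this thread, not from any particular test management tool’s API:

```python
from enum import Enum

class TestStatus(Enum):
    """Minimal status set, assuming the four labels above are enough."""
    PASSED = "Passed"      # test passed (obviously)
    FAILED = "Failed"      # test failed for some reason
    BLOCKED = "Blocked"    # could not run it, e.g. environment or data issue
    NOT_RUN = "Not run"    # a test you want to run, but it hasn't run yet

# Every test starts out as NOT_RUN until a run records a real result.
result = TestStatus.NOT_RUN
print(result.value)
```

Keeping the set this small makes each status unambiguous: there is exactly one label per outcome, so nobody has to debate which of two overlapping labels applies.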

I don’t think you want your “test run” including tests you don’t intend to run.

If tests are no longer relevant I would delete them. Or if you need to keep the information for some reason, set them as “archived” status, or move to another folder.

3 Likes

If you have to run a test that looks like it is no longer relevant, do you change its status to “Skipped”? Maybe you have a custom one like “Needs investigation”, or you pause the test run and investigate the test?

Can you share your test statuses which you use when doing manual testing?

1 Like

In that case it makes sense to me to skip it. You can resume if needs be.

I don’t really use statuses that much, but Skipped plus these four is what I’d start with if I needed to.

Any test management system that does not let you define lots of states is never going to get my vote, even though, like @azza554, I prefer minimal too. (Management tools that only allow a small set of states often have rigid workflows that may cramp your process style later on.)

  • Pass/Fail : In an ideal world, tests either pass or fail; there is no in between.
  • Skip : This creates two problems. It ignores time: skipped tests, and the test-code, documentation, or environment changes needed before a test can be run, quietly hide debt. And “Not run” is not a valid exit state or result; for me it falls into Unknown, because the test got skipped. It is still valid to say you don’t want to run this test now (for example, it’s an expensive full-regression case), so technically you are skipping it.
  • Not Yet Run : I do plan to run it, or to skip it. This is the initial state of all tests.

When I look at test states, I am thinking of two things: testing is fundamentally about what we know and what we don’t know, and the other thing is what to do with failures. We will always be happy with not knowing a few things; I put those into my “Skipped” (unknown) bucket, and that leaves me with passes and fails. Nobody cares about tests that pass, which means I want to be running tests that will fail as early as possible. I am left with:
P = Pass
F = Fail
S = Skipped
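Summarising a run under this three-state model is then just a tally of what you know (passes and fails) versus what you chose not to know yet (skips). A minimal sketch, using made-up results rather than output from any real tool:

```python
from collections import Counter

# Hypothetical run results using the minimal P/F/S model described above.
results = ["P", "F", "S", "P", "S", "P", "F"]

counts = Counter(results)
known = counts["P"] + counts["F"]   # outcomes we learned something from
unknown = counts["S"]               # outcomes we deliberately don't know yet

print(f"pass={counts['P']} fail={counts['F']} skipped={unknown}")
print(f"known outcomes: {known}/{len(results)}")
```

The point of the split is that skips are visible debt, not blanks: the report says explicitly how much you chose not to know.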

Lately I’m behind on automating, so I do a lot of manual testing. At the end of any release, I basically start in the area of my list where the last release found bugs. I explicitly mark things as skipped as I go if they will take too long, or if there is high confidence they did not break. When it gets to the end of the day, I mark anything I did not test as skipped. I don’t like to leave blanks for tests I did not run: when someone else reads your test report, there will not be enough space to explain why you did not run lots of tests anyway. (That’s another topic altogether.)

  • Retest : This is a test case which has not been run in the intended environment (maybe the developers gave you a new build). And I’m going to optimise a bit here: don’t retest code that has not changed. Retest a thing only if it’s in the “headline”.
    Being able to easily and intuitively order things, so that test cases covering features touched in a release come first, means they can be run often enough to give you confidence. I tend to run test cases that are likely to have been broken by a release twice anyway; it’s a comfort thing.
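That ordering idea can be sketched as a simple sort that puts test cases touching changed features first. The `touches_changed_feature` flag and the case names here are hypothetical, purely for illustration:

```python
# Hypothetical test-case records: a name plus whether the case covers a
# feature touched in this release.
cases = [
    {"name": "login", "touches_changed_feature": False},
    {"name": "new-checkout-flow", "touches_changed_feature": True},
    {"name": "search", "touches_changed_feature": False},
    {"name": "checkout-discounts", "touches_changed_feature": True},
]

# Run cases covering changed features first. Python's sort is stable,
# so the original order is preserved within each group.
ordered = sorted(cases, key=lambda c: not c["touches_changed_feature"])
print([c["name"] for c in ordered])
```

With an ordering like this, if the day runs out before the list does, the skipped tail is the low-risk end rather than a random slice.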

The goal for me is to find bugs, and the extra cognitive time spent on having five nice test states instead of just three or four takes time away from finding bugs. As Aaron points out, don’t let the tool get in your way. And finally, thank you for a brilliant question, Blazej.

2 Likes