How do you know when testing is completed for a feature, a release, or a project?

How do you respond when someone asks you, "have you completed testing on feature ###?"

Or, "have you completed testing for project ***?"


I know it is completed when the acceptance criteria have been met, the regression scripts are ready (or updated, if they needed updating), accessibility requirements are met, any security testing has been done, cross-browser testing is finished, and so on.

I have never done a written checklist but I have a mental one depending on the ticket itself.


Often asked :slight_smile:
For me: it’s done when I have a plan that I executed on. Stats and specs matter less than questions like: have the bugs we said we fixed actually been fixed? Does the new feature’s happy path work? Did the automation for the feature run? Stability is less of a concern. Lastly, in the run-up to release, have I had time to look for side-effects around the parts of the system the devs have told me might be impacted?

And only once I have described what I have checked to stakeholders, and they are happy, can we move to release. Remember, major releases do need retrospectives: if you ARE improving your own process between each major release, each release will go better than the last one.


I’d be careful to specify the extent of testing in any answer. We can’t exhaustively test for every outcome; we can only make our best judgement.

Honestly my knee-jerk answer to this is “when it’s been in production for 6 months with no customer bugs raised”.


For me the best answer to this question came from a company that builds airplanes, and specifically from their approach to risk-based testing. This is a long one, so hold on. First off, when they introduce a feature, they define the hazards. Basically, each feature is a hazard. The example I was given was a razor blade used for shaving. That’s the hazard. Then they identify outcomes, like cutting myself when I shave: medium probability, low impact. Versus cutting an artery when I shave: low probability, high impact. Each of these combinations of probability and impact is a risk.
So each feature is a hazard, and each combination of probability and impact is a risk. Coming up with the risks was a team effort.
Then they had categories for each risk zone. In a simplified way, high risk might require simulations, boundary/domain testing, negative testing, happy-path testing, and alternative-paths testing. Medium risk might get happy-path and domain testing. And for low risk they might do smoke testing only.

So in essence the risk defined the test scope. Any time anyone asked, they could simply answer which of the defined tests they had done and which were left.

Exactly which test techniques you employ for the different risk groups is, like most things, heavily dependent on your context. But setting up a test strategy that says "this kind of thing (high risk) will be tested using these techniques" can help you answer the question.

Good Luck


Thanks @ola.sundin for sharing this …

How did stakeholders respond to this? :sweat_smile:

Did you also share your planned tests? Did they agree with them?


Is that when your confidence level in the product quality is good enough?


It really depends on context, but in short, yes. If I know all they are doing is changing the alignment of an icon, and it has gone through code review and cross-browser testing, that’s different from, say, a feature rewrite.

When it comes to large features, or areas that touch other places, I gather evidence of it working, e.g. an accessibility matrix or the acceptance criteria, to support my sign-off. That is also when my confidence is (naturally) lower, so having the evidence means I know myself where things stand.

To be honest, I am not the decision maker on something being ready to release/testing being completed. That is the responsibility of my senior manager. (Not trying to shift “blame”, it’s just the way it is)


For this answer, I will refer to [this blog by Michael Bolton](https://www.developsense.com/blog/2018/02/how-is-the-testing-going/).

The short version is, I try to tell stories about the product, about how I tested the product, and the quality of the testing.

If I am satisfied with the stories, then the answer is, “For now, my part is done.” This almost never happens. It’s probably my fault; I’m somewhat of a perfectionist.

If I’m not satisfied with the answers, but someone who makes decisions about my work is satisfied with the answers, then the answer is, “I’m not totally happy with it, but for now, my part is done.” This happens more frequently.

If nobody is satisfied with the answers, then the answer is, “{This part of the story} needs work, but {other part of the story} looks pretty good.” Or maybe the answer is some variation of that.

I think it is important to mention, the ACTUAL quality of the product is not related to my answers to the question. If I started the thing I’m testing once, and it crashed hard, then the quality of the product is bad, the testing story is very short, and the quality of the testing is only related to that one test. But, the answer is still, “For now, I’m done.” So the phrase “For now,” in this case, is vitally important. I will usually supplement the statement with a “But I’m ready to continue if…” and list my conditions for continuing testing.


This has differed in every single place I have worked, maybe due to experience or context.

Currently, if I’m the only one doing the testing, then any questions might come back to me. I don’t want to be left holding the can, so I always come to the sprint planning meeting with a list:

  • A “resource plan” (who or what will be doing the testing), and exactly which build is under test
  • Any automation jobs we are relying on to support our release (incidentally, the test job for one of the platforms is offline right now)
  • A list of all new features I know about: “if it’s not in my list, it’s not getting tested”
  • Warnings about any risks, or features you don’t understand, even if they make you sound like a wuss. If you raise concerns in sprint planning, they are more likely to get actioned.

You have to be able to articulate this in a standup-format meeting. It’s hard, but if you cannot, then you are doing more than one sprint’s worth of work. If I am still waiting for a late bugfix, I let people know that coverage will be reduced, for example. If you said it in a formal meeting, there is no real need to write it down. This format means I don’t have to tell any kind of story (I like Michael Bolton’s post above, it is really good). It’s also worth reading about JBGE: http://agilemodeling.com/essays/barelyGoodEnough.html .


When the acceptance criteria have been met.


Luke, I start every testing project by providing an estimate, like 5 days or 10 days. So when anyone asks me about testing status, I just reply with how much of the estimate remains: if 5 of 10 estimated days have been used, then testing is 50% done.


I’d respond with ‘It depends’.

  • Is the development finished? If not - testing isn’t finished either;
  • Are there outstanding risks I haven’t tested for? If yes, are those risks agreed with the Stakeholders/Release manager?
  • Is the project/feature still going to be developed or was it frozen/postponed? If frozen - my testing might be finished;
  • Who’s asking, and what do they expect? We know that ‘testing’ can mean different things to different people/managers. One might expect documentation, big reports, scripts, stored artifacts, a presentation, a completed release or release notes, or just a simple sentence from the tester…
  • Is the project still funded/maintained, is there time still left for me?
  • Do I have other priorities as well, such as work on other projects?
  • And so on…

Thanks for the link. I like this concept, and this line: “The secret is to learn how to detect when you’ve reached the point of something being just barely good enough and then to stop working on it. Easy to say, hard to do.”


“The more I learn, the more I realize how much I don’t know.” Einstein

If by “completed or done” you mean “there is nothing else to test”, I would say this point can never be reached.

If you mean “it’s probably good to go”, I would say:

  • The known risks* introduced by the diff were deeply investigated or the strategy to investigate in production is ready;
  • The previously known behaviors were checked (which is a no-brainer, given you can make a computer do this work);
  • A rollback strategy seems to be ready to be fired if necessary.

* New behavior, regulations, etc.


When you can tell the team it’s ready for release AND be able to sleep that night.