When do you know testing is completed or done for a feature, release, or project?

How do you respond when someone asks you, "Have you completed testing on feature ###?"

Or "have you completed testing for project ***?

2 Likes

I know it is completed when the acceptance criteria have been met, the regression scripts are ready (if they needed updating), accessibility requirements are met, any security testing has been done, cross-browser testing is complete, etc.

I have never used a written checklist, but I have a mental one, depending on the ticket itself.

3 Likes

Often asked :slight_smile:
For me: it's done when I have a plan that I executed on. Stats and specs are less important than questions like "Have the bugs we said we fixed actually been fixed?" Does the new feature's happy path work? Did the automation for the feature run? (Stability is less of a concern.) Lastly, have I had time to look for side effects around the parts of the system the devs have told me might be impacted, in the run-up to release?

And only once I have described what I have checked to stakeholders, and they are happy, can we move to release. Remember, major releases do need retrospectives: if you ARE improving your own process between each major release, you are going to have a better release than the last one.

4 Likes

I'd be careful to specify the extent of testing in any answer. We can't exhaustively test for every outcome; we can only make our best judgement.

Honestly, my knee-jerk answer to this is "when it's been in production for 6 months with no customer bugs raised".

3 Likes

For me the best answer to this question came from a company that builds airplanes, and the answer is specifically their approach to risk-based testing. This is a long one, so hold on. First off, when they introduce a feature, they define the hazards. Basically, each feature is a hazard. The example I was given was a razor blade used for shaving. That's the hazard. Then they identify an outcome, like cutting myself when I shave: that is medium probability and low impact. Versus me cutting an artery when I shave: that is low probability and high impact. Each of these combinations is a risk.
So each feature is a hazard, and each combination of probability and impact is a risk. Coming up with the risks was a team effort.
Then they had categories for each risk zone. In a simplified way, high risk might call for, for instance, simulations, boundary/domain testing, negative testing, happy path testing, and alternative paths testing. Medium risk might get happy path and domain testing. And for low risk they might do smoke testing only.

So in essence the risk defined the test scope, and they could simply answer which of the defined tests they had done and which they had left, every time anyone asked.

Exactly which test techniques you need to employ for the different risk groups is, as with most things, heavily dependent on your context. But setting up a test strategy that says "this kind of thing (high risk) will be tested using these techniques" can help you answer the question.
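
To make that concrete, here is a minimal sketch of the principle in Python. This is not the airplane company's actual process: the scoring, the risk levels, and the technique lists are all illustrative assumptions.

```python
# Sketch of risk-based test scoping: classify each hazard outcome by
# probability and impact, then let the risk level decide the test scope.
# Levels, scoring, and technique lists are illustrative assumptions.

# Techniques agreed in the (hypothetical) test strategy, per risk level.
SCOPE_BY_RISK = {
    "high":   ["simulations", "boundary/domain", "negative", "happy path", "alternative paths"],
    "medium": ["happy path", "domain"],
    "low":    ["smoke"],
}

def risk_level(probability: str, impact: str) -> str:
    """Combine probability and impact ('low'/'medium'/'high') into one risk level."""
    score = {"low": 0, "medium": 1, "high": 2}
    combined = score[probability] + score[impact]
    return "high" if combined >= 3 else "medium" if combined == 2 else "low"

# The razor-blade hazard from the post, as (outcome, probability, impact).
outcomes = [
    ("cut myself when I shave", "medium", "low"),
    ("cut an artery when I shave", "low", "high"),
]
for outcome, prob, impact in outcomes:
    level = risk_level(prob, impact)
    print(f"{outcome}: {level} risk -> {', '.join(SCOPE_BY_RISK[level])}")
```

Once the strategy fixes a mapping like this, "which tests have you done and which are left?" has a ready answer.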

Good Luck

7 Likes

Thanks @ola.sundin for sharing this…

How did stakeholders respond to this? :sweat_smile:

Did you also share your planned tests? Did they agree with the planned tests?

1 Like

Is that when your confidence level in the product quality is good enough?

1 Like

It really depends on context, but in short, yes. If I know all they are doing is changing the alignment of an icon, and it has gone through code review and cross-browser testing, that's different to, say, a feature rewrite.

When it comes to large features, or areas that touch other places, I gather evidence of it working, e.g. an accessibility matrix or ACs, to support my sign-off. It is also when my confidence is lower (naturally), so having the evidence also means that I know myself where things are at.

To be honest, I am not the decision maker on something being ready to release/testing being completed. That is the responsibility of my senior manager. (Not trying to shift "blame", it's just the way it is.)

1 Like

For this answer, I will refer to [this blog by Michael Bolton](https://www.developsense.com/blog/2018/02/how-is-the-testing-going/).

The short version is, I try to tell stories about the product, about how I tested the product, and about the quality of the testing.

If I am satisfied with the stories, then the answer is, "For now, my part is done." This almost never happens. It's probably my fault; I'm somewhat of a perfectionist.

If I'm not satisfied with the answers, but someone who makes decisions about my work is satisfied with the answers, then the answer is, "I'm not totally happy with it, but for now, my part is done." This happens more frequently.

If nobody is satisfied with the answers, then the answer is, "{This part of the story} needs work, but {other part of the story} looks pretty good." Or maybe the answer is some variation of that.

I think it is important to mention that the ACTUAL quality of the product is not related to my answers to the question. If I started the thing I'm testing once, and it crashed hard, then the quality of the product is bad, the testing story is very short, and the quality of the testing is only related to that one test. But the answer is still, "For now, I'm done." So the phrase "For now," in this case, is vitally important. I will usually supplement the statement with a "But I'm ready to continue if…" and list my conditions for continuing testing.

1 Like

This has differed in every single place I have worked, maybe due to experience or context.

Currently, if I'm the only one doing the testing, then any questions might come back to me. I don't want to be left holding the can, so I always come to the sprint planning meeting with a list:

  • A "resource plan" (who or what will be doing the testing) and exactly which build is under test
  • Any automation jobs we are relying on to support our release (incidentally, the test job for one of the platforms is offline right now)
  • A list of all new features I know about: "if it's not in my list, it's not getting tested"
  • Warnings about any risks, or features you don't understand, even if they make you sound like a wuss. If you raise concerns in sprint planning, they are more likely to get actioned.

You have to be able to articulate this in a standup-format meeting. It's hard, but if you cannot, then you are doing more than one sprint's worth of work. If I still have to wait for a late bugfix, I let people know that coverage will be reduced, for example. If you said it in a formal meeting, there's really no need to write it down. The format above means I don't have to tell any kind of story (I like Michael Bolton's post here^^, it is really good). It's also good to read about JBGE: http://agilemodeling.com/essays/barelyGoodEnough.html

2 Likes

When the acceptance criteria have been met.

1 Like

Luke, I start every testing project by providing an estimate, like 5 days or 10 days. So when anyone asks me about testing status, I just reply with the time remaining from that estimate: if 5 of 10 days have been spent testing, then 50% of the testing is done.
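
For what it's worth, that reporting style is simple arithmetic; a minimal sketch (the function name is made up):

```python
# Sketch of time-based progress reporting: percent done is days spent
# over days estimated. Note this measures elapsed estimate, not coverage.

def percent_done(days_spent: float, days_estimated: float) -> float:
    """Report testing progress as the share of the original estimate used."""
    if days_estimated <= 0:
        raise ValueError("estimate must be positive")
    return 100.0 * days_spent / days_estimated

print(percent_done(5, 10))  # 50.0 -> "50% of the testing is done"
```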

1 Like

I'd respond with 'It depends'.

  • Is the development finished? If not, testing isn't finished either;
  • Are there outstanding risks I haven't tested for? If yes, are those risks agreed with the stakeholders/release manager?
  • Is the project/feature still going to be developed, or was it frozen/postponed? If frozen, my testing might be finished;
  • Who's asking and what do they expect? We know that 'testing' can mean different things to different people/managers. One might expect documentation, big reports, scripts, stored artifacts, a presentation, a completed release or release notes, a simple sentence from the tester, …
  • Is the project still funded/maintained, and is there still time left for me?
  • Do I have other priorities as well, like work on other projects?
  • And so on…

2 Likes

Thanks for the link. I like this concept, and this line: "The secret is to learn how to detect when you've reached the point of something being just barely good enough and then to stop working on it. Easy to say, hard to do."

2 Likes

"The more I learn, the more I realize how much I don't know." - Einstein

If by "completed or done" you mean "there is nothing else to test", I would say this point can never be reached.

If you mean "it's probably good to go", I would say:

  • The known risks* introduced by the diff were deeply investigated or the strategy to investigate in production is ready;
  • The previously known behaviors were checked (which is a no-brainer, given you can make a computer do this work; see the sketch after this list);
  • A rollback strategy seems to be ready to be fired if necessary.

* New behavior, regulations, etc.
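
On the second bullet: here is a minimal sketch of what handing the previously known behaviors to a computer can look like, using pytest. The function under test and its expected values are hypothetical placeholders.

```python
# Sketch of automating "previously known behaviors" as regression checks.
# The function and the expected values are placeholders; a real suite
# would encode the behaviors your team has actually agreed on.
import pytest

def discount(total: float) -> float:
    """Hypothetical rule: 10% discount on orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

@pytest.mark.parametrize("total,expected", [
    (99.99, 99.99),   # below threshold: no discount
    (100.0, 90.0),    # at threshold: discount applies
    (250.0, 225.0),   # above threshold: discount applies
])
def test_known_discount_behavior(total, expected):
    # Each case is a previously agreed behavior, so a failure here flags
    # a regression rather than a new, unexplored risk.
    assert discount(total) == pytest.approx(expected)
```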

1 Like

When you can tell the team it's ready for release AND be able to sleep that night.

2 Likes