How do you respond when someone asks you "have you completed feature ###?"
Or "have you completed testing for project ***?"
I know it is completed when the acceptance criteria have been met, the regression scripts are ready (if they needed updating), accessibility requirements are met, any security testing has been done, it has been cross-browser tested, etc.
I have never used a written checklist, but I have a mental one depending on the ticket itself.
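The mental checklist above could be written down as a simple structure. This is only a sketch; the item names are examples drawn from the reply, not an official definition of done.

```python
# A sketch of turning the mental "definition of done" above into a
# written checklist. Item names are illustrative examples.
DONE_CHECKLIST = {
    "acceptance criteria met": False,
    "regression scripts updated (if needed)": False,
    "accessibility requirements met": False,
    "security testing done": False,
    "cross-browser tested": False,
}

def is_done(checklist: dict[str, bool]) -> bool:
    """Testing is 'done' only when every item is ticked."""
    return all(checklist.values())

def remaining(checklist: dict[str, bool]) -> list[str]:
    """What to report when someone asks 'is it done?'."""
    return [item for item, ticked in checklist.items() if not ticked]
```

The advantage over a purely mental list is that `remaining()` gives you a concrete answer to hand back instead of a yes/no.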
Often asked
For me: it's done when I have a plan that I executed on. Stats and specs are less important than questions like "have the bugs we said we fixed actually been fixed?", "does the new feature's happy path work?", and "did the automation for the feature run?"; stability is less of a concern. Lastly, have I had time to look for side effects in the parts of the system the devs have told me might be impacted, in the run-up to release?
And only when I have described what I have checked to stakeholders, and they are happy, can we move to release. Remember, major releases do need retrospectives: if you ARE improving your own process between each major release, each release will be better than the last.
I'd be careful to specify the extent of testing in any answer. We can't exhaustively test for every outcome; we can only make our best judgement.
Honestly, my knee-jerk answer to this is "when it's been in production for 6 months with no customer bugs raised".
For me the best answer to this question came from a company that builds airplanes, and it is specifically their approach to risk-based testing. This is a long one, so hold on. First, when they introduce a feature, they define the hazards. Basically, each feature is a hazard. The example I was given was a razor blade used for shaving: that's the hazard. Then they identify an outcome, like cutting myself when I shave. That is medium probability and low impact. Versus cutting an artery when I shave: that is low probability and high impact. Each of these combinations is a risk.
So each feature is a hazard, and each combination of probability and impact is a risk. Coming up with the risks was a team effort.
Then they had test categories for each risk zone. In a simplified way, for high risk they would specify, for instance: simulations, boundary/domain testing, negative testing, happy-path testing, and alternative-path testing. Medium risk might get happy-path and domain testing. And for low risk they might do smoke testing only.
So in essence, the risk defined the test scope. Any time someone asked, they could simply answer which of the defined tests they had done and which they had left.
Exactly which test techniques you need for the different risk groups is, as with most things, heavily dependent on your context. But the principle holds: setting up a test strategy that says "this kind of thing (high risk) will be tested using these techniques" can help you answer the question.
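The probability-times-impact scoping described above can be sketched in a few lines. The risk levels, score thresholds, and technique lists here are illustrative assumptions, not the airplane company's actual matrix.

```python
# A minimal sketch of risk-based test scoping: combine probability and
# impact into a risk zone, then map the zone to a set of test techniques.
# All values below are example assumptions.

PROBABILITY = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

# Techniques per risk zone, loosely following the reply above.
TEST_SCOPE = {
    "high": ["simulation", "boundary/domain", "negative",
             "happy path", "alternative paths"],
    "medium": ["happy path", "domain"],
    "low": ["smoke"],
}

def risk_level(probability: str, impact: str) -> str:
    """Combine probability and impact into a coarse risk zone."""
    score = PROBABILITY[probability] * IMPACT[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

def planned_tests(probability: str, impact: str) -> list[str]:
    """Answer 'which tests will you run?' for one hazard/outcome pair."""
    return TEST_SCOPE[risk_level(probability, impact)]
```

With a table like this, "is testing done?" becomes a checkable question: have the techniques mapped to this risk been executed or not.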
Good Luck
How did stakeholders respond to this?
Did you also share your planned tests? Did they agree with them?
Is that when your confidence level in the product quality is good enough?
It really depends on context, but in short, yes. If I know all they are doing is changing the alignment of an icon and it has gone through code review and cross-browser testing, that's different from, say, a feature rewrite.
When it comes to large features, or areas that touch other places, I gather evidence of it working, e.g. an accessibility matrix or ACs, to support my sign-off. That is also when my confidence is lower (naturally), so having the evidence means that I myself know where things are at.
To be honest, I am not the decision maker on something being ready to release/testing being completed. That is the responsibility of my senior manager. (Not trying to shift "blame", it's just the way it is.)
For this answer, I will refer to [this blog by Michael Bolton](https://www.developsense.com/blog/2018/02/how-is-the-testing-going/).
The short version is, I try to tell stories about the product, about how I tested the product, and the quality of the testing.
If I am satisfied with the stories, then the answer is, "For now, my part is done." This almost never happens. It's probably my fault; I'm somewhat of a perfectionist.
If I'm not satisfied with the answers, but someone who makes decisions about my work is satisfied with them, then the answer is, "I'm not totally happy with it, but for now, my part is done." This happens more frequently.
If nobody is satisfied with the answers, then the answer is, "{This part of the story} needs work, but {other part of the story} looks pretty good." Or maybe the answer is some variation of that.
I think it is important to mention that the ACTUAL quality of the product is not related to my answers to the question. If I started the thing I'm testing once and it crashed hard, then the quality of the product is bad, the testing story is very short, and the quality of the testing relates only to that one test. But the answer is still, "For now, I'm done." So the phrase "For now," in this case, is vitally important. I will usually supplement the statement with a "But I'm ready to continue if…" and list my conditions for continuing testing.
This has differed in every single place I have worked, maybe due to experience or context.
Currently, if I'm the only one doing the testing, then any questions might come back to me. I don't want to be left holding the can, so I always come to the sprint planning meeting with a list.
You have to be able to articulate this in a standup-format meeting. It's hard, but if you cannot, then you are doing more than one sprint's worth of work. If I still have to wait for a late bugfix, I let people know that coverage will be reduced, for example. If you said it in a formal meeting, there's no real need to write it down. This format means I don't have to tell any kind of story (I do like Michael Bolton's post above, it is really good). It's also good to read about JBGE: http://agilemodeling.com/essays/barelyGoodEnough.html
When the acceptance criteria have been met.
Luke, I start every testing project by providing an estimate, like 5 days or 10 days. So when anyone asks me about testing status, I just reply with how much of that estimate remains: if 5 of 10 days have been used, then 50% of the testing is done.
I'd respond with "It depends".
Thanks for the link. I like this concept. And this line: "The secret is to learn how to detect when you've reached the point of something being just barely good enough and then to stop working on it. Easy to say, hard to do."
"The more I learn, the more I realize how much I don't know." – Einstein
If by "completed or done" you mean "there is nothing else to test", I would say this point can never be reached.
If you mean "it's probably good to go", I would say:
* New behavior, regulations, etc.
When you can tell the team itās ready for release AND be able to sleep that night.