End of test review - who's involved?


When it comes to reviewing the testing completed for a feature, I'm wondering how you ensure that testing has covered the right things and that everyone is happy.

  • Do you trust the tester?
  • Do the BA, feature team, and tester review the test scenarios together?
  • Something else?
  • Or “it depends”: do whatever might be required based on the complexity of the feature?

Within your teams, do you take the time at the end, before signing off, to fully review?


For complex issues we like our product experts to have a second look. They are aware of all the special configurations we might have missed and all the flaky features. If we are testing an area where we don’t usually test, a colleague from the test team with more experience might have a look.
We don’t do test reviews for everything though.

  • Review test cases/checklists with your QA team members and with your QA lead, if you have them :slight_smile:
  • Review test cases/checklists with devs, especially for complex features or technical changes
  • In some cases, BA or FO (technical) may provide valuable insights on testing coverage

And I would suggest doing it not at the end when testing is completed but before and/or during the testing.
Additionally, to ensure “that everyone is happy”, have some sort of demo and/or acceptance review at the end of testing with stakeholders (FO, BA, Designer, PdM, etc.). As for “that testing has covered the right things”: well, it’s partly about that, but it’s more that they like the result and don’t see any issues, which in turn helps you understand that testing has covered the right things.


That all sounds very sensible.

Really this was about trying to gauge what folk actually do in their place of work (or past examples which have worked well).


First, I would say trust and verify.
To verify: can the tester explain the functionality and its associated risks (where and how likely it is to fail)? If so, you are on the right road. Next, ask about coverage; again, if the tester can explain coverage in relation to the risks, things are looking good. Now, check the test execution. Have all the risks been tested? If you still feel there are risks, having other experts test is always an advantage.
Doing all this close to a release is not good. Depending on your methodology, I would expect this information to be in the test strategy, or discussed and agreed during story refinement.
I will also say I am not a fan of ‘signing off’ unless it is very clear what ‘signing off’ means. In my experience, everybody has a different view. My solution is to either automate the exit criteria or bring everybody into a room, and everybody has their say. The release is a joint decision. Importantly, everybody does not have to say yes.
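To make the “automate the exit criteria” idea a little more concrete, here is a minimal sketch of what a release-gate check might look like. The metric names, the shape of the input, and the thresholds are entirely hypothetical assumptions for illustration, not any real tool’s API:

```python
# Hypothetical sketch of automated release exit criteria.
# Metric names and thresholds are assumptions, not a real tool's API.

def exit_criteria_met(metrics, min_pass_rate=0.98, max_open_blockers=0):
    """Return (ok, reasons): ok is True only if every criterion passes."""
    reasons = []
    total = metrics["tests_passed"] + metrics["tests_failed"]
    pass_rate = metrics["tests_passed"] / total if total else 0.0
    if pass_rate < min_pass_rate:
        reasons.append(f"pass rate {pass_rate:.1%} below {min_pass_rate:.0%}")
    if metrics["open_blockers"] > max_open_blockers:
        reasons.append(f"{metrics['open_blockers']} blocker(s) still open")
    if not metrics["risk_review_done"]:
        reasons.append("risk review not completed")
    return (not reasons, reasons)

# Example: pass rate is acceptable, but one blocker is still open,
# so the gate fails with a specific reason the whole team can see.
ok, reasons = exit_criteria_met(
    {"tests_passed": 490, "tests_failed": 10,
     "open_blockers": 1, "risk_review_done": True}
)
```

The point of a sketch like this is that the criteria are explicit and visible to everyone, so “signing off” stops being a matter of individual interpretation.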


Wait, why would just the testers have a review? Surely the product’s investment in quality, and even in things like security, belongs to the entire team?

But to answer the question: not often enough. The rush to do manual and automated testing ahead of a release is just so great that one often has to say “this is better than what we had before”, and then move on to the next thing. Which is really why making it easy for everyone (and I really mean other teams when I use the word everyone) to inject quality matters so much.


Welcome to the MOT community, Ken. Yes, very thought-provoking insights there. Timing of communication needs to be played with, and again, not everyone has to agree; it undermines quality evolution when we use strict policies and procedures to try and force people to just toe the party line. Keen to get more of your insights.


Not sure what you are saying here, Conrad.

I had picked out this key point of yours @kenbren. It’s not every day that people remind us that rushing is itself a tactical decision, and really a “process-driver”. It forces everyone under deadline pressure to merely agree, and either release too early or agree and continue the death march with one more release candidate. For years, @kenbren, I have felt extreme pressure to just say “yes” so that I don’t become the bad guy in the room. I’m a bit on the shy side, and explaining why I think the quality is not good enough to release now when I say “no” is stressful, but perhaps I need to. Because when I as a tester do say “No, I NEED MORE TIME”, it can bring me better closure, as well as a better understanding of the company goal.

I’m not entirely sure, timing-wise, what @monsieurfrench means by “end of test review”, because even during the release meeting, and even after we publish, I carry on testing. Just in case I can find a bug in live that we could not find in the test environment, and I sometimes do. I just hope that what we find after release does not prompt a hotfix, but we keep trying, because testing of a release never truly ends; we just suddenly start getting customer metrics back from our biggest tester. Why would we exclude their feedback, as if they weren’t also real testers?

Where I work, each of our 5 teams has a meeting to decide whether we will or won’t release, much like everyone else. I rather enjoy that testing has, over the years, become a key part of these kinds of meetings. Some call them “war rooms” when a release is slipping its deadline, and some call them a “release readiness review”, or RRR, when times are normal. QA results are always under the spotlight, but what I don’t like is that the increased interest in what QA have to report misses the big point. A QA team can only detect some of the product risks, and these meetings often ignore broader business goals, at a time when a team often just wants to get stuff out of the door because it helps their metric of releasing often and releasing quickly. Security is something that 10 years ago was also not a big part of the review, so that’s a healthy shift.

Everybody has a say in the meeting, but when we rely on automated test results, we ignore some risks, and we also ignore that automation results rarely mean the same thing across the entire tech stack. Some teams have good automation, but automation is a poor gauge of the product’s UI and performance experience.

@conrad.braam, in the past I took the quality of the solution in production very personally. I don’t any longer; instead, I think the following:

  1. If there is a defect in production, I accept that test/QA missed it. That does not mean we “should” have found it, or that we should beat ourselves up. You analyse it and learn what you can do to improve things. There will always be defects in production; we try to avoid the critical ones.
  2. The perceived quality of a solution is more than just defects in production. It can be a big one, but it might also include the speed of fixing defects, how support reacts to your customers, training, new features, etc.
  3. Do your best to help the broader organisation understand quality and be the trusted advisor. There will be ups and downs and frustrations, which are part of the job. But there are also other companies.