Critiquing your own testing

I’ve been wondering how to critique your own testing. By this I mean figuring out where the gaps in your testing strategy and thought processes are, working out where you might need to learn more, and even looking at how you ask questions and what you ask, to see whether you’re getting the right information — things like that.

I review work with as many team members as possible to find gaps in my thinking, and I document what I have done so I can see what I’ve not done, but I feel I’m missing a big-picture strategy.

What do you do?


The only way I can do it is by looking at the issues that are found once a product or change goes live.

Once I know what I missed, I can work out whether there was a different way of approaching my testing, or questions I could have asked, that might have uncovered that problem in test rather than in live. If I can think of something that would have helped, I keep it in mind when the next piece of work comes along.


I don’t really do it explicitly, but generally I look at the pattern and nature of bugs that I’ve ‘missed’ and factor the lessons learnt into my testing approach going forward.

e.g. “When a change is made to feature X, I also need to spend some time testing feature Y. If I don’t, then bugs like A, B or C could occur again in future.”
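One lightweight way to make this habit explicit is to keep a small log of escaped defects alongside the lesson each one taught, so past lessons resurface when a related feature changes again. Here is a minimal sketch in Python — all the names, fields, and example entries are invented for illustration, not part of any specific tool:

```python
from dataclasses import dataclass

@dataclass
class EscapedDefect:
    """A bug that reached live, plus the lesson it taught."""
    summary: str
    changed_feature: str   # what was changed when the bug slipped through
    affected_feature: str  # where the bug actually surfaced
    lesson: str            # what to do differently next time

# Hypothetical entries illustrating the "change to X broke Y" pattern.
log = [
    EscapedDefect("Checkout total wrong after discount change",
                  changed_feature="discounts",
                  affected_feature="checkout",
                  lesson="Retest checkout whenever discounts change"),
    EscapedDefect("Search ignores new product category",
                  changed_feature="catalogue",
                  affected_feature="search",
                  lesson="Retest search whenever the catalogue changes"),
]

def lessons_for(changed_feature: str) -> list[str]:
    """Surface past lessons relevant to the feature being changed now."""
    return [d.lesson for d in log if d.changed_feature == changed_feature]

print(lessons_for("discounts"))
# → ['Retest checkout whenever discounts change']
```

Even a spreadsheet with the same three or four columns does the job; the point is recording the changed-feature/affected-feature pairing, not the code.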


I think what you’re talking about is finding suitable levels of coverage. A strategy is how to achieve coverage, so gaps in strategy mean gaps in coverage. Given that coverage is a “good enough” concept, and only makes sense with respect to a specific model, I need to know which models I need to achieve coverage against and how thorough that investigation needs to be.

So I need to have good contextual knowledge for informed risk analysis, and I need to have good product knowledge to understand what details need to be covered. If I’m missing a big-picture strategy it’s usually because I don’t understand high-level concepts about the project. One way to tackle this is to write out a paragraph describing what the product is for and who the product is for; that way I know I understand why it exists and who the audience is.

Product websites and sales materials usually help here, because that’s what the users are expecting when they hand over money. Talking to product support (phone support, IT support, Operations, whoever) can help me understand who my internal clients are. When I understand what it’s for and who it’s for, I can better establish coverage with respect to models that fit purpose and usage.

If I’m looking for new ideas in the product, here are a few techniques I use:

  • Using inside-out and outside-in risk analysis, coding and categorising my findings
  • Using project-specific risk catalogues
  • Using generic catalogues (HTSM, lists of verbs, and others)
  • Inventing user scenarios
  • Pair testing (sometimes with a tester)
  • Static review of the code
  • Creating and questioning explicit models of processes, architecture, and interfaces
  • Evaluating risk by framing via the 5-fold test system
  • Freeform exploratory sessions, breaking patterns of recent testing focus
  • Other defocus techniques during exploration that may generate ideas (multiple factors at a time, broader observations, varying models)
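A risk catalogue (project-specific or generic) becomes actionable once you cross it with a list of concrete product areas to generate candidate test ideas or session charters. A minimal sketch in Python — the catalogue entries and product areas below are invented examples, not a real catalogue:

```python
# Hypothetical risk catalogue: each category maps to generic prompts
# that can be crossed with concrete areas of the product under test.
risk_catalogue = {
    "data": ["What if this input is empty?", "What if it is huge?"],
    "timing": ["What if two users do this at once?"],
    "interfaces": ["What if a downstream service is slow or down?"],
}

product_areas = ["login", "report export"]

# Cross every product area with every prompt to get candidate charters.
charters = [f"{area}: {prompt}"
            for area in product_areas
            for prompts in risk_catalogue.values()
            for prompt in prompts]

for charter in charters:
    print(charter)
```

Most of the generated combinations won’t be worth pursuing; the value is in skimming the list and noticing the two or three pairings you hadn’t thought to question.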

Hope that’s helpful!


Which of course just goes to show how important it is for testers to be included in a project from as early a stage as possible, rather than having a finished product dumped on them with the instruction “Test this.” Which still happens.


That’s really helpful, thanks @kinofrost! I’m in a weird situation with the type of product we have, how the team is laid out, and the interactions we have (or don’t have), so this is a good reminder of all the things to look at, and to see how I can apply them to this setup.

I’m also taking over a sole tester role, and there was no overlap, so I’m also looking at the test suite that is in place and seeing how I can build on and improve that with no input from other testers. An interesting learning curve, but I think I’m getting there :smiley:
