Reviewing Regression Tests - who?

Wondering what people’s thoughts are on WHO should be reviewing regression test packs in terms of coverage, etc.?
For context, we have multiple streams, each with its own testers. Presently I leave the testers to determine their own regression coverage, and as lead (across all the streams) I find that when the proverbial occasionally hits the fan (e.g. a serious issue leaks through to production), I’m the one who has to answer the awkward questions.
Hindsight is a wonderful thing, of course, and even with a review the gaps may not be spotted anyway, but do you feel Test managers/Leads should be responsible for reviewing the test coverage in a regression pack?
I definitely feel the tests need some sort of independent review.
If I spent time reviewing the regression tests and then did a post-sprint review of the changes, I feel this could end up being all I do, or possibly border on micromanaging.

My 2 cents on this.
It sounds like you feel it would be useful to have the regression tests reviewed, but don’t feel you are in the best position to do that yourself. I really like that you consider the micromanaging angle, as in my reality I see people not respecting such boundaries.
You could advocate for and initiate those reviews, e.g. in an all-testers meeting, if you have one.
Potential review setups that come to my mind:

  • A tester from another stream reviews the regression tests. This could be a nice way to share siloed knowledge, if that is a topic in your context.

  • The tester pairs up with a colleague from the team. Devs, POs, Support, and other people may not be designated testers, but they can help testers come up with ideas we never thought of. This also helps to share knowledge, build acceptance of the testing, and create transparency.

  • The team of testers, including you, could do a review as a group.

  • My personal on-the-go review happens whenever someone supports my testing, or joins the team and comes across my test cases/charters for the first time. Whenever I get feedback or questions, that is a review of my tests: whether they are clear and understandable, whether they are lacking a particular case, etc.

Rotation/secondments. Make sure that at least 10% of every team rotates into another team at least once a year. This breaks down the silos a tiny bit, but you can do much more; in fact a phrase comes to mind: “Community of practice”. Jam that into the search bar, or click this linky: Search results for 'community of practice' - The Club. But you knew that already @pmfrench; it’s looking to me like a management/people pain problem which only managers can really solve. Sounds like component owners could communicate better. You do have a release manager? Right?

I often prefer that developers write them and testers review and coach, but regression is a team risk and needs input from both sides.

The reviews, though, I’d keep light, in the form of a show and tell. Whether developers or testers create them, talk through the coverage: which areas of the system are covered, anything new of interest, and any areas the developers would flag for potential additional regression risk that we could go through together in more detail. So it’s more of a verbal discussion review than actually reviewing the tests themselves or how they decided to implement that coverage.

This has worked when we have kept the number of tests fairly light, which I tend to advocate they always be. When something is missed, don’t focus on the specific thing that regressed; look at the type of thing that was missed and see what you can do to help the team with similar things in future. Perhaps a regression pack is not best placed to catch that sort of thing; a code review, for example, might be better placed for some regression risks.

On larger products, though, I have seen regression packs quickly become like Frankenstein’s monster. In addition to the discussion-based reviews, these often need technical-debt reviews and sometimes very harsh “throw those tests away” sessions.
