How do you review the testing on in-flight projects?

On This Week in Testing, I jumped on stage with Simon Tomes & Aj Wilson to talk about my efforts to establish a test review process in line with eight engineering principles:

  1. We Deliver Value to Customers Frequently and Reliably
  2. We Care About Quality
  3. We Document Our Systems
  4. We Build Observable Systems
  5. We Automate Tests
  6. We Value User Experience
  7. We Build Secure and Available Systems
  8. We Build Performant and Scalable Systems

Each of these principles covers factors that can improve quality and/or testability. For example:

  1. We work in test environments representative of live (managed through IaC) to simulate real-world scenarios
  2. Code is easy to work with and has few areas of technical debt
  3. Test cases document the business logic on which they rely
  4. Logs can be used to identify the steps to reproduce an issue
  5. Automated testing targets the appropriate areas & layers of the system
  6. Accessibility is always considered
  7. Security scans are performed (and actioned!)
  8. Performance is tested with realistic data volumes

The idea is that a more experienced tester has a regular check-in with a project's lead tester and rubber-ducks the state of testing on the project. The discussion should highlight areas for improvement, in line with the engineering principles.
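To make the check-in concrete, the eight principles could be turned into a simple checklist that the reviewer and lead tester walk through together. This is only a sketch of one possible structure; the prompt wording and the `ReviewItem` fields are hypothetical, loosely based on the example factors listed above.

```python
from dataclasses import dataclass, field

# Hypothetical prompts, one per engineering principle; the wording
# is adapted from the example factors listed in the post above.
PRINCIPLE_PROMPTS = {
    "Deliver Value Frequently and Reliably": "Are test environments representative of live?",
    "Care About Quality": "Is technical debt being tracked and paid down?",
    "Document Our Systems": "Do test cases document the business logic they rely on?",
    "Build Observable Systems": "Can logs be used to reproduce an issue?",
    "Automate Tests": "Does automation target the right areas and layers?",
    "Value User Experience": "Has accessibility been considered?",
    "Build Secure and Available Systems": "Have security scans been run and actioned?",
    "Build Performant and Scalable Systems": "Has performance been tested with realistic data volumes?",
}

@dataclass
class ReviewItem:
    principle: str
    prompt: str
    status: str = "not discussed"   # e.g. "on track", "needs attention"
    actions: list = field(default_factory=list)

def new_review():
    """Build a fresh checklist for one check-in session."""
    return [ReviewItem(p, q) for p, q in PRINCIPLE_PROMPTS.items()]

def needs_attention(review):
    """Items flagged for follow-up after the rubber-duck discussion."""
    return [item for item in review if item.status == "needs attention"]
```

The output of `needs_attention` would give the lead tester a short, principle-aligned action list to take away from each session.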

So I’ve got to ask, how do you review in-flight projects?

For my own context, I work at a small consultancy where we often have Test Engineers working on their own, alongside a development team of 2-4 software engineers. I am looking to embed the test review process as a way to support the Lead Test Engineer on the project. The engineering principles outlined above are organization-wide and already in place. We have other check-ins with testers to support their wellbeing (line manager) and professional growth (career coach), so this process need only look at the technical and testing health of their project (in my opinion, but I am happy to be challenged on this)!


From my knowledge and the provided context, I can suggest establishing a structured test review process with clear objectives focused on QA, support, and improvement. Schedule regular sync meetings between the lead and project testers (e.g. weekly), using a clear, known agenda that covers project status, testing progress, adherence to engineering principles, code quality, and issue resolution. Document the outcomes, set action items, assign them to responsible people, and track progress with metrics/KPIs (defect density, test coverage, etc.). Use tools (e.g. Jira, Confluence, Slack) for efficient communication, feedback, and storing project/testing data, ensuring the process evolves to meet project needs and engineering principles.
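For the metrics mentioned here, a minimal sketch of the two named examples could look like the following. These use the common textbook definitions (defects per thousand lines of code, and fraction of requirements covered by at least one test), not any prescribed standard, and the function names are my own.

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    if size_kloc <= 0:
        raise ValueError("size_kloc must be positive")
    return defects_found / size_kloc

def requirements_coverage(covered: int, total: int) -> float:
    """Fraction of requirements exercised by at least one test."""
    if total <= 0:
        raise ValueError("total must be positive")
    return covered / total

# Example: 12 defects in a 4,800-line service, 45 of 50 requirements covered.
print(defect_density(12, 4.8))        # 2.5 defects per KLOC
print(requirements_coverage(45, 50))  # 0.9
```

Tracked over successive review meetings, even simple numbers like these make it easier to see whether the action items are moving the project in the right direction.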


Depending on the frequency you want, how about doing Session-Based Test Management?

Is a session of 2-3 hours too short? If so, you can have multiple sessions a day. I guess you can also adapt it to happen less often.

The document even mentions a so-called Thread-Based Test Management approach, which lets you use the same tools but hold meetings whenever you want them.
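Sessions in SBTM are usually tracked on a session sheet with a charter, a timebox, and notes. A minimal sketch of such a record, with hypothetical field names and an example charter (the 2-3 hour timebox follows the suggestion in the post above):

```python
from dataclasses import dataclass, field
from datetime import timedelta

# Hypothetical session sheet, loosely modeled on SBTM's charter + timebox.
@dataclass
class TestSession:
    charter: str                      # mission for the session
    timebox: timedelta                # e.g. 2-3 hours, per the post above
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)

session = TestSession(
    charter="Explore the checkout flow with invalid payment data",
    timebox=timedelta(hours=2),
)
session.notes.append("Declined card shows a generic error message")
session.bugs.append("No retry option after a declined payment")
```

A stack of sheets like this could then feed the review check-in discussed earlier, since each sheet records what was actually tested and what fell out of it.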