How do you "Reporting testing" as a junior tester? - Junior Tester Curriculum

Hi all,

We’re continuing work on the Junior testing curriculum based on the Job Profile we captured with the support of the testing community.

We’re now in the process of going through each task and analysing them to identify the core steps we take to achieve them. For this post we’re considering the task:

Reporting testing

We’ve run a series of community activities including social questions and our curriculum review sessions to identify the steps we need to take to successfully achieve this task and have listed them below.

  • Understand what information stakeholders need/want reported
  • Collate evidence around testing that has been completed
  • Document the testing carried out in a formal or informal manner, depending on the template
  • Triage and write up bugs with team members
  • Report what was learnt about the product during testing, and its impact on future testing activities, with stakeholders

What we’d like to know is: what do you think of these steps?
Have we missed anything?
Is there anything in this list that doesn’t make sense?


I have books of notes on reporting so I’ll keep it brief. You mention testing that has been completed, but how about reporting on what’s left to be tested, and what won’t or can’t be tested?

I’d go one further than “understand what information stakeholders NEED/WANT”. A format alone might not be useful; people don’t know what they don’t know, so they might say they want metrics simply because they don’t know how to ask for anything else. We should look to understand the information people need for decision making and then base our reporting on that…

  • “I need to know how good something really is in order to ship it” - Okay how about a narrative telling you what we tested and saw from that testing, or a demo?
  • “I want to see progress and know you’re doing something” - Okay maybe some test metrics to show the number of tests run today vs. yesterday.
  • “I want a vibe check” - Cool, how about an emoji or a RAG status to give you a high level view?

Hi both,

Thanks for the feedback. Here’s my thoughts on your comments:

My thinking was that “Report output of testing and next actions to stakeholders” would cover what you’re suggesting, so I’ve decided to make it more explicit and updated it to:

Report what was learnt about the product during testing, and its impact on future testing activities, with stakeholders

I’ve edited the original post to reflect the change.

You’re absolutely right @cakehurstryan, and I’ve updated that task to incorporate your suggestion:

Understand what information stakeholders need/want reported

Again, updated the original post with the change.

Thanks for the feedback and clarifications.


Don’t all the activities that follow “understand stakeholder needs” depend on it?
I’ve had ‘stakeholders’ that wanted to know one of the following:

  • “How do you feel about the quality of this product/feature?” (face to face, in one minute)
  • “Briefly present the most critical bugs that you think would block the release of this feature.”
  • “How many bugs have you found?”
  • “Did the automation (test cases) pass?”
  • “Have all the acceptance criteria been met?”
  • “Based on your quality assessment, is there anything you’d suggest fixing?”
  • “How is testing going? (How much do you have left? What have you found?)”

You don’t always report to the same kind of stakeholder: it can be a CTO, a dev lead, an architect, a PM/PO, or others, so the message and its delivery can vary.

To me, when reporting on testing you need:

  • To know the audience’s background, expectations, and view on testing;
  • To be able to filter through what they asked and give a reasonable answer based on evidence obtained through testing.

The more things you add to the conversation that THEY don’t want, the more they will zone out, get bored, misunderstand you, or simply ignore you.