Ask Me Anything: Test Reporting

Very good question @fullsnacktester :+1: - had similar thoughts on this.

2 Likes

What is your approach to test reporting? How do you break it down?

For those who are working with people used to test reports being the number of test cases passed and so on, what advice do you have for them?

2 Likes

Do you report on flaky tests?
How do you structure and deliver your reports to get better engagement from viewers?

2 Likes

The Power Hour Elizabeth mentioned: Power Hour - Exploratory Testing

Questions we didn’t get to

  • Do you have experience in having to keep around reports for specific releases that happen quite frequently? Any good practices for leaving results documented for posterity?
  • What about reporting on trends in test execution? Any approaches or tools you recommend for analytics?
  • Do you have any suggestions on how to also highlight what went well in the test report?
  • Do you find that regularly providing good test reports contributes to meaningful change in the Product Development process as a whole?
  • Is there a good accessibility testing training that you would recommend?
  • Have you used a test reporting tool like QA Touch? What do you think about it?
  • How do you deal with client-reported bugs? They usually log critical bugs for minor issues. Do you add them in a separate section of your report?
  • What type of graph format should be used to indicate issues?
  • What criteria should decide a test status: Pass/Fail/Conditional Go?
  • Should we be providing just metrics for our reports or should we look at providing information that answers key questions?
  • How can exploratory testers create their portfolios?
  • What do your reports look like? How do you organise within your reports?
  • What tools (or strategy?) do you prefer to use for test reporting? Or if no preference, how do you decide what is the best tool/strategy to use?
  • Should a test report be pushed to the version of the code it relates to (for that specific environment)?
  • How long does it take you to work on each test report?
1 Like

Yes, I do report flaky tests.

For flaky automated tests, our nightly pipeline makes it obvious when a test fails. I try to be strict: by standup the next day, a flaky test is either passing every time or deleted. To balance the importance of a feature against how much time we’re spending diagnosing it, we’ve also got some tests commented out, and others that rerun a few times before reporting a failure.
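As a concrete sketch of that last balance, assuming a Python suite using pytest with the pytest-rerunfailures plugin (the `flaky` marker is that plugin’s API; the test body is a made-up stand-in):

```python
import random

import pytest


# Requires the pytest-rerunfailures plugin: pip install pytest-rerunfailures
# Rerun this known-flaky test up to 3 times, pausing 2 seconds between
# attempts, before the pipeline reports it as a genuine failure.
@pytest.mark.flaky(reruns=3, reruns_delay=2)
def test_sometimes_flaky():
    # Stand-in for a timing-sensitive check that occasionally misfires.
    assert random.random() > 0.2
```

The same plugin also supports a blanket `pytest --reruns 3` flag, but marking individual tests keeps the “this one is known-flaky” signal visible in the code itself.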

For flaky behavior when testing without automation, structure your report like a cliffhanger murder mystery. You went to the page, you clicked this button, and you’ll never guess what happens next!

3 Likes

My approach: it depends. The examples below are what I do most often, but they’re not exhaustive.

For a standup, I want to include details that were particularly frustrating so I can find out if I need to keep hacking away, identify someone to help, or have someone tell me to give up.

When I find stuff while testing a user story, I’ll message the developer directly. Usually it’s something simple (a missing constant, the wrong branch) that’s quickly resolved. Bigger questions that need more input move to the team channel or a video call. If you find something small and weird but unrelated, give your team the gift of starting your message with [not urgent] and a description of your current context, since they’re probably context-switching. (Bonus points if you get them to pair with you or do nothing else during testing so they’re not switching contexts.)

For a closing comment on a user story, I want to describe what I did well enough for me or someone else on my team to replicate it. Often I’ll start writing the report mid-way through the testing to discover what I forgot to test.

Number of test cases people: I haven’t worked in a context like this. I’m curious if the people asking think that all test cases are the same complexity, or why they’re asking. Dig into what they’re worried about to figure out what needs you should be addressing.

2 Likes

If you’ve delivered incrementally, hopefully you’ve got incremental reports too, with details about how you tested, what you didn’t test, what you noticed while you were doing it. Your past self can help guide you at crunch time.

If you don’t have incremental reports, take Nancy Kelln’s approach.

2 Likes

Yes, this is a great idea. Anything that makes it clear what version/point in time/circumstances you were testing under will help anyone looking at a document in the future.
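One hedged sketch of stamping that context into a report, in Python (the header format and field names are my own invention for illustration, and it assumes the report is generated from inside a git working copy):

```python
import subprocess
from datetime import datetime, timezone


def report_header(environment: str) -> str:
    """Pin a test report to a commit, a timestamp, and an environment."""
    # Ask git for the short SHA of the commit under test.
    sha = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"Tested: {stamp}\nCommit: {sha}\nEnvironment: {environment}\n"


print(report_header("staging"))
```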

2 Likes

It depends on your audience: what do they care about? What will make them pay attention?

Unfortunately, my go-to reporting tools, mind maps and annotated screenshots, don’t pair well together.

2 Likes

I would love for this to be true, but it has not been my experience. You can have a really good report that changes nothing if nobody’s listening, or the change that’s needed is outside the influence of your audience.

2 Likes

What my reports look like: see this other response.

How do you organise: Assume the receiver is only half-paying attention. Put the most important information first so they want to pay more attention.

1 Like

I haven’t used QA Touch or anything like it. I understand the desire to cover all requirements, but I disagree with the premise that all requirements can be stated explicitly. Recommended reading:

1 Like

I’m curious why you need to be analyzing trends. Tests should pass. Watch this.

1 Like

No, I don’t have a good answer for this.

I learned a lot about accessibility by reading the whole iOS Human Interface Guidelines when I was on an iPhone project and following the Hack Design course over a year, but I suspect there are more straightforward ways to get this information now.

2 Likes

It depends, but after attending many corporate indoctrinations, I can tell you that graphs that go up and to the right are the most dramatic.

2 Likes

I don’t think these are mutually exclusive. If metrics answer key questions, provide them. The metric I report most often is how many hours I have available for testing activities.

1 Like

My best answer here is: no. I don’t think most test reports (or software) are needed for posterity. Nobody’s reading them; don’t save them.

Our pipeline saves logs of recent runs. This other question suggested another approach.
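For what “recent” might look like in practice, a minimal sketch (the directory layout and retention count are assumptions, not our actual pipeline):

```python
from pathlib import Path

KEEP_LAST = 10  # retention count, an assumption to tune for your context

# Oldest first, so the slice below keeps only the newest KEEP_LAST runs.
reports = sorted(Path("test-reports").glob("run-*.log"),
                 key=lambda p: p.stat().st_mtime)
for stale in reports[:-KEEP_LAST]:
    stale.unlink()  # drop everything older than the last KEEP_LAST runs
```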

@simon_tomes and I wrote about test reporting and audits in a regular environment here.

1 Like

“Pass” and “fail” sound like things that happen to individual tests, whereas “conditional go” sounds like a software release decision. What’s the worst that could happen if you released this software right now?

2 Likes

If you’re including this information, you’re on the right track. It’s much easier to deliver good news than bad news. Like any test report, providing an example and asking for feedback from your audience can guide you.

1 Like

It depends. For a standup, I might prepare for a few minutes to deliver 30 seconds of reporting. At crunch time for a project, reporting tends to take more of my time than testing.

2 Likes