Tomorrow we'll be joined by the awesome @ezagroba for an Ask Me Anything session all about test reporting.
I'll make sure to add any questions that we don't get to and any resources mentioned during the session to this thread.
If you miss the live session, a recording will be available on the Ministry of Testing website for all Club level members once we've edited it and added captions.
Have you got any more questions you'd like to ask after? Add them here.
Do you have experience keeping reports around for specific releases that happen quite frequently? Any good practices for documenting results for posterity?
What about reporting on trends in test execution? Any approaches or tools you'd recommend for analytics?
Do you have any suggestions on how to also highlight what went well in the test report?
Do you find that regularly providing good test reports contributes to meaningful change in the Product Development process as a whole?
Is there a good accessibility testing training that you would recommend?
Have you used a test reporting tool like QA Touch? What do you think of it?
How do you deal with client-reported bugs? They often log minor issues as critical. Do you add them in a separate section of your report?
What type of graph format should we use to indicate issues?
What criteria should be used to decide on a test status: Pass/Fail/Conditional Go?
Should we be providing just metrics in our reports, or should we look at providing information that answers key questions?
How can exploratory testers create their portfolios?
What do your reports look like? How do you organise within your reports?
What tools (or strategy?) do you prefer to use for test reporting? Or if no preference, how do you decide what is the best tool/strategy to use?
Should a test report be pushed with the version of the code it relates to (for that specific environment)?
How long does it take you to work on each test report?
For flaky automated tests, our nightly pipeline makes it obvious when a test fails. I try to be strict about getting the test passing every time, or deleting it, before standup the next day. To balance the importance of the feature against how much time we're spending diagnosing it, we've also got some tests commented out or running multiple times before failing.
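As a minimal sketch of the "running multiple times before failing" part, assuming pytest with the pytest-rerunfailures plugin installed (the test names and ticket ID are hypothetical placeholders):

```python
import pytest

# Rerun up to twice, 30 seconds apart, before the nightly run reports a failure.
@pytest.mark.flaky(reruns=2, reruns_delay=30)
def test_checkout_flow():
    ...  # the real end-to-end steps would go here

# Parked with a visible reason and ticket instead of silently commented out.
@pytest.mark.skip(reason="flaky: intermittent timeout, tracked in TEAM-123")
def test_inventory_sync():
    ...
```

A skip marker like this keeps the parked test visible in every run's report, which is easier to audit later than a commented-out block.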
For flaky behavior when testing without automation, structure your report like a cliffhanger murder mystery. You went to the page, you clicked this button, and you'll never guess what happens next!
My approach: it depends. The examples below are what I do most often, but they're not exhaustive.
For a standup, I want to include details that were particularly frustrating so I can find out if I need to keep hacking away, identify someone to help, or have someone tell me to give up.
When I find stuff while testing a user story, I'll chat with the developer directly. Usually it's something simple (a missing constant, the wrong branch) that's quickly resolved. Bigger questions that need more input move to the team channel or a video call. If you find something small and weird but unrelated, give your team the gift of starting your message with [not urgent] and a description of your current context, since they're probably context-switching. (Bonus points if you get them to pair or do nothing during testing so they're not switching contexts.)
For a closing comment on a user story, I want to describe what I did well enough for me or someone else on my team to replicate it. Often I'll start writing the report midway through the testing to discover what I forgot to test.
To the "number of test cases" people: I haven't worked in a context like this. I'm curious whether the people asking think that all test cases are the same complexity, or why they're asking. Dig into what they're worried about to figure out what needs you should be addressing.
If you've delivered incrementally, hopefully you've got incremental reports too, with details about how you tested, what you didn't test, and what you noticed while you were doing it. Your past self can help guide you at crunch time.
Yes, this is a great idea. Anything that makes it clear what version/point in time/circumstances you were testing under will help anyone looking at a document in the future.
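One lightweight way to do that, as a sketch: stamp each report with the commit it was generated against. This assumes the tests run inside a git checkout; the report path and environment name are hypothetical examples.

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

# Ask git for the exact code version the report describes.
commit = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

header = (
    f"Test report for commit {commit}\n"
    f"Generated {datetime.now(timezone.utc).isoformat()} (UTC)\n"
    f"Environment: staging\n"  # record whatever environment you actually tested against
)

# File the report under the commit so future readers can match report to code.
Path("reports").mkdir(exist_ok=True)
Path(f"reports/test-report-{commit}.md").write_text(header)
```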
I would love for this to be true, but it has not been my experience. You can have a really good report that changes nothing if nobody's listening, or the change that's needed is outside the influence of your audience.
I haven't used QA Touch or anything like it. I understand the desire to cover all requirements, but I disagree with the premise that all requirements can be stated explicitly. Recommended reading:
I learned a lot about accessibility by reading the whole iOS Human Interface Guidelines when I was on an iPhone project and following the Hack Design course over a year, but I suspect there are more straightforward ways to get this information now.
I don't think these are mutually exclusive. If metrics answer key questions, provide them. The metric I report most often is how many hours I have available for testing activities.
"Pass" and "fail" sound like things that happen to individual tests, whereas "conditional go" sounds like a software release decision. What's the worst that could happen if you released this software right now?