How do you decide what testing evidence to collect?

As testers, we wear many hats, such as investigator, explorer, and documenter. Something that links all these together is evidence. Collecting the right kind of testing evidence in the right context helps support our findings and guide better decisions.

There’s no one-size-fits-all rule; it’s about context and professional judgement.

Here’s an activity to help you practise those skills and potentially help others in the MoTaverse by sharing your thoughts:

Task: Map the evidence to the scenario

Below are three testing scenarios. For each one, decide which type(s) of evidence, e.g. test environment, screenshots, videos, logs, or a combination, would be most appropriate to collect.

Scenario 1: UI Glitch in a Web Application:
While testing a web-based dashboard, you notice that a button label is misaligned, making it partially unreadable. The issue occurs only when the browser is resized to a specific width.
What evidence would help you capture this clearly?

Scenario 2: Intermittent API Failure:
You are testing an application that retrieves data from an API. Occasionally, the API returns a 500 Internal Server Error, but the issue is not always reproducible. The frontend displays a generic error message with no details.

What would you collect to support further investigation?

Scenario 3: Slow Performance in a Web App:
A web application takes an unusually long time to load a specific screen when connected to a slow network. The issue occurs inconsistently, and you suspect it might be related to backend response time.

What kind of evidence would help you confirm this?

Share your answers by replying to this post:

  • List the type(s) of evidence you would collect for each scenario
  • Briefly explain your reasoning

I look forward to hearing your thoughts and learning from you


Scenario 1:
A UI glitch means Command + Shift + 4 or 5 (on macOS). In this case, since the issue only happens on resize, I would record a screen video to show where the glitch appears, how it looks before and after the change, and what I'm doing to trigger it.

Scenario 2:
I would find the error in the browser and check a logging platform if one is available. The logs would give a backtrace showing what triggered the error, along with other specifics. If you don't have access, add the network request path, payload, response, and any logs to the ticket, plus maybe a screenshot to show what the user sees if they were unaware.

With more experience, checking the controller can help you understand what happened, why, and how frequently it may occur again. At worst, you dig in and find it's several files deep and unclear to you; at best, you realise it's a simple nil value or some other easy fix, and you know the pending bug ticket (or adjusted MR) will be quick.
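Turning raw logs into ticket-ready evidence can be as simple as grabbing the lines around each failure. Here's a minimal Python sketch; the one-line-per-entry log format and the `" 500 "` marker are assumptions, so adjust the predicate for your own logging platform:

```python
# Sketch: pull context lines around server-error entries out of a plain-text
# log, so the excerpt can be pasted straight into the bug ticket.
# Assumption: one log entry per line, with " 500 " appearing in error lines.

def error_excerpts(lines, is_error=lambda line: " 500 " in line, context=3):
    """Return a list of excerpts, each holding an error line plus
    `context` lines before and after it."""
    excerpts = []
    for i, line in enumerate(lines):
        if is_error(line):
            start = max(0, i - context)
            end = min(len(lines), i + context + 1)
            excerpts.append(lines[start:end])
    return excerpts


log = [
    "10:00:01 GET /api/items 200",
    "10:00:02 POST /api/orders 500 NoMethodError in OrdersController",
    "10:00:03 GET /api/items 200",
]
for excerpt in error_excerpts(log, context=1):
    print("\n".join(excerpt))
```

The surrounding lines often carry the request path and payload the reply above mentions, which saves the developer a round trip.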

Scenario 3:
Similar to Scenario 2: check the network tab, look at the response time, and try to reproduce and record the values. Is it slow because of one lengthy request, or is a chain of requests executing? I would compare a slow result to a fast one (if possible), including payload information.

Then I'd go to the logs and see whether I can trace the other links in the network chain. Sometimes it isn't just one backend but several, so check whether your slow response is actually a slow response from service x, y, or z. I'd also look for other times the logs show similarly high response times. If there's a pattern, check whether it lines up with a particular time of day, week, or month to surface anything that might secretly be throttling your system. Adding those details to the ticket means developers don't have to repeat the work and can start investigating straight away. Many a cron job or monthly accounting task has gone rogue and killed performance downstream.
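That time-of-day pattern hunt can be sketched in a few lines of Python. The ISO-8601 timestamp format and the 2-second "slow" threshold here are invented for illustration:

```python
# Sketch: bucket slow responses by hour of day to surface patterns such as a
# nightly cron job or a monthly batch dragging performance down.
# Assumptions: ISO-8601 timestamps and a 2000 ms "slow" threshold.
from collections import Counter
from datetime import datetime

def slow_request_hours(samples, threshold_ms=2000):
    """samples: iterable of (iso_timestamp, response_ms) pairs.
    Returns a Counter mapping hour of day -> count of slow responses."""
    hours = Counter()
    for ts, ms in samples:
        if ms >= threshold_ms:
            hours[datetime.fromisoformat(ts).hour] += 1
    return hours

samples = [
    ("2024-05-01T02:05:00", 4100),
    ("2024-05-01T02:10:00", 3800),
    ("2024-05-01T11:00:00", 120),
]
print(slow_request_hours(samples))  # hour 2 stands out: Counter({2: 2})
```

Grouping by day of month instead (`.day`) would catch the monthly accounting-task case the same way.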


Good Day, @parwalrahul

Collecting Testing Evidence: A Guide to Professional Judgment

Testing is as much art as science. As someone who’s been in the quality assurance trenches for years, I’ve learned that evidence collection isn’t just about following a checklist—it’s about understanding what will truly tell the story of your findings.

Let me share my approach to the scenarios you’ve described:

Scenario 1: UI Glitch in a Web Application

Evidence I’d collect:

  • Screenshots at various browser widths, highlighting the specific width where the issue occurs
  • Short screen recording showing the button behavior as the browser resizes
  • Browser details including version and rendering engine
  • Device details (OS, resolution settings)

My reasoning: Visual bugs need visual evidence. The screenshots provide a static reference point, while the recording demonstrates the exact conditions under which the issue appears. The browser and device information helps establish if this is a browser-specific rendering issue or something deeper in the application’s responsive design implementation.
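Capturing that screenshot sweep by hand is tedious, so here's a minimal sketch of automating it. It assumes Playwright is installed (`pip install playwright` plus `playwright install chromium`), and the URL and 1024 px centre width are placeholders for your own app:

```python
# Sketch: screenshot a sweep of viewport widths around the width where the
# glitch appears. URL, centre width, and step size are placeholder assumptions.

def breakpoint_widths(centre, step=20, count=5):
    """Widths centred on the width where the glitch appears."""
    half = count // 2
    return [centre + step * (i - half) for i in range(count)]

def capture_sweep(url, centre_width, height=800):
    from playwright.sync_api import sync_playwright  # assumed installed
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for width in breakpoint_widths(centre_width):
            page.set_viewport_size({"width": width, "height": height})
            page.goto(url)
            page.screenshot(path=f"dashboard_{width}px.png")
        browser.close()

# usage: capture_sweep("https://example.test/dashboard", centre_width=1024)
```

The filenames record the exact width per screenshot, which pairs nicely with the browser and device details in the list above.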

Scenario 2: Intermittent API Failure

Evidence I’d collect:

  • Network logs from the browser’s developer tools during successful and failed requests
  • Backend server logs covering the time periods of failures
  • HAR file captures of the sessions where errors occur
  • Timestamps correlating frontend error appearances with backend activity
  • Test environment configuration details

My reasoning: Intermittent issues are notoriously difficult to nail down. By collecting comprehensive logs from both client and server sides, you create a timeline that might reveal patterns. The HAR files provide a complete picture of the HTTP activity, while environment details help rule out infrastructure-specific problems. This multi-layered approach is crucial since the generic error message isn’t giving us much to work with.
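Since HAR captures are plain JSON, pulling just the failed requests out for the ticket is straightforward. This sketch assumes the standard HAR 1.2 layout (`log.entries[].request` / `response`):

```python
# Sketch: extract the failed requests from a HAR capture so they can go
# straight onto the ticket. Assumes HAR 1.2 structure (log.entries[]).
import json

def failed_entries(har_text, status_min=500):
    """Return method/url/status/start-time for entries at or above status_min."""
    har = json.loads(har_text)
    failures = []
    for entry in har["log"]["entries"]:
        status = entry["response"]["status"]
        if status >= status_min:
            failures.append({
                "method": entry["request"]["method"],
                "url": entry["request"]["url"],
                "status": status,
                "started": entry.get("startedDateTime"),
            })
    return failures
```

Export the HAR from the browser's Network tab (the exact menu wording varies by browser), then something like `failed_entries(open("session.har").read())` leaves you with only the 5xx entries to correlate against the server logs.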

Scenario 3: Slow Performance in a Web App

Evidence I’d collect:

  • Performance timeline recordings using browser dev tools
  • Network throttling test results at various connection speeds
  • Waterfall chart of asset loading times
  • Backend response time metrics from monitoring tools
  • Database query execution times (if accessible)
  • Memory usage patterns during the slow loads

My reasoning: Performance issues require quantitative evidence. The waterfall chart will show exactly which resources are causing bottlenecks, while the throttled tests help establish if the problem is magnified or only appears under specific network conditions. Backend metrics help determine if the issue is in the frontend rendering or server processing. This holistic view combines both client and server perspectives to pinpoint where optimization efforts should focus.
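To turn those repeated measurements into numbers a ticket can carry, a simple nearest-rank percentile summary works; the sample load times below are invented for illustration:

```python
# Sketch: summarise repeated load-time measurements so the ticket carries
# numbers rather than impressions. Percentiles separate "slow once" from
# "slow in the tail". The sample data below is invented.

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list of numbers (p in 0..100)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

load_times_ms = [310, 295, 3200, 330, 2900, 305, 315, 320, 300, 3100]
print("median:", percentile(load_times_ms, 50))  # median: 315
print("p90:", percentile(load_times_ms, 90))     # p90: 3100
```

A median of ~315 ms with a p90 over 3 s is strong evidence of an intermittent backend stall rather than uniformly slow rendering, which is exactly the distinction the reasoning above is after.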

What evidence collection strategies have worked best for you in similar situations? I’d love to compare notes!

Thanks,
Ramanan
Happy Testing :rocket:


Hello! I use a combo of screenshots/video/logs, and would try to replicate it in our test environment.

Scenario 1: UI Glitch in a Web Application:

I would also press F12 in the browser and use the "Toggle device toolbar" to get the exact widths; Google Chrome also has a library of device presets showing how the screen looks on an iPhone, Samsung, etc.

Screenshots on a physical mobile/tablet, noting down the device, operating system, etc. for coverage.

Visuals work best in this situation; screenshots/video demonstrate exactly which screen dimensions trigger the glitch.

Scenario 2: Intermittent API Failure

Assuming it's in a browser: record a video with the dev tools (F12) open, check the network traffic on the Fetch/XHR tab, and click into the requests for any error messages sent and received by the browser. The Console tab itself may also show error messages.

If I don't have access to the backend server logs (which I don't in my current role), I'd report my browser findings to our developers.

Recording this investigation is helpful, especially when replicating it and showing it to a developer. I'll also sometimes get on a Teams call with a dev, share my screen, and walk them through it to help them understand.

Scenario 3: Slow Performance in a Web App:

Record the screen with the dev tools (F12) open, checking the network calls on the Fetch/XHR tab, and click into the requests for any error messages.
Recording the slow load time also helps, as you can add it to the ticket without wasting other people's time on a call.
