As testers, we wear many hats, such as investigator, explorer, and documenter. Something that links all these together is evidence. Collecting the right kind of testing evidence in the right context helps support our findings and guide better decisions.
There’s no one-size-fits-all rule; it’s about context and professional judgement.
Here’s an activity to help you practise those skills and potentially help others in the MoTaverse by sharing your thoughts:
Task: Map the evidence to the scenario
Below are three testing scenarios. For each one, decide which type(s) of evidence (e.g. test environment details, screenshots, videos, logs, or a combination) would be most appropriate to collect.
Scenario 1: UI Glitch in a Web Application
While testing a web-based dashboard, you notice that a button label is misaligned, making it partially unreadable. The issue occurs only when the browser is resized to a specific width.
What evidence would help you capture this clearly?
Scenario 2: Intermittent API Failure
You are testing an application that retrieves data from an API. Occasionally, the API returns a 500 Internal Server Error, but the issue is not always reproducible. The frontend displays a generic error message with no details.
What would you collect to support further investigation?
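For intermittent failures like this, one option worth considering is capturing a structured record of every call: timestamp, status, timing, and any correlation ID the server returns, plus the raw body when a request fails. Here is a minimal sketch in Python; the function name, the `X-Request-Id` header, and the stubbed responses are illustrative assumptions, not details from the scenario.

```python
import json
import time
from datetime import datetime, timezone

def record_api_evidence(call_api, log):
    """Call the API once and append a structured evidence record to `log`.

    `call_api` is any zero-argument function returning
    (status_code, headers, body); in real testing it would wrap your
    HTTP client of choice.
    """
    started = time.perf_counter()
    status, headers, body = call_api()
    elapsed_ms = (time.perf_counter() - started) * 1000
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "status": status,
        "elapsed_ms": round(elapsed_ms, 1),
        # A correlation ID (if the API sends one) lets developers find
        # the matching server-side log entry.
        "request_id": headers.get("X-Request-Id"),
    }
    if status >= 500:
        # Keep the raw body only for failures: that is exactly the
        # detail the generic frontend error message hides.
        record["body"] = body
    log.append(record)
    return record

# Usage with a stubbed API that fails intermittently:
responses = iter([(200, {"X-Request-Id": "a1"}, "ok"),
                  (500, {"X-Request-Id": "a2"}, '{"error":"db timeout"}')])
log = []
for _ in range(2):
    record_api_evidence(lambda: next(responses), log)
print(json.dumps(log, indent=2))
```

A log like this, alongside a HAR export from the browser's Network tab, gives developers something reproducible to investigate even when the failure itself is not.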
Scenario 3: Slow Performance in a Web App
A web application takes an unusually long time to load a specific screen when connected to a slow network. The issue occurs inconsistently, and you suspect it might be related to backend response time.
What kind of evidence would help you confirm this?
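Because the slowness is inconsistent, a single timing proves little; a spread of repeated measurements is stronger evidence. One way to present such timings, assuming you have captured them from the browser's Network tab or an HTTP client, is a small summary like this sketch (the function name and the sample values are hypothetical):

```python
import statistics

def summarise_load_times(samples_ms):
    """Summarise repeated load-time measurements for one screen (in ms).

    Reporting the spread (min/median/max) shows both the normal case
    and the outlier, which is far more persuasive than "it felt slow".
    """
    ordered = sorted(samples_ms)
    return {
        "runs": len(ordered),
        "min_ms": ordered[0],
        "median_ms": statistics.median(ordered),
        "max_ms": ordered[-1],
    }

# Hypothetical timings: three normal loads and one outlier on a slow network.
print(summarise_load_times([420, 460, 7800, 430]))
# prints {'runs': 4, 'min_ms': 420, 'median_ms': 445.0, 'max_ms': 7800}
```

Pairing a summary like this with backend response times (for example from server logs or a Server-Timing header, if available) would help separate network delay from backend delay.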
Share your answers by replying to this post:
- List the type(s) of evidence you would collect for each scenario
- Briefly explain your reasoning
I look forward to hearing your thoughts and learning from you!