Does anyone on here have a requirement in their testing procedures to demonstrate the testing that has been completed, for example in the form of screenshots, video captures, etc.?
I’m trying to get an understanding of how prevalent this activity may be, and in what domains.
Or, for example, do you “demonstrate” by just making testing notes and then perhaps doing a show and tell for the team?
Hello, we do not have a fixed procedure for that, but usually I add a comment to the ticket with details of what has been done and confirmation that it matches the expectation, i.e. that the bug did not appear again. In some cases a screenshot or video will be added, but I don’t do that very often; I use it where it makes sense, of course. Sometimes a short demo of the feature/fixed bug is done as well, but that depends on the feature or bug…
I once worked in a regulated environment (a Med Tech company) where we’d attach our exploratory testing notes to the appropriate Jira. The contractual requirement was to demonstrate that we’d run and passed a load of test cases, and the “extra” exploratory testing notes were a bonus set of evidence.
Yet I think there’s huge value in debriefing our exploratory testing notes in a way that works for the appropriate audience.
Some developers were pleasantly surprised when I’d rock up at their desks with my digital notes, ready to have a conversation about what I’d discovered.
I think the simplest way of sharing how I’ve done it in the past is via this video. Sorry I’ve not taken the time to describe it here. I think the video demonstration of an actual exploratory testing session does a better job of that. Let me know if you have any thoughts or questions.
I’m always curious to see how folks capture notes during their exploratory testing sessions. Very much welcome more replies about how folks demonstrate their testing efforts.
Hello @monsieurfrench, in my experience showcasing testing completion often involves tools like Jira/TestLink/Asana, where I raise bugs with detailed info including the description, expected and actual results, and steps to reproduce, and I ALWAYS attach videos/screenshots to provide a visual representation of the bug and make it easier for developers to understand the problem. It helps to convey complex scenarios and UI-related issues more effectively.
@simon_tomes Hello Hello!!! Regarding capturing notes during exploratory testing, it can be done through a test management tool, a dedicated document, or a note-taking app such as Squash TM or MindMeister.
The focus is on documenting observations, context gained, steps taken, and any unexpected behaviour encountered, including browser versions, OS, and devices. Since exploratory testing is dynamic and adaptive, use timestamps to record when specific observations/issues occurred.
I have detailed it as if it were a manual test case. In my last gig we used Azure DevOps, which has a decent means of authoring test cases. I could write up exploratory activities as I performed them and attach them to the story in the sprint or the card on the kanban board.
They could also then be added as backlog items for automation stories.
In regulated environments where you need “objective evidence” of your testing, yes, things like screenshots or video captures are often necessary.
In unregulated environments, I’d say only collect what you get value from. Modern automated tools like Cypress and Playwright (I know for the former, I think for the latter since I haven’t used it myself) can automatically capture screenshots and/or video, which can be quite helpful for debugging/investigating failures. But if you’re doing extra work to demonstrate what was tested and no one’s actually looking at it or taking action on it? I’d suggest spending that time doing more actual testing and communicating instead.
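To illustrate (a minimal sketch, not anyone’s actual project config, and option names can shift between versions), Playwright lets you turn this on in the test config so the evidence collects itself:

```ts
// playwright.config.ts - a minimal sketch, not a complete project config.
// 'only-on-failure' / 'retain-on-failure' keep artifacts only for failed tests,
// so you get evidence for investigation without hoarding passing-run footage.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // attach a screenshot when a test fails
    video: 'retain-on-failure',    // record video, keep it only if the test fails
    trace: 'retain-on-failure',    // trace includes actions, network and console
  },
});
```

Cypress has equivalent settings (automatic screenshots on failure during `cypress run`, plus a `video` option), though the defaults have changed between versions, so check the docs for the one you’re on.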
We should always write down the evidence:
We tell about the product and its status: what we have learned, what is working, what is not working, etc. We should explain how we tested it: the environment, configurations, tools we used, etc. And do not forget to say how good our testing was.
@juanalvarezarquillos I’m going to challenge your “always” assertion. Can you think of some examples of when “writing down the evidence” could actually be unhelpful or counterproductive? I can think of some, but I’ll wait a little while and see what you come up with before posting them.
I’m not saying we shouldn’t write notes. The question was more about taking step-by-step screenshots, for example, and specifically trying to gauge how many people are required to do this in their place of work. This could be because the process dictates it for internal audit, or because you have to demonstrate to others the specific test scenarios that were executed and the outcomes.
I’ll have a look at TestBuddy to see if there’s a trial version, etc.
The video was good. I do find from experience that a lot of people seem to struggle with the concept of session-based exploratory testing (separate topic), so this might get bookmarked to pass around.
This is my feeling completely, and the point of the post was more to try and gauge how many folks are spending time collecting evidence (either because they need to, or because they think they need to because it’s “good practice”). To put it simply: if you have a 2-hour testing session, would you rather complete a few well-documented tests, or do more tests and have more information via notes?
We have in our exploratory standards that test documentation is provided. We’ve used TestBuddy and now we use the Xray Exploratory Testing app to achieve this; there’s also @dacoaster’s Yattie. Either app allowed us to share what testing had been done and attach it to the relevant story.
Sometimes we’ve also done demos if needed.
It’s been a few days so I’ll go ahead and answer my own question:
Can you think of some examples of when “writing down the evidence” could actually be unhelpful or counterproductive?
Some that I can think of:
Development has already been made aware of a particular issue and has confirmed that they’re aware of it. Continuing to report that same issue would likely be counterproductive and erode your credibility, so if you observe it, it might be better to move on immediately rather than taking time to write anything down. This is actually based on a specific past experience when I was a developer–another part of the team had hired a contractor who ended up not being qualified and was let go pretty quickly, but in the interim I got sick and tired of them filing bug reports that clearly all had the same root cause that we had repeatedly asked them to stop filing reports about because we already knew about it. Don’t be that person!
Testing early-stage, in-progress, or prototype work. It might be cheaper to just ask the developer “Hey, do you know about this? Is this what you expect to happen?” as more of an over-the-shoulder check. This relates to what Cem Kaner called “sympathetic testing”–basically being less rigorous or formal when you know what you’re testing is acknowledged to not really be fully “done” or “ready”.
Maybe some things we decide to self-filter rather than reporting, because of time pressure or prioritization–we know or strongly suspect that there are “bigger fish to fry” so we avoid derailing our own focus to document a control being one pixel off or something similarly minor, and continue on looking for the bigger problems.
@monsieurfrench YATTIE is an open source tool designed with these sorts of situations in mind. It’s an installable app, as it’s intended to test and report on more than just web apps, but we are also taking a page out of TestBuddy’s book and launching a wholly online version. If that is something that appeals to you, let me know and I’ll get you in on the free beta.
Generally I think that reporting on testing that is done is one of the weakest points of most tools currently out there. It’s really really hard to briefly but meaningfully capture the extent of work that has gone into an exploratory testing session or even testing more generally - manual testing, automated testing, etc.
So that’s something that is on my mind a lot and that we talk about a lot in our open source community. I’d like to find a better way to do it that doesn’t make the tester’s life harder or more tedious.
Since everyone is mentioning apps… the desktop app that I am working on (for over a year at this point) allows you to share (IMHO) the most comprehensive traceability when you do exploratory testing (a rough do-it-yourself sketch of the browser-side captures follows the list):
Video of your screen: to help reproduce the bug
Browser network activity: to investigate bottlenecks
Browser developer console: JS errors
Any other app running in the background you want to capture (a local database instance for example): more errors, perhaps from the OS/machine/services
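For anyone who wants to approximate the browser-side items above without a dedicated app, here’s a hedged Playwright sketch. It’s illustrative only, not how the app I’m building works; the URL and the slow-request threshold are placeholders, and the last item (background apps/OS services) would need something outside the browser, e.g. tailing service logs alongside the session.

```ts
// Illustrative Playwright sketch - NOT the desktop app described above.
// Captures video, console/JS errors and slow network requests for a session.
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch({ headless: false });
  const context = await browser.newContext({
    recordVideo: { dir: 'session-videos/' }, // screen recording of the session
  });
  const page = await context.newPage();

  // Browser developer console: JS errors and other console output.
  page.on('pageerror', (err) => console.log('[page error]', err.message));
  page.on('console', (msg) => console.log(`[console ${msg.type()}]`, msg.text()));

  // Browser network activity: flag responses that look like bottlenecks.
  page.on('requestfinished', (request) => {
    const timing = request.timing();
    const durationMs = timing.responseEnd - timing.requestStart;
    if (durationMs > 1000) { // placeholder threshold
      console.log(`[slow request] ${request.url()} took ~${Math.round(durationMs)} ms`);
    }
  });

  await page.goto('https://example.com'); // placeholder for the app under test
  // ... drive your exploratory session here ...

  await context.close(); // finalizes the video file
  await browser.close();
})();
```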
I have spent so many years dealing with demonstrating that a bug is “real”. As a developer I have experienced shoddy bug reports, as a technical lead I have fought with a lot of people to ensure there was a “procedure”.
I think the final form of a bug report needs to look like this:
Evidence (video, logs, screenshots, detailed repro steps, machine specs, operating system, software version(s) of the affected platform etc)
A little blurb about what you were expecting to happen: because sometimes (not often) it’s not a bug, just bad UX → still needs to be addressed.
Assign a priority!!
Compile all of this into a ticket (JIRA, or whatever floats yer boat) and tag a team member. This ensures 100% that someone is going to take a look. Be gentle; sometimes you tag the wrong person, so just ask them to tag whoever they think is the best person.