Transparency and visibility around your work as a tester are very important. Without them it is very hard to create value at all.
Exploratory testing done right can generate a lot of value in the form of quality intelligence, which in turn is used to make decisions. Without transparency and visibility, that quality information will never feed into those decisions, such as fixing a bug or approving a release.
I gave some examples of how to provide transparency and visibility in an article I wrote:
I like to start at the end and work backwards. As in: what would a good exploratory testing session look like at the end, after you've debriefed with the right people about your discoveries?
Then I work backwards from that to what I'd typically capture.
My go-to for that is @cakehurstryan's article on debriefing.
Thanks a lot! My answer to all the questions you asked is yes. I have been doing manual exploratory testing, and I believe we have not had the right tools to capture and follow up on the results.
It's much easier to talk about manual testing results. Automation results are my life, and also my nightmare: automated tests are always around 10% flaky, and that's a huge distraction. I almost envy your focus on manual testing.
I'm biased, but I loved Elisabeth Hendrickson's book Explore It. I'm not saying it's the best resource out there, and there are many good online resources which have the advantage of being two-way channels, but that book really levelled up all of my manual testing work. (I'm a coder, and I'll never be as good at manual testing as you will someday be.)
My approach to capturing and following up on exploration is usually to keep it simple.
Make sure you capture what you've explored and seen
Make sure you share it with someone (like @simon_tomes shared)
As for which tool to use for test notes, that's personal preference. GSheets, GDocs, mind maps, and video recordings all work (just make sure you edit them so they make sense after the testing). It's also helpful to have a format you can use with your test management tools (luckily, tools like Jira / XRay / Zephyr allow you to attach most things).
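To make that concrete, here's a minimal session-note sketch along those lines. This is just one possible format, not a standard; the charter wording, headings, and ticket IDs are all placeholders:

```
Charter: Explore <feature / area> with <tools / data> to discover <risks>
Tester / date / duration: <name>, <date>, ~60 min

Covered:
- Paths, data, and environments I actually exercised

Observations:
- What I saw, with screenshots / recordings attached

Bugs raised: <ticket IDs, e.g. PROJ-123>
Questions / follow-ups:
- Things to raise at the debrief

Debrief with: <product owner / dev>
```

Whatever tool you pick, keeping the same headings each session makes the notes easy to skim at a debrief and easy to attach to a Jira ticket or test management tool afterwards.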
For following up on results, I'd make sure to use the tools the rest of the project uses.
Capture things in Jira (if they use that) to provide transparency.
Use a test management tool (XRay / Zephyr) to create reports on coverage.
Create a column in the workflow that says "debrief / demo" and use it like devs would a PR (Pull Request) to highlight that you want to share testing outcomes.
What tools and techniques you use will be tailored to how your team works.
Agreed with Callum: make sure your test-execution work is also reflected as a task ticket on sprint boards; that will raise visibility and transparency. Separate release testing from feature testing, and separate both from work on test environments. Doing "just that one little thing" will make people (and yourself) take your value more seriously.
I also suggest asking the people you will show reports and notes to what they want to see. This might be different things for different people.
One approach to getting an overview is this:
I demonstrate the value of exploratory testing by what I find out and the discussions this then triggers. Bugs are just a subset of this. Giving people certainty about what was or will be tested is another.
Sometimes, by discussing what to test, we find out that something important is missing from the specification or needs to be clarified.
What the picture in this article shows is the value that (exploratory) testing gives to others.