Power Hour - Exploratory Testing

Elizabeth, if I remember correctly, you shared in a talk at Agile Testing Days 2018 that you received feedback on test charters you wrote.
How much context is needed to provide good feedback on charters?
What do you suggest if there is no expert on exploratory testing in your organization that could provide expert feedback?

There are lots of different ways to take testing notes, depending on the tool you use (pen and paper, text editor, screenshots, …) and the content you actually note down (results matching your expectations, only things you see as potential issues, …).

  • Do you have a default approach for taking notes? What does that look like?
  • If you vary your approach for taking notes: What are the influencing factors that make you decide on the approach for a session?

What tools do you find are great for helping you with exploratory testing?


How does a new tester go about learning exploratory software testing?


How can the testing community get better at explaining the value of exploratory testing?


Have you ever, or do you know of anyone who has, conducted exploratory testing in a regulated environment (banking, automotive, medical, etc)? What tips would you have for those who work in such sectors?

If a tool could help with structuring your exploratory testing efforts and making them successful, what would its characteristics be?

For example, I could see that it should

  1. Be able to extract repeatable & reproducible scenarios
  2. Be able to analyze the events to a failure
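One way to picture characteristic 2 is a session log that timestamps every action taken during a session, so the events leading up to a failure can be pulled out afterwards. Here's a minimal sketch in Python; the class and method names are hypothetical, not an existing tool:

```python
import time


class SessionLog:
    """Records timestamped events during an exploratory testing session."""

    def __init__(self):
        self.events = []

    def record(self, action, detail=""):
        self.events.append({"time": time.time(), "action": action, "detail": detail})

    def events_before_failure(self, window=5):
        """Return up to `window` events preceding the first recorded failure."""
        for i, event in enumerate(self.events):
            if event["action"] == "failure":
                return self.events[max(0, i - window):i]
        return []


log = SessionLog()
log.record("open", "login page")
log.record("submit", "form with empty password")
log.record("failure", "HTTP 500 from /auth")
print([e["action"] for e in log.events_before_failure()])  # ['open', 'submit']
```

A real tool would of course capture far richer events (screenshots, network traffic, browser state), but even a log this simple makes a failure's lead-up reproducible.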

Hi there! I’ve been looking for practical examples of how to approach API/WS testing using exploratory testing. I’d like to know what differences you see compared to testing a web app, for example?

  • What is the best way to document and share exploratory test results with the team?

  • Any tips on how to get started and implement exploratory testing into your normal process?

  • What were some struggles or challenges you came across when implementing exploratory testing and how did you overcome them?

Most of the time when people talk about exploratory testing, they seem to be testing through the UI. What are your experiences doing exploratory testing on APIs or databases? Or in taking exploratory testing in a more technical direction than working with the UI?


I second this post, as I do database testing with ETL and reports generated by SQL, and struggle with how I could use ET over heavily scripted testing.

Hi Elizabeth and Simon, can you elaborate on how to get the most out of your testing notes?

How can we share the results of our exploratory testing for best effect?

What are the fundamental areas we should really work on in order to become better exploratory testers?

It depends! In the story I told about the charters, the people knew a bit about the products that would provide the inputs for and consume the outputs of the API we were building. They weren’t familiar with our particular user stories, existing test automation, or team dynamics, any of which would have helped them give more valuable feedback.

I don’t think you need to be an expert in someone’s discipline to give feedback about their work. Someone less familiar with your project’s context or with exploratory testing (a developer, project manager, product owner, tester on another team, etc.) might still be helpful in reflecting on your testing activities:

  • Say you find something here, will you go fix that bug?
  • Is this more important than that?
  • You said you wanted to see what happens with this kind of data: what about that other kind of data?
  • If you only had three hours to work on this today, which of these charters would you choose and which would you postpone?

Thanks for your questions, Thomas.

I struggle to characterise exploratory testing without sharing information that might read like a definition.

Here’s one way I think about exploratory testing: Exploratory testing provides us with useful insight. It helps discover problems that threaten the value of a product. Exploratory testing gives everyone permission to ask questions, share ideas and offer up compliments to colleagues.

Exploratory testing is a style of software testing that encourages freedom to think and explore within the constraints of a specific goal. It aims to answer questions about risks and relies on documented and shareable observations to help teams make decisions.

Exploratory testing is a testing approach and mindset used by testers and development teams across the world. Those that deliberately practice exploratory testing techniques find a sense of great fulfilment and joy in their testing efforts.

Exploratory testing means different things to people in different contexts. Its characterisation and definition create debate within and outside of software testing communities. It’s OK to interpret it how you wish.

Other testing activities might include things such as checking via automation tools and human checking using techniques such as test cases and test scripts.

I often turn to Dan Ashby’s model. I like how it groups testing activities as either assertive or investigative. The former checks against something explicit and the latter aims to turn tacit and unknown information into something explicit.

And how cool is it that there are a tonne of power hours already devoted to other testing activities and more!


Great question, Thomas. Thanks for asking.

My go-to “good” test: did my session yield information that started a useful conversation, and did this lead to a decision that helped my team move forward in the right direction? I feel I’m adding value if the answer is mostly yes.

I think it’s awesome you called out self-reflection. It’s such an important part of improving our skill as exploratory testers. Sometimes “doing better” is as simple as running another charter/session. And this is why I prefer short time-boxed sessions, say 30 to 45 minutes. My feedback loop is short if I know I could’ve done better.

I once worked in a team where, at the end of a time-boxed testing session, I’d debrief my testing notes in person with another tester – preferably as soon as I’d finished the session. I found this an incredibly useful way to get instant feedback on my approach and discoveries. It was particularly useful when I first joined the team.

I’d love to find a simple way to track exploratory testing effectiveness over the course of a project. And maybe that’s as simple as counting the velocity of testing sessions. Diving into testing metrics is an interesting topic that perhaps warrants a whole power hour!

Though I’ve never done this before, perhaps there’s an opportunity to use a Net Promoter Score (NPS) approach. For example, take a useful sample set of colleagues and ask: “On a scale of 1 to 10, how likely are you to recommend my exploratory testing skills/services?” (where 10 is a slam dunk “Always!” and 1 is a “No chance!”). And run this periodically to track trends.
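As a rough sketch of how such periodic scores could be turned into a single trackable number, assuming the conventional NPS split (9–10 counts as a promoter, 6 or below as a detractor; the function name and thresholds here are illustrative, not part of any standard testing tool):

```python
def nps(scores):
    """Compute an NPS-style score from 1-10 survey responses.

    The result is the percentage of promoters (9-10) minus the
    percentage of detractors (6 or below), so it ranges -100 to 100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))


# Scores gathered from five colleagues after one round of asking
print(nps([10, 9, 8, 7, 3]))  # 2 promoters, 1 detractor out of 5 -> 20
```

Running this after each round and plotting the results over time would give the trend line mentioned above.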


Thanks, Abir.

Here’s a treasure trove to get started. Marcel Gehlen provides a total link fest of exploratory testing material – including plenty of “intro to”. It’s well worth checking out, even if you don’t get through all the links: Pathway Exploratory Testing.


Number one is memory. If that’s working great, a lot of the other stuff is less necessary. But let’s assume memory is fallible.

  • browser developer tools: I spend most of my time in the Network tab for things I’m working on at the moment, but don’t underestimate all the other stuff you can use in there.
  • screenshots: I use the Mac built-in Cmd + Shift + 4 keyboard shortcut for crosshairs, then edit them with arrows, boxes, and text in Preview. I like to use a neutral color that doesn’t scream “You did something wrong” but still draws attention.
  • animated GIFs: They can be better than screenshots. JIRA shows them rotating. I use LICECap despite its off-putting name: https://licecap.en.softonic.com/
  • PyCharm: My IDE for writing Python tests. At least half the mistakes I would otherwise make get caught by auto-complete, syntax highlighting, and error highlighting.
  • Mindmaster: For mindmapping.
  • pen and paper: For everything else.

I like Michael Bolton’s page to understand what it is we’re talking about, Elisabeth Hendrickson’s description about how to do it well, and James and Jon Bach testing the Staples Easy button to show examples of how to decide whether what you’re seeing is expected or not. If you’re ready for a deeper dive, check out the Black Box Software Testing course materials.
