How does a noob tester go about learning exploratory software testing?
How can the testing community get better at explaining the value of exploratory testing?
Have you ever, or do you know of anyone who has, conducted exploratory testing in a regulated environment (banking, automotive, medical, etc)? What tips would you have for those who work in such sectors?
If a tool could help structure your exploratory testing efforts and make them successful, what would its characteristics be?
For example, I could see that it should
- Be able to extract repeatable & reproducible scenarios
- Be able to analyze the events leading up to a failure
Hi there! I’ve been looking for practical examples of how to approach API/web service testing with an exploratory approach. I’d like to know what differences you see compared with testing a web app, for example?
What is the best way to document and share exploratory test results with the team?
Any tips on how to get started and implement exploratory testing into your normal process?
What were some struggles or challenges you came across when implementing exploratory testing, and how did you overcome them?
Most of the time when people talk about exploratory testing, they seem to be testing through the UI. What are your experiences doing exploratory testing on APIs or databases? Or in taking exploratory testing in a more technical direction than working with the UI?
I second this post, as I do database testing with ETL and reports generated by SQL, and struggle with how I could use ET over heavily scripted testing.
Hi Elizabeth and Simon, can you elaborate on how to get the most out of your testing notes?
How can we share the results of our exploratory testing for best effect?
What are the fundamental areas we should really work on in order to become better exploratory testers?
It depends! In the story I told about the charters, the people knew a bit about the products that would provide the inputs for and consume the outputs of the API we were building. They weren’t familiar with our particular user stories, existing test automation, or team dynamics, all of which would have helped them give more valuable feedback.
I don’t think you need to be an expert in someone’s discipline to give feedback about their work. Someone less familiar with your project’s context or with exploratory testing (developer, project manager, product owner, tester on another team, etc.) might still be helpful in reflecting on your testing activities:
- Say you find something here, will you go fix that bug?
- Is this more important than that?
- You said you wanted to see what happens with this kind of data: what about that other kind of data?
- If you only had three hours to work on this today, which of these charters would you choose and which would you postpone?
Thanks for your questions, Thomas.
I struggle to characterise exploratory testing without sharing information that might read like a definition.
Here’s one way I think about exploratory testing: Exploratory testing provides us with useful insight. It helps discover problems that threaten the value of a product. Exploratory testing gives everyone permission to ask questions, share ideas and offer up compliments to colleagues.
Exploratory testing is a style of software testing that encourages freedom to think and explore within the constraints of a specific goal. It aims to answer questions about risks and relies on documented and shareable observations to help teams make decisions.
Exploratory testing is a testing approach and mindset used by testers and development teams across the world. Those who deliberately practice exploratory testing techniques find a sense of great fulfilment and joy in their testing efforts.
Exploratory testing means different things to people in different contexts. Its characterisation and definition create debate within and outside of software testing communities. It’s OK to interpret it how you wish.
Other testing activities might include things such as checking via automation tools and human checking using techniques such as test cases and test scripts.
I often turn to Dan Ashby’s model. I like how it groups testing activities as either assertive or investigative. The former checks against something explicit and the latter aims to turn tacit and unknown information into something explicit.
And how cool is it that there are a tonne of power hours already devoted to other testing activities and more!
Great question, Thomas. Thanks for asking.
My go-to “good” test: “Did my session yield information that started a useful conversation, and did this lead to a decision that helped my team move forward in the right direction?” I feel I’m adding value if the answer is mostly yes.
I think it’s awesome you called out self-reflection. Such an important part of improving our skill as exploratory testers. Sometimes “doing better” is as simple as running another charter/session. And this is why I prefer short time-boxed sessions, say 30 to 45 minutes. My feedback loop is short if I know I could’ve done better.
I once worked in a team where at the end of a time-boxed testing session I’d debrief my testing notes in person with another tester – preferably as soon as possible after I’d finished my session. I found this an incredibly useful way to get instant feedback on my approach and discoveries. It was particularly useful when I first joined the team.
I’d love to find a simple way to track exploratory testing effectiveness over the course of a project. And maybe that’s as simple as counting the velocity of testing sessions. Diving into testing metrics is an interesting topic that perhaps warrants a whole power hour!
Though I’ve never done this before, perhaps there’s an opportunity to use a Net Promoter Score (NPS) approach. For example, take a useful sample set of colleagues and ask: “On a scale of 1 to 10, how likely are you to recommend my exploratory testing skills/services?” (where 10 is a slam dunk “Always!” and 1 is a “No chance!”). And run this periodically to track trends.
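To make the idea concrete, here’s a rough sketch of how such a score could be tallied. Note this is purely illustrative: the post suggests a 1-to-10 scale, but classic NPS uses 0-10, and the promoter/detractor bands below follow that classic convention.

```python
# Illustrative NPS-style tally, not a definitive method: classic NPS counts
# 9-10 as promoters and 0-6 as detractors, then reports the difference
# as a percentage of all responses (range -100 to +100).
def nps(scores):
    """Return a Net-Promoter-style score from a list of survey ratings."""
    promoters = sum(1 for s in scores if s >= 9)   # enthusiastic "yes"
    detractors = sum(1 for s in scores if s <= 6)  # unlikely to recommend
    return round(100 * (promoters - detractors) / len(scores))

# Example: ratings from a hypothetical sample of eight colleagues
print(nps([10, 9, 9, 8, 7, 6, 10, 4]))  # → 25
```

Running this quarterly and plotting the number over time would give the trend line mentioned above.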
Here’s a treasure trove to get started. Marcel Gehlen provides a total link fest of exploratory testing material – including plenty of “intro to”. It’s well worth checking out, even if you don’t get through all the links: Pathway Exploratory Testing.
Number one is memory. If that’s working great, a lot of the other stuff is less necessary. But let’s assume memory is fallible.
- browser developer tools: I end up looking the most at the Network tab for things I’m working on at the moment, but don’t underestimate all the other stuff you can use in there.
- screenshots: I use the Mac built-in Cmd + Shift + 4 keyboard shortcut for crosshairs, then edit them with arrows, boxes, and text in Preview. I like to use a neutral color that doesn’t scream “You did something wrong” but still draws attention.
- animated GIFs: They can be better than screenshots. JIRA plays them on a loop. I use LICECap despite its off-putting name: https://licecap.en.softonic.com/
- PyCharm: My IDE for writing Python tests. At least half the mistakes I would otherwise make get caught by auto-complete, syntax highlighting, and error highlighting.
- Mindmaster: For mindmapping.
- pen and paper: For everything else.
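Since Python tests came up in the list above, here’s a hypothetical example of the kind of small check where an IDE like PyCharm earns its keep: the helper and its behaviour are invented for illustration, but a typo in any of these names would be flagged before the code ever runs.

```python
# Hypothetical helper and test, purely for illustration: auto-complete and
# error highlighting in an IDE catch misspelled names like 'normalise_email'
# at edit time rather than at run time.
def normalise_email(address: str) -> str:
    """Trim surrounding whitespace and lower-case an email address."""
    return address.strip().lower()

def test_normalise_email():
    assert normalise_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalise_email("bob@test.org") == "bob@test.org"

test_normalise_email()
```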
I like Michael Bolton’s page to understand what it is we’re talking about, Elisabeth Hendrickson’s description about how to do it well, and James and Jon Bach testing the Staples Easy button to show examples of how to decide whether what you’re seeing is expected or not. If you’re ready for a deeper dive, check out the Black Box Software Testing course materials.
The biggest thing that can help you figure out whether you’re doing good or bad testing is reflecting. Ask yourself: Did I repeat tests without varying anything and expect different results? Did I change so many variables that it was difficult to determine cause and effect? Will the information I discover be useful for the future?
Pairing or mobbing while exploratory testing can help you reflect both in the moment and have a separate accounting of events for reflecting later. If you’re testing by yourself, debriefing your testing with someone on your team will help you do better testing the next time. Here are two lists of things you could ask during a debrief:
Hi Thomas and Sharon, love your questions as testing notes are a big passion of mine!
My default approach is to use TestBuddy (a product in progress that I’m developing with @rajit). During a time-boxed testing session I write down most of what I’m thinking and what I observe. Kinda like a newspaper reporter taking notes at the scene of a breaking story. I do this to give myself the best opportunity of remembering stuff to share with my target audience. They’ll also get an insight into why and how I explored and not just what I discovered.
I enjoy using the PQIP approach: I document Problems, ask Questions, share Ideas and give Praise for stuff I discover that I think is cool. So my notes are written in long form and tagged/labelled with a P, Q, I or P – well, I actually use iconography and colours to convey each word. And parts of my notes aren’t labelled if they’re just thoughts or running commentary.
Here’s an example of what that all looks like. I share a bit more detail about this approach in this post: What is Exploratory Testing? Four Simple Words to Level Up Your Testing Efforts
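As a rough illustration of the PQIP idea (this is my own sketch, not TestBuddy’s actual format), tagged session notes could be modelled like this, using full-word tags to sidestep the P/P collision that the post solves with iconography and colours:

```python
from collections import defaultdict

# Hypothetical PQIP-tagged session notes: each line is optionally labelled
# problem/question/idea/praise; untagged lines are running commentary.
notes = [
    ("problem", "500 error when the payload has an empty 'name' field"),
    ("question", "Should the API accept Unicode in display names?"),
    (None, "Switching to the staging environment now"),
    ("idea", "Try very long names in the next session"),
    ("praise", "The validation messages are really clear"),
]

def group_notes(entries):
    """Group tagged lines by label for a debrief; commentary is left out."""
    grouped = defaultdict(list)
    for tag, text in entries:
        if tag is not None:
            grouped[tag].append(text)
    return dict(grouped)

print(group_notes(notes))
```

Grouping the tagged lines like this makes it easy to walk a debrief partner through problems first, then open questions, ideas, and praise.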
I don’t tend to vary my approach. I’m kinda biased towards long-form note taking, and whenever I try to do less or do something else, like use a mind map, I tend to find I’m missing out. It’s hard to break my current note-taking addiction. But of course I’m open to evolving such an approach. And no doubt it’ll evolve in some form.
Testing notes are the foundation for successful exploratory testing and without them I’d be lost.