Power Hour - Exploratory Testing

Great question, Thomas. Thanks for asking.

My go-to test for “good”: did my session yield information that started a useful conversation, and did that conversation lead to a decision that helped my team move forward in the right direction? I feel I’m adding value if the answer is mostly yes.

I think it’s awesome you called out self-reflection. It’s such an important part of improving our skills as exploratory testers. Sometimes “doing better” is as simple as running another charter/session. And this is why I prefer short, time-boxed sessions, say 30 to 45 minutes. My feedback loop is short if I know I could’ve done better.

I once worked in a team where, at the end of a time-boxed testing session, I’d debrief my testing notes in person with another tester – ideally as soon as I’d finished the session. I found this an incredibly useful way to get instant feedback on my approach and discoveries. It was particularly useful when I first joined the team.

I’d love to find a simple way to track exploratory testing effectiveness over the course of a project. And maybe that’s as simple as counting the velocity of testing sessions. Diving into testing metrics is an interesting topic that perhaps warrants a whole power hour!

Though I’ve never done this before, perhaps there’s an opportunity to use a Net Promoter Score (NPS) approach. For example, take a useful sample set of colleagues and ask: “On a scale of 1 to 10, how likely are you to recommend my exploratory testing skills/services?” (where 10 is a slam-dunk “Always!” and 1 is a “No chance!”). Run this periodically to track trends.
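To make the trend-tracking idea concrete, here’s a minimal sketch. Note that standard NPS uses a 0–10 scale with 9–10 counted as promoters and 0–6 as detractors; this adapts that calculation to the 1-to-10 scale suggested above. The quarterly sample data is purely illustrative.

```python
# Hypothetical sketch: tracking an NPS-style score for exploratory
# testing skills over time. Promoters score 9-10, detractors 1-6.

def nps_score(responses):
    """Percentage of promoters minus percentage of detractors."""
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

# Run the survey periodically and compare scores to spot a trend.
quarterly = {
    "Q1": [7, 8, 6, 9, 5],
    "Q2": [8, 9, 9, 7, 10],
}
trend = {period: nps_score(scores) for period, scores in quarterly.items()}
```

A rising score suggests your exploratory testing is resonating; a falling one is a prompt for a debrief conversation, not a verdict on its own.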


Thanks, Abir.

Here’s a treasure trove to get started. Marcel Gehlen provides a total link fest of exploratory testing material – including plenty of “intro to”. It’s well worth checking out, even if you don’t get through all the links: Pathway Exploratory Testing.


Number one is memory. If that’s working great, a lot of the other stuff is less necessary. But let’s assume memory is fallible.

  • browser developer tools: I end up looking the most at the Network tab for things I’m working on at the moment, but don’t underestimate all the other stuff you can use in there.
  • screenshots: I use the Mac built-in Cmd + Shift + 4 keyboard shortcut for crosshairs, then edit them with arrows, boxes, and text in Preview. I like to use a neutral color that doesn’t scream “You did something wrong” but still draws attention.
  • animated GIFs: They can be better than screenshots. JIRA plays them inline. I use LICEcap despite its off-putting name: https://licecap.en.softonic.com/
  • PyCharm: My IDE for writing Python tests. At least half the mistakes I would otherwise make get caught by auto-complete, syntax highlighting, and error highlighting.
  • Mindmaster: For mindmapping.
  • pen and paper: For everything else.

I like Michael Bolton’s page to understand what it is we’re talking about, Elisabeth Hendrickson’s description about how to do it well, and James and Jon Bach testing the Staples Easy button to show examples of how to decide whether what you’re seeing is expected or not. If you’re ready for a deeper dive, check out the Black Box Software Testing course materials.


The biggest thing that can help you figure out whether you’re doing good or bad testing is reflecting. Ask yourself: Did I repeat tests without varying anything and expect different results? Did I change so many variables that it was difficult to determine cause and effect? Will the information I discover be useful for the future?

Pairing or mobbing while exploratory testing can help you reflect both in the moment and have a separate accounting of events for reflecting later. If you’re testing by yourself, debriefing your testing with someone on your team will help you do better testing the next time. Here are two lists of things you could ask during a debrief:


Hi Thomas and Sharon, love your questions as testing notes are a big passion of mine!

My default approach is to use TestBuddy (a product in progress that I’m developing with @rajit). During a time-boxed testing session I write down most of what I’m thinking and what I observe – kinda like a newspaper reporter taking notes at the scene of a breaking story. I do this to give myself the best opportunity of remembering stuff to share with my target audience. They’ll also get an insight into why and how I explored, not just what I discovered.

I enjoy using the PQIP approach: I document Problems, ask Questions, share Ideas and give Praise for stuff I discover that I think is cool. So my notes are written in long form and tagged/labelled with a P, Q, I or P – well, I actually use iconography and colours to convey each word. And parts of my notes aren’t labelled if they’re just thoughts or running commentary.

Here’s an example of what that all looks like. I share a bit more detail about this approach in this post: What is Exploratory Testing? Four Simple Words to Level Up Your Testing Efforts

I don’t tend to vary my approach. I’m kinda biased towards long-form note-taking, and whenever I try to do less or do something else, like use a mind map, I tend to find I’m missing out. It’s hard to break my current note-taking addiction. :slight_smile: But of course I’m open to evolving such an approach. And no doubt it’ll evolve in some form.

Testing notes are the foundation for successful exploratory testing and without them I’d be lost.


I’d recommend the resources Simon and I mentioned here: Power Hour - Exploratory Testing.

Though Simon’s answer also reminded me of Katrina Clokie’s Testing for Non-Testers Pathway.

Thanks for your question, Rosie.

My go-to tool for triggering ideas for an exploratory testing session is the Heuristics Cheat Sheet. An epic and snappy all-in-one companion. It just gives me a little nudge and reminds me, “Oh yeah, I could do that!”.

There are also tonnes of useful tools on this thread.

Plus there’s the terrific TestSphere. It flips your brain into many different exploratory testing positions. And with such variety you tend to have more options available to trigger test approaches for a productive exploratory testing session.

When testing a web application my go-to tools are Chrome’s developer tools, application logs and an excellent screen recorder that churns out GIFs, such as LICEcap. It’s super important to watch the Console and log output as you explore. The app might appear to be in good shape for the user, but below the surface some problems may lurk. I also use my own tool for managing my testing sessions and capturing testing notes: TestBuddy.


My default is pen and paper. I take notes linearly. This helps when I want to remember what sequence things happened in, but not if I forget where I was/what I was doing when they happened. When I think to, I leave space for non-linear additions. Writing things down helps me remember them better, but I only write enough down for me to remember later. Someone else could not make sense of my notes.

When I notice something that’s off-topic for the session but I want to follow up on, I go grab my notebook. If there are things that are hard to write down (UUIDs) or that the developers will want to reproduce with themselves (URLs, JSON request body, etc.), digital notes win. If there are more visual things, I use animated GIFs or annotated screenshots.


We can and we should! But that wasn’t your question.

Use examples. Show people what you’ve found through exploratory testing. Show people what you wanted to explore but didn’t have the time, tools, expertise, etc. to. Connect what you’re doing back to the value to the business.

Here are some ideas to get started:

  • Pair and learn with someone who has solid practical experience with exploratory testing techniques and mindset. Perhaps do what Elisabeth Hocke did and pair with folks outside of her company via a Testing Tour
  • Read James Bach and Michael Bolton’s Exploratory Testing 3.0 article. It’ll likely trigger something in you that you wanna continue exploring
  • Watch this 8 minute video: Start Exploratory Testing Today. We put this together to give the viewer a practical exploratory testing technique to try out for real.
  • Read Chris Kenst’s super quick intro to exploratory testing
  • Explore the Exploratory Testing category of this here forum. How meta! :metal:
  • Post a tweet: “I’m new to exploratory testing! Where should I start?”. The testing community on Twitter is incredibly welcoming and will be sure to share some excellent ideas. The reach will likely provide a variety of useful perspectives.
  • Try this: take your next story/feature to test. Set a timer for 30 minutes. Explore the product without any test cases or test scripts. Explore something specific and write down everything you think and discover. Stop at 30 minutes. Reflect on the experience. You’ll have made your first step towards structure beyond just ad-hoc or random “try to break it” testing.

Yes, both @simon_tomes and I worked at Medidata, which produces clinical trial software regulated by the FDA in America and several other governing bodies in Europe and Asia whose names I now forget. For us, there was a bigger emphasis on documenting our charters on our stories. We wanted evidence that particular risks were explored. But the style and level of documentation wasn’t prescribed. Find out what your regulators look for, and don’t provide more paperwork than they need!

This is an excellent question, Rosie. And I think it’s a super important one to ask ourselves as often as we can.

I see two potential barriers:

  1. Folks are concerned they might get it wrong (whatever that means!) i.e. say something about the value of exploratory testing that isn’t pitch perfect to some person’s view
  2. Folks are worried about sharing in creative ways in case they look silly or a message doesn’t land with their audience

It’s completely understandable why these two blockers exist – well at least in my world. I sometimes feel like this every day. But I check myself and remind myself that I have thoughts to share and questions to ask. Exploring exploratory testing is exciting and I encourage people to share their stories about how they find it valuable. You never know who you might inspire – and even if it’s one person then it’s totally worth it! There’s a huge opportunity for the testing community to inspire people with their exploratory testing stories in other amazing tech communities across the globe.

Recently I’ve been experimenting with short snappy videos on Twitter. The 2 minute 20 second time limit constraint is an excellent way to force you to get to the point. I’d love to see more people give this sort of thing a go! And to hell with what anyone thinks. The more I share the more I learn and the more I learn the more I share.

I’m curious about what’s making you perceive your exploratory testing efforts as unsuccessful. And why a tool would be the answer. I don’t know what extracting scenarios means. I agree I’d want to be able to analyze events from a failure; on my current project I use the browser network console and the application logs for this. What do you use now?

Hi Nick. Thanks for your questions!

I once worked as an exploratory tester at a software house that developed applications for the healthcare industry. And so did @ezagroba. :slight_smile:

It was super regulated, yet there were a tonne of talented exploratory testers across multiple sites. It was so cool to see, as there was a real drive to leverage the value of exploratory testing to uncover risks that test cases would be unlikely to discover. Exploratory testing worked in conjunction with scenario-based acceptance tests, which were written by the exploratory testers and typically implemented as automated checks by engineers/developers.

Audit teams were keen to see the acceptance test scenarios as evidence of quality checks for regulation purposes. I can’t actually remember if they viewed exploratory testing notes or not. I don’t think they did, even though those notes were attached to the issue/project tracking software.

Some tips come to mind:

  • With any change I think it’s important to establish what’s happening right now before jumping in with a vision to do something different, if indeed that’s a vision or goal
  • Try the smallest exploratory testing session alongside existing testing approaches
  • Attempt to share the value of exploratory testing as a partner to non-exploratory testing techniques. Sharing real-life examples will help. For example, “Hey, this set of scripted checks found these bugs and this set of exploratory testing charters helped us discover these unknowns and also the following problems. We also used exploratory testing techniques to review stories and system diagrams before we wrote a line of code. It helped us turn some implicit information into explicit scenarios for our acceptance tests.”

I’m exploratory testing an API right now at my job! The other day, I was pairing with a developer to write a Python test that would call a particular API endpoint. We looked at providing a string just over the character limit, a really long string, an empty string, punctuation characters, and a string that didn’t match the correct UUID. We only saved the last test, because the others were behavior characteristic of the software we were using to build our application.
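The inputs we tried can be sketched roughly like this. The 36-character limit, the helper names and the exact strings are all hypothetical stand-ins for illustration, not the actual test we wrote; the UUID check uses Python’s standard `uuid` module.

```python
# Illustrative sketch of the boundary-value strings described above,
# assuming a hypothetical 36-character limit on the field.
import uuid

CHAR_LIMIT = 36  # assumed limit, purely for illustration

def boundary_inputs(limit=CHAR_LIMIT):
    """Candidate strings worth trying against a string/UUID parameter."""
    return {
        "just_over_limit": "a" * (limit + 1),
        "really_long": "a" * 10_000,
        "empty": "",
        "punctuation": "!@#$%^&*()",
        "not_a_uuid": "not-a-valid-uuid",
    }

def is_valid_uuid(value):
    """Does the string parse as a UUID? (The check our kept test relied on.)"""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False
```

Most of these cases turned out to be covered by the framework itself, which is exactly why we only kept the last one.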

Does that kind of thing sound different from testing a web app? I don’t think so. Let Elisabeth Hendrickson’s Test Heuristics Cheat Sheet inspire you.

@j19sch and @nufenix had a similar question about testing deeper layers of stuff. I think you can still ask these kinds of questions about deeper levels of your stack:

  • What kinds of things do I usually see here? How could I see something different?
  • What have I never seen here? How could I trigger that?
  • What would happen if there were a thousand of these instead of just one?
  • Do these usually occur in sequence, or simultaneously? What if the opposite occurred? What if it were interrupted?
  • How did I get access to this system? What would be the easiest way for someone else to access this if they couldn’t already?

Thanks for your question, Ashish.

I think structure is important for exploratory testing efforts. I typically go for this structure every time I go exploring:

  • Define a Test Exploration Goal (sometimes referred to as a Charter, Goal or Mission)
  • Set a time box
  • Start timer
  • Write notes of what I’m thinking and discovering
  • Tag notes with a Problem, Question, Idea or Praise tag
  • Stop testing as soon as the timer stops
  • Review notes and tidy up
  • Share in person with someone or share a link to my notes for someone to review asynchronously
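The note-taking part of that structure could be sketched as a tiny helper like this. To be clear, this is not TestBuddy’s actual API – just a standalone illustration of time-boxed, PQIP-tagged notes, with all names invented for the example.

```python
# Illustrative sketch of a time-boxed session with PQIP-tagged notes.
from dataclasses import dataclass, field

TAGS = {"Problem", "Question", "Idea", "Praise"}  # the PQIP tags

@dataclass
class Session:
    goal: str                  # the charter / exploration goal
    timebox_minutes: int = 30  # short time box -> short feedback loop
    notes: list = field(default_factory=list)

    def add_note(self, text, tag=None):
        """Untagged notes are fine: running commentary is allowed."""
        if tag is not None and tag not in TAGS:
            raise ValueError(f"unknown tag: {tag}")
        self.notes.append((tag, text))

    def tagged(self, tag):
        return [text for t, text in self.notes if t == tag]

session = Session(goal="Explore checkout error handling", timebox_minutes=45)
session.add_note("Total resets to zero after a failed card", tag="Problem")
session.add_note("What happens on a double submit?", tag="Question")
session.add_note("Nice, clear copy on the retry screen", tag="Praise")
session.add_note("Trying a different card type next")  # untagged commentary
```

The `tagged` filter is what makes the debrief quick: you can walk through all the Problems first, then the Questions, and so on.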

So in terms of tool characteristics I’ll go for:

  • Simple and lightweight with a little bit of basic structure and guidance
  • A simple way to set/start/pause/stop a timer
  • Quick tagging options
  • Easy note-sharing facilities

And for supercharging my exploratory testing efforts here are some ideas: Imagine automatically hooking up log outputs to your notes or being able to automatically repeat stuff that should be repeatable to help save time e.g. data setup. I also think it would be cool if an exploratory testing tool would offer up triggers when asked for one – like a digital version of TestSphere. Useful when you need inspiration for the next part of your exploration.

Thanks for your excellent questions, Amanda and Sharon!

Here are some ideas that hopefully address all your questions:

  • Work with your audience to find out what information is important to them and use that information to tailor how you share your exploratory test results.
  • Sometimes people aren’t sure what they want and that’s totally fine. Experiment with what works for you and share with your audience to see if it resonates. You might end up tailoring your output to each individual. That might take effort yet it might yield the best outcomes for making useful decisions that benefit your customers.
  • I think with any change it’s important to understand what’s happening in the current world. And to discover/reflect on that with your existing team. It might make it easier to offer up exploratory testing as a complementary approach to any existing approach to testing. Find someone who is curious about it all and share your experience with them. See where it might lead with small bitesize changes. Share success stories that directly link success to exploratory testing efforts. This might give you the best chance of navigating around any obstacles.
  • Pairing with someone who is new to exploratory testing can often ignite something that they might not have realised is ready and waiting to be discovered. And likewise for the person who isn’t new to exploratory testing!

Hi Jose, thanks for your question.

As well as learning from @ezagroba’s story, thoughts and ideas, I’d totally start here: Exploratory Testing an API. Maaret Pyhäjärvi offers up an incredible amount of useful information.


Hi Amanda, it sounds like your team is not on board with the exploratory testing thing yet. I can’t say that’s a mindset I’ve encountered much at the companies where I’ve worked, but I do find people aren’t super interested when testing takes a long time or finds “edge cases” they don’t want to fix. This happens when I test without much feedback from the team.

Your question about documenting results sounds similar to Sharon’s.

The best way to share your results with the team is during a time when they’ll listen in a format they’re familiar with. This will vary! I’ve scheduled meetings with all the developers and product owners on a feature to discuss exploratory testing results for an hour. I’ve stopped by a developer’s desk to show them something weird on my machine. I’ve Slacked a question to get an idea if the thing I thought was weird was still in progress or purposely weird. I’ve reviewed exploratory testing charters at a sprint planning and had my developers drag the important ones to the top.

Set aside time each story, each day, each iteration, whatever you can spare. Showing the results of the interesting questions you get to ask will hopefully buy you more time for it in the future!
