Power Hour - Exploratory Testing

Great question, Thomas. Thanks for asking.

My go to ā€œgoodā€ test: Did my session yield information that started a useful conversation and did this lead to a decision that helped my team move forward in the right direction?ā€. I feel Iā€™m adding value if the answer is mostly yes.

I think it's awesome you called out self-reflection. It's such an important part of improving our skill as exploratory testers. Sometimes "doing better" is as simple as running another charter/session, and this is why I prefer short time-boxed sessions, say 30 to 45 minutes. If I know I could've done better, my feedback loop for trying again is short.

I once worked in a team where, at the end of a time-boxed testing session, I'd debrief my testing notes in person with another tester – preferably as soon as possible after I'd finished the session. I found this an incredibly useful way to get instant feedback on my approach and discoveries, particularly when I first joined the team.

I'd love to find a simple way to track exploratory testing effectiveness over the course of a project. Maybe that's as simple as counting the velocity of testing sessions. Diving into testing metrics is an interesting topic that perhaps warrants a whole Power Hour of its own!

Though I've never done this before, perhaps there's an opportunity to use a Net Promoter Score (NPS) approach. For example, take a useful sample of colleagues and ask: "On a scale of 1 to 10, how likely are you to recommend my exploratory testing skills/services?" (where 10 is a slam-dunk "always" and 1 is a "no chance!"). Then run this periodically to track trends.
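If I gave it a go, I'd want something lightweight to turn the answers into a trend. Here's a minimal sketch, assuming the standard NPS buckets (9–10 promoters, 0–6 detractors) – nothing here is an official way to score exploratory testing:

```python
# Minimal sketch for turning periodic 1-10 survey answers into a trend number.
# Assumes the standard NPS buckets (9-10 promoters, 0-6 detractors).

def nps(scores):
    """Percentage of promoters minus percentage of detractors."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# One round of (made-up) responses from a handful of colleagues.
print(nps([10, 9, 7, 6, 8, 10, 4]))  # roughly 14: more promoters than detractors
```

Run it after each round of questions and plot the numbers over time to see whether the trend is heading the right way.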

2 Likes

Thanks, Abir.

Here's a treasure trove to get you started. Marcel Gehlen provides a total link fest of exploratory testing material – including plenty of "intro to" pieces. It's well worth checking out, even if you don't get through all the links: Pathway Exploratory Testing.

2 Likes

Number one is memory. If that's working great, a lot of the other stuff is less necessary. But let's assume memory is fallible.

  • browser developer tools: I end up looking at the Network tab most for the things I'm working on at the moment, but don't underestimate all the other stuff you can use in there.
  • screenshots: I use the Mac built-in Cmd + Shift + 4 keyboard shortcut for crosshairs, then annotate them with arrows, boxes, and text in Preview. I like to use a neutral color that doesn't scream "You did something wrong" but still draws attention.
  • animated GIFs: They can be better than screenshots, and JIRA shows them animating. I use LICEcap despite its off-putting name: https://licecap.en.softonic.com/
  • PyCharm: My IDE for writing Python tests. At least half the mistakes I would otherwise make get caught by auto-complete, syntax highlighting, and error highlighting.
  • Mindmaster: For mindmapping.
  • pen and paper: For everything else.
1 Like

I like Michael Bolton's page for understanding what it is we're talking about, Elisabeth Hendrickson's description of how to do it well, and James and Jon Bach testing the Staples Easy button as an example of how to decide whether what you're seeing is expected or not. If you're ready for a deeper dive, check out the Black Box Software Testing course materials.

1 Like

The biggest thing that can help you figure out whether you're doing good or bad testing is reflecting. Ask yourself: Did I repeat tests without varying anything and expect different results? Did I change so many variables that it was difficult to determine cause and effect? Will the information I discovered be useful in the future?

Pairing or mobbing while exploratory testing helps you reflect in the moment and gives you a separate account of events for reflecting on later. If you're testing by yourself, debriefing your testing with someone on your team will help you do better testing next time. Here are two lists of things you could ask during a debrief:

1 Like

Hi Thomas and Sharon – I love your questions, as testing notes are a big passion of mine!

My default approach is to use TestBuddy (a product in progress that I'm developing with @rajit). During a time-boxed testing session I write down most of what I'm thinking and what I observe – kinda like a newspaper reporter taking notes at the scene of a breaking story. I do this to give myself the best opportunity of remembering stuff to share with my target audience. They'll also get an insight into why and how I explored, not just what I discovered.

I enjoy using the PQIP approach: I document Problems, ask Questions, share Ideas and give Praise for stuff I discover that I think is cool. So my notes are written in long form and tagged/labelled with a P, Q, I or P – well, I actually use iconography and colours to convey each word. Parts of my notes aren't labelled at all if they're just thoughts or running commentary.

Here's an example of what that all looks like (311.6 KB). I share a bit more detail about this approach in this post: What is Exploratory Testing? Four Simple Words to Level Up Your Testing Efforts
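And for anyone who likes to think of notes as data, here's a rough sketch of how PQIP-tagged notes could be represented in code. It's purely illustrative – the note texts are invented and this isn't TestBuddy's actual data model:

```python
# Illustrative sketch of PQIP-tagged session notes; the notes below are made up.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional


class Tag(Enum):
    PROBLEM = "Problem"
    QUESTION = "Question"
    IDEA = "Idea"
    PRAISE = "Praise"


@dataclass
class Note:
    text: str
    tag: Optional[Tag] = None  # untagged notes are just running commentary
    timestamp: datetime = field(default_factory=datetime.now)


session = [
    Note("Checkout page loads noticeably slower with an empty basket", Tag.PROBLEM),
    Note("What should happen if the discount code expires mid-session?", Tag.QUESTION),
    Note("Could we seed loyalty-tier test data automatically?", Tag.IDEA),
    Note("The inline validation messages are really clear", Tag.PRAISE),
    Note("Switching to the admin view to compare order totals"),  # commentary
]

for note in session:
    label = note.tag.value if note.tag else "-"
    print(f"[{label:<8}] {note.text}")
```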

I don't tend to vary my approach. I'm kinda biased towards long-form note-taking, and whenever I try to do less or do something else, like use a mind map, I tend to find I'm missing out. It's hard to break my current note-taking addiction. :slight_smile: But of course I'm open to evolving this approach, and no doubt it'll evolve in some form.

Testing notes are the foundation of successful exploratory testing – without them I'd be lost.

1 Like

I'd recommend the resources Simon and I mentioned here: Power Hour - Exploratory Testing - #4 by abir.

Though Simon's answer also reminded me of Katrina Clokie's Testing for Non-Testers Pathway.

Thanks for your question, Rosie.

My go-to tool for triggering ideas for an exploratory testing session is the Heuristics Cheat Sheet – an epic and snappy all-in-one companion. It just gives me a little nudge and reminds me, "Oh yeah, I could do that!"

There are also tonnes of useful tools in this thread.

Plus there's the terrific TestSphere. It flips your brain into many different exploratory testing positions, and with such variety you have more options available to trigger test approaches for a productive exploratory testing session.

When testing a web application, my go-to tools are Chrome's developer tools, application logs and an excellent screen recorder that churns out GIFs, such as LICEcap. It's super important to check the Console and log output as you explore: the app might appear to be in good shape for the user, but problems may lurk below the surface. I also use my own tool for managing my testing sessions and capturing testing notes: TestBuddy.

1 Like

My default is pen and paper, and I take notes linearly. This helps when I want to remember what sequence things happened in, but not when I've forgotten where I was or what I was doing when they happened. When I think to, I leave space for non-linear additions. Writing things down helps me remember them better, but I only write down enough for me to remember later – someone else couldn't make sense of my notes.

When I notice something that's off-topic for the session but that I want to follow up on, I grab my notebook. For things that are hard to write down (UUIDs) or that the developers will want to reproduce themselves (URLs, JSON request bodies, etc.), digital notes win. For more visual things, I use animated GIFs or annotated screenshots.

1 Like

We can and we should! But that wasn't your question.

Use examples. Show people what you've found through exploratory testing. Show people what you wanted to explore but didn't have the time, tools, or expertise to. Connect what you're doing back to its value to the business.

Here are some ideas to get started:

  • Pair and learn with someone who has solid practical experience with exploratory testing techniques and mindset. Perhaps do what Elisabeth Hocke did and pair with folks outside of her company via a Testing Tour.
  • Read James Bach and Michael Bolton's Exploratory Testing 3.0 article. It'll likely trigger something in you that you'll wanna continue exploring.
  • Watch this 8-minute video: Start Exploratory Testing Today. We put this together to give the viewer a practical exploratory testing technique to try out for real.
  • Read Chris Kenst's super quick intro to exploratory testing.
  • Explore the Exploratory Testing category of this here forum. How meta! :metal:
  • Post a tweet: "I'm new to exploratory testing! Where should I start?" The testing corner of the Twitterverse is incredibly welcoming and will be sure to share some excellent ideas, and the reach will likely provide a variety of useful perspectives.
  • Try this: take your next story/feature to test, set a timer for 30 minutes, and explore the product without any test cases or test scripts. Explore something specific and write down everything you think and discover. Stop at 30 minutes and reflect on the experience. You've made your first step towards structure beyond just ad hoc or random "try to break it" testing.
2 Likes

Yes, both @simon_tomes and I worked at Medidata, which produces clinical trial software regulated by the FDA in America and by several other governing bodies in Europe and Asia whose names I now forget. For us, there was a bigger emphasis on documenting our charters on our stories: we wanted evidence that particular risks were explored. But the style and level of documentation weren't specified. Find out what your regulators look for, and don't provide more paperwork than they need!

This is an excellent question, Rosie. And I think it's a super important one to ask ourselves as often as we can.

I see two potential barriers:

  1. Folks are concerned they might get it wrong (whatever that means!), i.e. say something about the value of exploratory testing that isn't pitch-perfect in some person's view
  2. Folks are worried about sharing in creative ways in case they look silly or a message doesn't land with their audience

It's completely understandable why these two blockers exist – well, at least in my world; I sometimes feel like this every day. But I check myself and remind myself that I have thoughts to share and questions to ask. Exploring exploratory testing is exciting, and I encourage people to share their stories about how they find it valuable. You never know who you might inspire – and even if it's just one person, it's totally worth it! There's a huge opportunity for the testing community to inspire people with their exploratory testing stories in other amazing tech communities across the globe.

Recently I've been experimenting with short, snappy videos on Twitter. The 2-minute-20-second time limit is an excellent constraint for forcing you to get to the point. I'd love to see more people give this sort of thing a go – and to hell with what anyone thinks! The more I share the more I learn, and the more I learn the more I share.

I'm curious about what's making you perceive your exploratory testing efforts as unsuccessful, and why a tool would be the answer. I don't know what extracting scenarios means. I agree I'd want to be able to analyze the events behind a failure; on my current project I use the browser's network console and the application logs for this. What do you use now?

Hi Nick. Thanks for your questions!

I once worked as an exploratory tester at a software house that developed applications for the healthcare industry. And so did @ezagroba. :slight_smile:

It was super regulated, yet there were a tonne of talented exploratory testers across multiple sites. It was so cool to see, as there was a real drive to leverage the value of exploratory testing to uncover risks that test cases would be unlikely to discover. Exploratory testing worked in conjunction with scenario-based acceptance tests, which were written by the exploratory testers and typically implemented as automated checks by engineers/developers.

Audit teams were keen to see the acceptance test scenarios as evidence of quality checks for regulatory purposes. I can't actually remember whether they viewed exploratory testing notes or not. I don't think they did, even though those notes were attached to the issue/project tracking software.

Some tips come to mind:

  • With any change, I think it's important to establish what's happening right now before jumping in with a vision to do something different – if indeed that is the vision or goal
  • Try the smallest possible exploratory testing session alongside existing testing approaches
  • Attempt to share the value of exploratory testing as a partner to non-exploratory testing techniques. Sharing real-life examples will help. For example: "Hey, this set of scripted checks found these bugs, and this set of exploratory testing charters helped us discover these unknowns and also the following problems. We also used exploratory testing techniques to review stories and system diagrams before we wrote a line of code. It helped us turn some implicit information into explicit scenarios for our acceptance tests."

I'm exploratory testing an API right now at my job! The other day, I was pairing with a developer to write a Python test that would call a particular API endpoint. We looked at providing a string just over the character limit, a really long string, an empty string, punctuation characters, and a string that didn't match the correct UUID format. We only saved the last test, because the others covered behavior characteristic of the software we were using to build our application.
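To make that concrete, here's a hedged sketch of what those checks could look like with pytest and requests. The endpoint URL, field names, and the 255-character limit are all made-up assumptions, not the actual API we were testing:

```python
# Sketch of exploratory API checks; endpoint, fields, and limits are invented.
import uuid

import pytest
import requests

ENDPOINT = "https://api.example.test/v1/items"  # hypothetical endpoint


@pytest.mark.parametrize("name", [
    "a" * 256,          # just over an assumed 255-character limit
    "a" * 100_000,      # a really long string
    "",                 # an empty string
    "!@#$%^&*()<>\"'",  # punctuation characters
])
def test_rejects_awkward_names(name):
    response = requests.post(ENDPOINT, json={"name": name, "owner_id": str(uuid.uuid4())})
    assert response.status_code == 400


def test_rejects_malformed_uuid():
    # The one check we ended up keeping: an id that isn't a valid UUID.
    response = requests.post(ENDPOINT, json={"name": "widget", "owner_id": "not-a-uuid"})
    assert response.status_code == 400
```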

Does that kind of thing sound different from testing a web app? I don't think so. Let Elisabeth Hendrickson's Test Heuristics Cheat Sheet inspire you.

@j19sch and @ThePirateTester had a similar question about testing deeper layers of stuff. I think you can still ask these kinds of questions about deeper levels of your stack:

  • What kinds of things do I usually see here? How could I see something different?
  • What have I never seen here? How could I trigger that?
  • What would happen if there were a thousand of these instead of just one?
  • Do these usually occur in sequence, or simultaneously? What if the opposite occurred? What if it were interrupted?
  • How did I get access to this system? What would be the easiest way for someone else to access this if they couldnā€™t already?
1 Like

Thanks for your question, Ashish.

I think structure is important for exploratory testing efforts. I typically go for this structure every time I go exploring (there's a rough sketch of it as a script after the list):

  • Define a Test Exploration Goal (sometimes referred to as a Charter, Goal or Mission)
  • Set a time box
  • Start timer
  • Write notes of what I'm thinking and discovering
  • Tag notes with either a Problem, Question, Idea or Praise tag
  • Stop testing as soon as the timer stops
  • Review notes and tidy up
  • Share in person with someone or share a link to my notes for someone to review asynchronously
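Here's the rough sketch I mentioned – the structure above expressed as a tiny command-line loop. Everything in it (the charter, prompts, tag letters and output file) is illustrative rather than a real tool, and I've used "R" for Praise only to keep the tag keys unique:

```python
# Illustrative sketch of a time-boxed session loop; not a real tool.
import time
from pathlib import Path

CHARTER = "Explore the CSV import flow to discover how it handles malformed files"
TIMEBOX_MINUTES = 30
TAGS = {"P": "Problem", "Q": "Question", "I": "Idea", "R": "Praise"}


def run_session() -> None:
    deadline = time.monotonic() + TIMEBOX_MINUTES * 60
    notes = [f"# Charter: {CHARTER}", f"# Timebox: {TIMEBOX_MINUTES} minutes"]

    # The timebox is checked before each new note is taken.
    while time.monotonic() < deadline:
        entry = input("note (prefix with P/Q/I/R, blank to finish early): ").strip()
        if not entry:
            break
        tag, _, rest = entry.partition(" ")
        label = TAGS.get(tag.upper())
        notes.append(f"[{label}] {rest}" if label else entry)

    # Review and tidy up, then share the file (or a link to it) with a colleague.
    Path("session-notes.md").write_text("\n".join(notes) + "\n")


if __name__ == "__main__":
    run_session()
```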

So in terms of tool characteristics I'll go for:

  • Simple and lightweight with a little bit of basic structure and guidance
  • A simple way to set/start/pause/stop a timer
  • Quick tagging options
  • Easy note-sharing facilities

And here are some ideas for supercharging my exploratory testing efforts: imagine automatically hooking log output up to your notes, or being able to automatically repeat stuff that should be repeatable to save time, e.g. data setup. I also think it would be cool if an exploratory testing tool offered up triggers when asked for one – like a digital version of TestSphere. That would be useful when you need inspiration for the next part of your exploration.
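As a tiny illustration of that last idea, a digital trigger could be as simple as this – the triggers below are my own made-up examples, not TestSphere content:

```python
# Minimal sketch of an on-demand test-idea trigger; the list is invented.
import random

TRIGGERS = [
    "Interrupt the flow halfway through and come back to it",
    "Feed it yesterday's data, or data from a different timezone",
    "Do it twice in quick succession",
    "Try it as a user with the fewest permissions",
    "Watch the logs and network traffic while repeating a happy path",
]

print(random.choice(TRIGGERS))
```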

Thanks for your excellent questions, Amanda and Sharon!

Here are some ideas that hopefully address all your questions:

  • Work with your audience to find out what information is important to them and use that information to tailor how you share your exploratory test results.
  • Sometimes people aren't sure what they want, and that's totally fine. Experiment with what works for you and share it with your audience to see if it resonates. You might end up tailoring your output to each individual. That might take effort, but it might also yield the best outcomes for making useful decisions that benefit your customers.
  • I think with any change it's important to understand what's happening in the current world, and to discover/reflect on that with your existing team. It might make it easier to offer up exploratory testing as a complementary approach to any existing approach to testing. Find someone who is curious about it all and share your experience with them. See where it might lead with small, bite-sized changes. Share success stories that directly link success to exploratory testing efforts. This might give you the best chance of navigating around any obstacles.
  • Pairing with someone who is new to exploratory testing can often ignite something that they might not have realised is ready and waiting to be discovered. And likewise for the person who isn't new to exploratory testing!
1 Like

Hi Jose, thanks for your question.

As well as learning from @ezagroba's story, thoughts and ideas, I'd totally start here: Exploratory Testing an API. Maaret Pyhäjärvi offers up an incredible amount of useful information.

1 Like

Hi Amanda, it sounds like your team isn't on board with the exploratory testing thing yet. I can't say that's a mindset I've encountered much at the particular companies where I've worked, but I do find people aren't super interested in testing that takes a long time, or in "edge cases" they don't want to fix. This happens when I test without much feedback from the team.

Your question about documenting results sounds similar to Sharon's.

The best way to share your results with the team is at a time when they'll listen, in a format they're familiar with. This will vary! I've scheduled hour-long meetings with all the developers and product owners on a feature to discuss exploratory testing results. I've stopped by a developer's desk to show them something weird on my machine. I've Slacked a question to find out whether the thing I thought was weird was still in progress or purposely weird. I've reviewed exploratory testing charters at sprint planning and had my developers drag the important ones to the top.

Set aside time each story, each day, each iteration, whatever you can spare. Showing the results of the interesting questions you get to ask will hopefully buy you more time for it in the future!

1 Like