Have you tried using mission-based charters?

Exploratory testing is all about discovering useful information, and one way to guide that discovery is with a mission-based charter. A charter helps you anchor your session, gives you a focus, and makes it easier to reflect on what you’ve learned.

If you’ve not tried one before, give this challenge a go!

Your challenge (if you choose to accept it):

Run an exploratory testing session using a mission-based charter.
Capture your observations, ask questions, and structure your notes in a way that helps you reflect.

Steps:

  1. Explore any application or website — maybe something familiar or something from the 75+ Testing Practice Websites list on the MoT site.

  2. Set up your note-taking structure — use your own structure or try this three-part format.

    • Mission: Explore <target> using <resource> to discover <information> (the charter template from Elisabeth Hendrickson’s book Explore It!)
    • Summary: Summarise your key findings, observations, and open questions to share with your team. Write this last, but keep it above your full notes, so it’s the first thing your reader reads.
    • Notes: Record your observations, questions, and answers as you test. Consider structuring this as a question-and-answer format (e.g., “What happens if I submit a blank form?” and then note the result).
  3. Run your exploratory testing session

    • Use your mission as a guide, but remain flexible. Explore the feature, investigate unexpected behaviour, and take notes on anything that stands out.
    • Ask questions as you test, such as “What does this do?”, “Why is this designed this way?”, “Is this behaviour intentional?”, or “What happens if I try this differently?”
  4. Review and summarise your notes

    • Identify key observations that could lead to further investigation.
    • Note any usability, accessibility, or functionality concerns.
    • Highlight questions that you’d want to discuss with your team.
  5. Share what you uncovered! Post your structured notes here!

    • Your mission and summary
    • A few of the questions you asked
    • Any observations, surprises, or bugs
    • Anything you’d want to test further

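If it helps to see the three-part format concretely, here’s a minimal sketch in Python (purely illustrative — the target, mission wording, and question below are made-up placeholders, not part of the challenge) that assembles a session-note document with the Summary kept above the full Notes, as step 2 suggests:

```python
# Illustrative sketch: assemble exploratory-session notes in the
# three-part format (Mission / Summary / Notes). Every detail below
# (target, resource, questions) is a placeholder example.

def build_session_notes(target, resource, information, summary, qa_pairs):
    """Return the note document as a string, with Summary above Notes."""
    lines = [
        f"Mission: Explore {target} using {resource} to discover {information}",
        "",
        "Summary:",
        summary,
        "",
        "Notes:",
    ]
    # Record observations in question-and-answer form as you test.
    for question, answer in qa_pairs:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    return "\n".join(lines)

notes = build_session_notes(
    target="the sign-up form",
    resource="boundary values",
    information="input-validation gaps",
    summary="Blank submissions are accepted; needs discussion with the team.",
    qa_pairs=[("What happens if I submit a blank form?",
               "Form submits with no validation error.")],
)
print(notes)
```

The point of writing the Summary last but placing it first is that your reader gets the findings before the raw session log.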
Exploratory testing is even better when it’s collaborative! That’s why I run Software Testing Live! — so testers can learn by watching each other test in real time. See the events page for the next session and the talks archive to learn from past recordings.

We have specialised in exploratory testing since 2001 and developed our own methodology and approach to planning and tracking test coverage. 15 years ago, we had a very large project for which we decided a more formal approach would be beneficial so we used James Bach’s Session Based Testing (SBT) approach (more for our client’s benefit than our own, and it undoubtedly gave us an edge during the bidding process against our more conventional competitors).

Session management
James uses (or used) a Perl application and text files for managing the sessions, but we didn’t like the look of it, so we created an equivalent system using Excel files for the Master Charter List and the session files. We created a desktop application that queried all the Excel files to create the required management reports.
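For anyone curious what that kind of reporting involves, here’s a rough sketch in Python — not our actual tool (which read Excel files), and the session records and field names are invented for illustration — that rolls session data up into the sort of progress summary the management reports provided:

```python
# Rough sketch of session-based test management reporting.
# Our real tool queried Excel master-charter and session files; here
# the sessions are in-memory records to keep the example runnable.

from collections import Counter

def summarize_sessions(sessions):
    """Roll session records up into a simple progress report."""
    status_counts = Counter(s["status"] for s in sessions)
    done = status_counts["done"]
    total_minutes = sum(s["minutes"] for s in sessions if s["status"] == "done")
    return {
        "total_charters": len(sessions),
        "completed": done,
        "percent_complete": round(100 * done / len(sessions)),
        "minutes_spent": total_minutes,
    }

report = summarize_sessions([
    {"charter": "Explore login with malformed input", "status": "done", "minutes": 90},
    {"charter": "Explore search filters", "status": "done", "minutes": 60},
    {"charter": "Explore export to PDF", "status": "open", "minutes": 0},
    {"charter": "(unassigned)", "status": "open", "minutes": 0},
])
print(report)
```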

How it went
The transition from freestyle exploratory testing to timeboxed charters with a specific scope was certainly challenging, and it required discipline to stick to the time allowances and to save out-of-scope test ideas for the debriefing sessions.

We didn’t like having to create the charters at the beginning, which is when you know the least about the system. Of course, that’s the exact same problem you have when writing test scripts at the start of a project (not that I would ever do that). Part of the SBT approach is that you include some unassigned sessions (perhaps 10% of the total number) that you can use for things you discover during the testing, but we soon used them all up.

I don’t think any of us enjoyed using SBT or thought it improved the quality of our testing, but the structured approach and periodic progress reports certainly gave our client confidence in our work.

Management overhead
James reckons that the session-based approach adds an overhead of about 30%, and that aligns with our experience. That overhead, and the lack of benefit to the testers, is why we never used the approach again. That’s not to say that some teams wouldn’t benefit from it, but we were already very good at managing large exploratory testing projects.