Let's get rid of exploratory testing

First off, I'm not seriously suggesting that we get rid of exploratory testing. However, I'm interested in hearing about any experiences where people have ONLY tested something using scripted testing.

I once moved to a new project where the team relied far more on strict, rigid manual test cases. There was little or no scope for exploratory testing.

I decided to follow their process before making suggestions about how to improve things.

Sticking to such a rigid process meant that bugs were found later, and in some cases not at all. The focus was on running the tests precisely as they were written, and in that order. Some bugs lay in later test cases that were yet to be run; others involved steps that were not covered at all.

Reviewing the test cases was another issue, one that almost drove me mad. The high number of detailed steps made each script difficult to read, and their repetitive nature meant that mistakes crept in.

Has anyone else had similar experiences?

I think scripts are important, but there should be a better balance between scripted and exploratory tests.


Been there! Very much disliked being so restricted. Scripts should be a guide with freedom to explore around the path. I try to write in an open way unless I’m explaining something tricky.

For example:
Sign in with valid details
Sign in with invalid details

These allow the tester to use their craft, skill and imagination to exercise the system. Much better than: put username bob@bob.com in the username field, tab to password… :roll_eyes:
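To make that concrete, here's a minimal sketch of what "open" steps can look like when written as code. Everything here is invented for illustration: the `sign_in` stub, the account store and the credential rules are all hypothetical, and the point is only that the script fixes the *intent* while leaving the concrete inputs to the tester (or a parametrised runner).

```python
# Hypothetical stub: a stand-in for whatever the real sign-in does.
# The account store and rules below are invented for this sketch.
VALID_ACCOUNTS = {"bob@bob.com": "s3cret"}

def sign_in(username: str, password: str) -> bool:
    """Return True only for a known username/password pair."""
    return VALID_ACCOUNTS.get(username) == password

# The "script" stays at the level of intent; the tester chooses
# the concrete cases, however many and however weird they like.
def check_sign_in_with_valid_details(cases):
    return all(sign_in(user, pwd) for user, pwd in cases)

def check_sign_in_with_invalid_details(cases):
    return all(not sign_in(user, pwd) for user, pwd in cases)
```

The "invalid details" step invites blank fields, wrong case, trailing spaces and so on, rather than pinning the tester to a single prescribed keystroke sequence.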


In my last role, we worked to scripts (because we knew no better). But because it was a brand new application we were testing, I started drafting the scripts from the specification docs and was incorporating sub-scripts from the start that brought in some elements of exploratory testing before there was working software available. The company were astonished at this novel approach, and the CEO declared the first release “Best tested software ever!”. (Sadly, what we didn’t know was that the system requirements were so badly drafted - by consultants and someone no longer with the company - that the app didn’t survive first contact with Real World users. So it goes.)

In a previous role, I tested the software for biochemical assay machines - the devices that run blood tests in hospitals. The regulatory regime that the company had to work under required strict formal scripted testing for compliance reasons. Nonetheless, it was still possible to use those scripts for some off-piste exploration, only performing a formal scripted test run on specified releases that we were told were ready for sign-off. (Yes, I know that’s a term we’re not supposed to use these days.)

Even then, there was scope for some creativity. The completed installation was required to be used with touch-screen keyboards, to try to cut down on cross-infection in laboratory environments. When I finally got my hands on one, I carried out a series of regression test runs, again using the scripts, to see if the new hardware had any compatibility issues. This was useful, because the workflow required some real-time coding of sample ID labels, and I found that the drivers for these touchscreen keyboards didn’t support keyboard shortcuts useful in text entry (in particular, cut & paste commands such as Ctrl-X, Ctrl-C and Ctrl-V).


At my workplace, we ONLY use scripted testing.

Honestly, it's painful and frustrating, but the "Powers That Be" don't feel comfortable with exploratory testing, as they (falsely) believe they can't trace it back to requirements or satisfy all of our audit requirements.

The project I'm on is testing a database, to confirm that customers who meet a specific set of criteria are identified and deleted from the database. No GUI, no API, pure SQL.
I write a test matrix to work out what test scenarios I can come up with. Then I have to produce a detailed set of steps explaining what has to be done and what is expected, such as "Connect to database. DBA runs X job; job runs successfully", so it can be reviewed and put into Jira for someone else to perform the execution.
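For what it's worth, that kind of "rows meeting the criteria are gone, everything else survives" check can also be expressed directly as code. This is only a sketch under invented assumptions: the `customers` schema, the `status`/`balance` criteria and the `run_deletion_job` stand-in for the DBA-run job are all hypothetical, using an in-memory SQLite database so the example is self-contained.

```python
import sqlite3

def setup_demo_db():
    """Build a tiny in-memory database with an invented schema."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, status TEXT, balance REAL)")
    conn.executemany(
        "INSERT INTO customers VALUES (?, ?, ?)",
        [(1, "closed", 0), (2, "open", 0), (3, "closed", 50.0)],
    )
    return conn

def run_deletion_job(conn):
    """Stand-in for the DBA-run job: delete customers matching the criteria."""
    conn.execute("DELETE FROM customers WHERE status = 'closed' AND balance = 0")

def remaining_matching(conn):
    """After the job runs, no rows should still match the deletion criteria."""
    cur = conn.execute(
        "SELECT COUNT(*) FROM customers WHERE status = 'closed' AND balance = 0"
    )
    return cur.fetchone()[0]
```

Two assertions then cover both halves of the requirement: the matching count drops to zero, and the non-matching rows are untouched.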


Hello Louise!

When I had Test Lead gigs, I actively coached testers away from writing scripted tests. Rather, I encouraged them to make a list of open-ended questions based on risks we wanted to explore.

I think the place for scripts may be in documentation. That is, if a tester needs to evaluate some behavior that exists on page 3 of a 5-page workflow, the steps for navigating to page 3 belong in the documentation. That way, the test can refer to the documentation and focus on the action under test.

In my opinion, having a tester write "Navigate to the Log In page" (and other similar lines to get to page 3) is administrative overhead: it adds no value to testing, and executing that instruction as a test provides no interesting information because you ain't on page 3 yet.
Ask something like "On page 3, what happens when two check boxes are checked and the name field is left blank?" Therein lies the test. The tester can refer to the documentation, or make their own way to page 3.

I advocate a stroll through the woods over a comfortable walk on the path.



If you don't want entire classes of bugs to go unfound, don't make your new employees start with test scripts.

Tell them to explore. I just started a new job, and am finding an entire class of UX issues by having three weeks to explore and prod the beast. I think my context is lucky: none of my test engineer jobs have involved any web testing; I always work for someone building something that runs on hardware in some way. The upside is that most formal test tools don't support our test cases, so much of the testing I eventually automate is really hard to automate and requires manual prodding on day 1. I'm a bit tired of exploring now, though, and want to spend a few weeks just automating, because that's what I do best. But I'm going to try to carve out more exploring time than I used to in the past.

I think that the jobs I have had that have been purely around interface (SDK) testing work have been lucky, because I can record every exploration easily and just paste that into notepad and run it as a test most of the time. Test frameworks that let you just run arbitrary code “as a test” are great for turning explorations into test cases.
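As a sketch of that "paste the exploration in and run it as a test" idea: in a framework like pytest, any function is a test, so a pasted snippet becomes a pinned-down regression check almost verbatim. The `FakeSDK` here is invented, standing in for whatever interface was being prodded; nothing below is from a real SDK.

```python
# Hypothetical stand-in for the SDK being explored; all names invented.
class FakeSDK:
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        # Returns None for unknown keys rather than raising.
        return self._store.get(key)

def test_exploration_snippet():
    """These lines started life as pasted REPL exploration."""
    sdk = FakeSDK()
    sdk.put("sample", 42)
    assert sdk.get("sample") == 42     # behaviour observed while exploring
    assert sdk.get("missing") is None  # surprise noted, now pinned down
```

The exploration notes and the test are the same artifact, which is exactly why frameworks that run arbitrary code "as a test" make this cheap.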

I've transitioned my workplace away from purely scripted testing to a mix of scripted and exploratory, and am moving our scripted tests away from stepped tests to Gherkin, so I've seen both sides of this.

I would say stepped tests (e.g. Step 1: Log in, Expected result: user is logged in; Step 2: Navigate to… etc.) offer almost no value over other forms of testing. Exploratory is my preferred style simply because it lets the solution and the requirement lead the testing, rather than asking the tester to look into their crystal ball and discern precisely HOW the software will do the WHAT of the requirement. Rewriting stepped tests, cloning stepped tests only to add the word "not" or "don't" in 16 places… I don't miss those days!

Scripted testing has a place, for sure: rigid testing of the requirements makes sense to document in a script. Currently we script in Gherkin, meaning we can execute the scripts manually and then, should we see value in doing so, automate them. You can't automate a stepped test, so why not get a step ahead (pun definitely not intended)?

In terms of the review process, we have agreed a standard mandating all tests of any style have a “purpose” stated as the first part of the summary, for example “The purpose of this test is to ensure login validation fails for invalid credentials”. Our test review process only involves reviewing these “test intents” - frankly a review once the test is written is too wasteful, too late in the process. Begin testing your tests earlier and instead test the ideas, then trust your testers to “fill in the blanks” when it comes to writing or executing the tests (and review the output of that with the team once it’s done!).

It would be wrong to say that nothing can be gained from any type of testing, but I think those of us who have been (or still are) locked in the "bad old days" of stepped, scripted tests would be wary of seeing them as much more than a "bit of fun" exercise to educate others on how tedious, wasteful and laborious testing can be. I do think it's pretty cool that you live in a world where doing something other than exploratory testing represents a break from the norm; that's a sign things are headed in the right direction, for me.

Thanks for the interesting topic!


When scripted testing was required, I quickly reverted to doing this:

  • I would use exploratory testing and session based testing anyway.
  • I would check my notes for which scripts were directly covered.
  • For any potential issues I found, I would add these tests after the fact.
  • I would “fill in the blanks” for any required test that wasn’t covered by exploring, by actually doing the scripted stuff.
  • I would loudly complain to anyone who listened about how few issues were found by “following the script” and use my sessions as evidence.
  • I would loudly complain to anyone who listened about the waste of time in maintaining the scripted test documentation.
  • I would loudly complain to anyone who listened about the lack of good reviews for changes in the documentation. People-who-mattered wouldn’t go near the docs, which meant that they were either poorly reviewed, checked off without an actual review, or simply not reviewed.

This situation happened at two different companies where I worked. The first time, they allowed me to customize our testing process to avoid too many scripted tests. It helped that the test lead who came up with the script documentation left for greener pastures. The second time, they didn’t allow me to customize anything, and I had to keep on keeping on. I left them for greener pastures.


If management are insisting that testing must follow The Script and not deviate from this Path of Righteousness, smile sweetly and ask them whether real-world users will be similarly constrained. One thing I remember hearing early on in my testing career was “if you want to create a completely foolproof system, do not underestimate the ingenuity of complete fools”, for which I’d say exploratory testing is far better than trying to script every single possible way that a user could abuse the system under test!




Most of my career has had that restriction, usually because of satisfying contractual deliverables of test evidence rather than a software quality focus.

Fortunately I’ve been able to implement a simpler way of looking at things:
Scripted tests = prove the “correct” use case (happy path)
Automated tests = automate the “valuable” scripted tests
Exploratory tests = find bugs (which may lead to generating scripted tests if we believe they’re valuable)


Lately I've been becoming a fan of your approach, Gary.

I've heard people talk about pillars of testing, and these three seem to come up often. (I think there are five, sometimes eight, in total.) Their relative size will vary based on the kind of system, but I like to use each one in a way that also assists the others.


Totally agree with Gary's approach as well - we are in the process of moving all our scripted testing to Gherkin (supported by "how to" guide articles on Confluence) for this reason, so the team has freer rein to make decisions about what to automate.

I'd say we're at about a 50/50 ratio of scripted to exploratory, and I'm pushing to go more towards exploratory overall. Our quality has never been higher in the four years I've been with the company.


I wonder how people can create test scripts (maps) without exploring…
Even if we take into consideration TDD, there would be exploration of requirement documents, people’s claims, data, etc.