Finding a healthy mix of investigative and scripted tests

From an RST point of view, exploration just means that the choices you make at any point are influenced by what you have learned during the process. In that sense, there is no testing in which we don’t explore.

From here on I’m going to differentiate the near-tautological sense of “exploratory testing” from the low-formality, charter-driven activity by calling the latter “Exploratory Testing”, or “ET”.

In your original share you said:

For investigative testing, such as Exploratory

So I assume you mean ET is a subset of investigative? Therefore the discomfort you might be feeling with the term is with the “exploratory nature of testing” versus ET.

I think that Bach’s writing on this helps, when he says:

“… the phrase “exploratory testing” is largely redundant, some testing is especially exploratory (i.e. informal, decided moment by moment by the tester), while some testing is especially scripted (i.e. formal, determined by someone else or at some earlier time). The testing process is always some mix of the two approaches.” - Bach

And hopefully that might clear up the adaptors you need to use to wire your terms with the same concepts. ET is just especially exploratory testing. And here’s some good news for the term Exploratory Testing, where Bach writes:

“In RST, we don’t necessarily use the term exploratory testing, but if testing is very informal, calling it exploratory can be okay; just as if it is very formalized, calling it scripted is reasonable. In general, I think it is more helpful to ask in what way any given testing is exploratory and in what way that same testing is scripted – then ask if that mix makes sense.” - Bach

Where you say:

do we mean investigative and confirmatory?

I don’t know, because I can only give you what that means to me. The more I read it the more I think investigative to you means a kind of Exploratory Testing to me - testing with less formality imposed by explicit scripts. In that sense we’re in agreement in most ways that matter for terminology. Where you say “I recommend a healthy mix of investigative and scripted testing” I say “explicit scripts should be used, as much as necessary for the context, with the understanding of the structure and limitation they impose on us and our testing”, and I believe we likely mean basically the same thing. Your “healthy mix” is my “as much as necessary”.

Of course my history post was about the detail of why, and hopefully put the terms’ origins into context, in the hope that it’s at all helpful to your thread. Unpacking the terms hopefully helps. I’m also willing to go full Socratic with you if you want, to anti-fragile the falsification out of it, but I’ve learned a lot about conversational consent in the last few decades so I’ll only do that if you want it and with a safe word.

Where you say:

I think exploration implies the sense of intent, that we may get from charters or scenarios, and a focus on uncovering unknowns.

I think all kinds of testing have intent and a focus on uncovering unknowns. Without intent there would be no impetus to test, and as for uncovering unknowns: testing is science - a series of experiments that test hypotheses to discover new things in a reliable way - and uncovering unknowns is the goal in either case.

The feeling I get is that ET is perhaps something you think of as defocused, although again I could be wrong. It is lower-formality testing, and because of that it has the capacity to approach testing in a defocused way (multiple factors at a time, broad and varied observations, many models, challenging actions), which is why it has a reputation for finding new, unexpected, elusive and more problems. There’s nothing about ET that forces people to be defocused, though. You still need focus (one factor at a time, precise observation, starting from a known state, following an established procedure) to provide test integrity - to believe with greater certainty the link between what you see and what you conclude from your observations.

I found this a lot when chasing down a bug: you can play around to find a problem, but knowing what caused it and when it occurs takes focused testing, through repetition and changing one thing at a time until you’re left with a more accurate understanding of the behaviour. We can do informal testing, as in testing not specified in advance and not aimed at verifying specific facts, and still be deliberate, structured, explicit and exacting. It’s just that we have to decide to do those things ourselves, whenever appropriate, without the added aid or restriction of the structure that explicit scripts impose. That’s what makes it Exploratory.
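To show what that one-factor-at-a-time focus looks like in practice, here’s a minimal sketch. Everything in it is invented for illustration - the factor names, the configurations, and the `reproduces` function standing in for actually re-running the test. Starting from a failing configuration, each factor is reverted to its baseline value one at a time; a factor whose reversal makes the failure disappear is implicated.

```python
# Hypothetical sketch: one-factor-at-a-time isolation of a failure.
# All names here are invented for illustration.

BASELINE = {"browser": "firefox", "locale": "en", "cache": "warm", "user": "admin"}
FAILING  = {"browser": "firefox", "locale": "tr", "cache": "cold", "user": "guest"}

def reproduces(config):
    # Stand-in for actually running the test. In this toy example the
    # bug needs the Turkish locale AND a cold cache together.
    return config["locale"] == "tr" and config["cache"] == "cold"

def isolate(baseline, failing, repro):
    """Revert one factor at a time from the failing config to baseline.
    A factor whose reversal makes the failure disappear is implicated."""
    implicated = []
    for factor in failing:
        trial = dict(failing)
        trial[factor] = baseline[factor]   # change exactly one thing
        if not repro(trial):               # failure gone -> factor matters
            implicated.append(factor)
    return implicated

print(isolate(BASELINE, FAILING, reproduces))  # -> ['locale', 'cache']
```

The point of the discipline is in the comment “change exactly one thing”: if two factors were flipped per trial we could no longer attribute a disappearing failure to either one, which is exactly the test-integrity argument above.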

Exploratory Testing is also not just cocking about, obviously. It has structure controlled by a responsible tester, and requires knowledge and skill to be done well. The tester’s control determines things like how much focus to apply and when it is valuable or necessary to include it. It’s really just a kind of testing that doesn’t have many explicit scripts in it.

One problem is that my version of the definition of Exploratory Testing is an invention of RST and context-driven ideas. The term is taught elsewhere too; the ISTQB definition, for example, is partly lifted from Bach’s, with some odd changes. Coming to a consensus can be tricky, but hopefully I’ve given enough insight to show where my thinking and RST’s fit in.

I think we can carry out investigative testing, where we don’t explore, and confirmatory testing that doesn’t use scripts.

If I’m taking your meaning correctly you’re taking explore here to mean going off-trail. To colour outside the lines. To, essentially, do stuff the mission did not suggest that we do. I think it’s a fair interpretation of the term, as exploration has a sense of adventure without rules, and new horizons, and I’ll cover it here whether you meant that or not, just to have it said:

The exploratory/scripted continuum is really a matter of the formality of the testing. Formal testing follows something, like a list of instructions; informal testing means the person chooses in the moment what to do. The intent is conveyed either by trying to program a person or computer to repeat actions, as with old formal test cases full of test steps, or with automation, or by giving a person a goal and letting them reach it on their own, as with a charter.

If you perform a series of actions you have pre-determined, and then force yourself to stick to them as rigidly as possible, you can achieve a sense of not going off-trail. But not only is that a terrible way to perform testing, the point at which you pre-determined what you were going to do was itself a decision made from what you understood of previous experience. Context alone affects your decisions in a fundamental way, as does being exposed to the product even once: you can only write high-formality tests for a login page if you know the product has a login page. Sticking to the trail (not “exploring”) then becomes a matter of who designed the trail.

Even if someone else designed the trail and I, as a truly pedantic tester, do everything I can to execute the instructions exactly as written, what happens when something occurs that the trail didn’t consider? If the power goes out, do I still try to click the buttons and type in the fields? More realistically, if a bug prevents me from continuing with the instructions I stop and investigate, and if odd behaviour turns up I might take a quick look, even though my instructions provide no justification for doing so. So it’s very hard not to explore even in this sense - and the harder we try, the more our testing suffers for it. It’s interesting to note that the more I allow myself to ignore scripted instructions, the less scripted the testing is.

It’s even more interesting to ask this question: what are the scripts doing for us that makes us waste so much energy and time like this?
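To make the two ends of the continuum concrete, here’s a minimal sketch. The login page, the step names and the charter fields are all invented for illustration; the point is only the shape of each artefact: a script decides every action and expectation up front (and so can only exist because we already know the login page exists), while a charter states a mission and leaves the moment-to-moment choices to the tester.

```python
# Hypothetical sketch: the same testing intent at two levels of formality.
# All names here are invented for illustration.

from dataclasses import dataclass, field

# High formality: every step and expectation is decided in advance.
SCRIPTED_STEPS = [
    ("navigate", "/login"),
    ("type", "username", "alice"),
    ("type", "password", "correct horse"),
    ("click", "submit"),
    ("assert_url", "/dashboard"),
]

# Low formality: a charter gives a mission and a timebox,
# and the tester decides the actual actions while testing.
@dataclass
class Charter:
    mission: str
    areas: list = field(default_factory=list)
    timebox_minutes: int = 60

login_charter = Charter(
    mission="Explore login error handling to discover how it copes "
            "with malformed, hostile and interrupted input",
    areas=["username field", "password field", "session expiry"],
    timebox_minutes=60,
)

print(len(SCRIPTED_STEPS), login_charter.timebox_minutes)
```

Notice that the script encodes decisions already made, while the charter encodes decisions deferred; that is the formality continuum in data-structure form.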

That all being said, if we understand that testing is necessarily shaped by decisions we make in response to previous findings, then the terms are secondary. The important thing is to do good testing; to do that, to make good decisions; and to do that, to understand that we control our testing, our tools and our explicit scripts. The better we know when and how we introduce cost, risk and limitations in trying to save time, create faux-repetition and provide test integrity, the better we can make those decisions.