Exploratory
Exploratory testing generally means two things.
The people who developed and expounded the term look at all testing as exploratory, as their definition of testing includes it. It then becomes about what scripting you choose to employ, where a script is anything that influences your testing but that you don't have control or choice over, one example being instructions that you must follow, like written test case steps. The idea is that such scripts can be helpful, but they are additions to testing, not replacements for it.
There's also what you probably see more written about, Exploratory Testing (I use capital letters to differentiate it), which is generally some time a tester puts aside to reduce the amount of time, energy and focus they spend on scripted factors. I don't personally think that the term is particularly helpful, because I prefer to think of myself as the tester, and to treat anything I might use as a cost/benefit decision. I don't put time aside to take responsibility for my testing, I just do it, whether that involves scripts or not. I don't consider automation to be software that does testing; I see it as a tool for which choosing the inputs is a responsibility, questioning the purpose of the internals is a responsibility, considering the risks and limitations is a responsibility, and interpreting the output is a responsibility. I will also basically never choose to use written step-by-step instructions, because I see them as having enormous cost and risk with little benefit. Also I'm an engineer - if I stop solving problems I start creating them.
Forgive me if that seems a bit naive
It doesn't. The field is rife with both good ideas and many ideas without much research behind them. It's confusing and difficult, and the industry can be very tolerant of mediocrity, usually through ignorance rather than design, so we have an ocean of possibilities to consider. I spent a decade working in and on the field of testing; this is just the stuff I've collected while I was there. Ask me about the history of fashion and I will be asking what a culotte is - a word I had to look up the spelling for and still didn't check the meaning of. The worst part about testing is finding out there's a lifetime of reading to do, but luckily, if you like testing, that becomes the best part.
Incidentally, I'm obviously not the end of the conversation on testing; you should take ideas from everywhere and make informed decisions.
Replacing Instructions
Traceability is basically being able to tie what you're doing to why you're doing it. Working from test instructions means that someone else has made the connection between requirements (written/explicit or otherwise) and the testing. The problem is that if you don't take responsibility for that connection you cannot understand what you're trying to do, which limits your ability to think of new problems or otherwise spend your time wisely. That means each tester has to be able to take requirements (written/explicit or otherwise) and make their own decisions about things like coverage. It becomes part of the job.
If you need to communicate what needs to be done, you could do that via charters, which are like high-level instructions that focus on purpose and risk. A charter might read "Test that the login screen rejects non-valid logins. Focus on security concerns and other misuse" or, honestly, anything that you might need to communicate to someone to look into. The tester then tests against that charter, as their mission, with their domain knowledge and contextual understanding, and can tie what they choose to do back to their mission, and also to wider requirements. If you're interested in tracing the value of what you are doing while testing back to your overall mission, then I would recommend test framing as a place to start.
The advantages here are enormous, as testers can be given simpler instructions, at any level, and good testing can come out of it. If you have a hierarchical system of testers you will still need the upper tier to be good communicators with a good understanding of risk and requirements, but the dangers of all of these are lessened. Coverage becomes adaptable, and self-healing as more problems are found. Risks become contextual rather than general, meaning you waste less time testing pointless things. The responsibility for how things are done shifts to skilled human testers, and allows them to use what they learn to improve what they do.
If you're trying to move to a more deformalised system there are a few ways to go about it. You could move towards checklists, down the formality scale, including notes for anything that feels particularly important for whatever reason; this also helps us to think about risk more deeply. You could build some kind of coverage map, or move directly to charters, depending on your situation.
Session-Based Test Management is one way to document and manage testing. I tend to use a session sheet to document my own testing, but rarely use the associated metrics and so on, because that's time I could be spending on testing. Being able to communicate the story of what you did, what you know, what could be a problem, what you need and so on is its own skill, but it comes pretty quickly. I used to write my notes up in OneNote, export them as a PDF and attach it to a JIRA ticket, for one example, and this seemed to work well.
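To make the session sheet idea concrete, here's a minimal Python sketch of what one might capture. The field names and layout are my own illustration, not a standard SBTM schema, and the charter text is the example from earlier:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of a session sheet as a data structure.
# Fields mirror the usual story: mission, notes, problems found, blockers.

@dataclass
class SessionSheet:
    charter: str                  # the mission for this session
    tester: str
    session_date: date
    duration_minutes: int = 90
    notes: list[str] = field(default_factory=list)   # the testing story
    bugs: list[str] = field(default_factory=list)    # problems found
    issues: list[str] = field(default_factory=list)  # blockers and questions

    def summary(self) -> str:
        """One-line summary suitable for attaching to a ticket."""
        return (f"{self.session_date} | {self.tester} | "
                f"{len(self.bugs)} bug(s), {len(self.issues)} issue(s) | "
                f"charter: {self.charter}")

sheet = SessionSheet(
    charter="Test that the login screen rejects non-valid logins. "
            "Focus on security concerns and other misuse.",
    tester="Alex",
    session_date=date(2024, 1, 15),
)
sheet.notes.append("Tried SQL metacharacters in the username field; input was rejected.")
sheet.bugs.append("Error message reveals whether a username exists.")
print(sheet.summary())
```

The point isn't the code itself but the shape: a mission up top, then free-form evidence underneath, which you could just as easily keep in OneNote or plain text.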
Another advantage of these free-form notes is that I can use whatever I like to guide my testing instead of instructions: charters, checklists, risk catalogues, user stories, older charters or ones from someone else. I can also record notes however I like: screenshots, animated GIFs, screen recordings, associated test data, database records, anything that seems pertinent. If I wanted to, and had the setup to do so, I could describe a test I did by including a virtual machine with the exact setup in it and a video of me doing it. That's not usually worth doing, but it shows the possibilities.
Regression tests usually feel more powerful than they are. Our ability to repeat a test is far from absolute. Repeatability is its own topic, but if you're trying to deformalise regression testing, I always think that the conduit between formalism and informalism is purpose. Translate the cases into purpose - what risks are the cases trying to mitigate? Sometimes you'll find there is no purpose, sometimes you'll find that they automate very easily, and sometimes you'll find that your testing is insufficient or wrong. Once you decide on a system to mitigate change risk in a sensible way, that will cut down on attempts to retest every corner of your product, and you can cover purpose instead.
Don't forget that while manual cases look appealing, and comforting, the risks still exist, in terms of miscommunication, time expense, limiting exploration and so on. You're not replacing a perfect system. You need to be able to let go of attempting to replicate situations with cases, and move towards trusting the ability of the testers. A good way is to communicate risk and mission more clearly, perhaps even with a checklist. You will also need testers who commit to being good at testing, because it will take motivation for some people to go off-script. It is more engaging and fun, though.