Ask Me Anything: Exploratory Testing

Tonight we'll be joined by the wonderful @cakehurst for an Ask Me Anything session all about exploratory testing.

I'll make sure to add any questions that we don't get to and any resources mentioned during the session to this thread.

If you miss the live session, a recording will be available on the Ministry of Testing website for all Club level members once we've edited it and added captions.

Have you got any more questions you'd like to ask after? Add them here :grin:


Resources mentioned in the session

Mind map tools mentioned in the chat

Dan mentioned @lisihocke's talk in the chat

Lisi's blog has lots of info (https://www.lisihocke.com/) and you can also see her pairing posts: A Tester's Journey: Testing Tour

@friendlytester wrote about heuristics

@maaret was hailed as Exploratory Testing royalty :grin: You can follow Maaret on Twitter: https://twitter.com/maaretp. Maaret has lots of resources on Exploratory Testing here: Exploratory Testing Index | Maaret Pyhäjärvi

Clumsy - clumsy, an utility for simulating broken network for Windows Vista / Windows 7 and above

Questions we didn't get to

  1. @simon_tomes: How do you convince people that exploratory testing is valuable?
  2. @maaret: How do you teach new people to do exploratory testing?
  3. @maaret: How do you know how much time to have for exploring when you add a new feature?
  4. @thomjr: I find it difficult to explore when multiple systems are involved in a larger business process, maybe due to the longer feedback cycles. Do you have any tips on how to make it feel less like executing test cases?
  5. @akshayagupta: In which phase does exploratory testing fit? Who is responsible for it? And what tooling is required to implement it?
  6. MVujcic: Can you show an example of your mind map?
  7. @thomjr: How would you characterise Exploratory Testing? And how does it differ from "other testing"?
  8. @thomjr: What are the things you typically document in your exploratory testing notes?
  9. Barry E.: Considering the tendency for minimal documentation when exploratory testing, how do you transfer your test knowledge of your product to a new team member?
  10. @zuzanabadlikova: Is there a possibility to be a good tester, but not do exploratory testing at all? My feeling is that exploration is a kind of essential virtue of a tester.
  11. @tharindraj: In our project, test cases are necessary as the Automation engineer needs them to refer when writing automation scripts. How can we implement exploratory testing in a culture like this?
  12. Shalini: What are the main outcomes after exploratory testing and debriefing? Does it only result in notes of things to do (bugs or fixes) on the task board? Do you store and go back to your charters later? Do they become regression tests? Do you create any test matrix or statistics from your testing? If so, how?
  13. @vukanovic: Should exploratory testing be done by the tester with the most business knowledge, or not?
  14. @tharindraj: How do we keep knowledge and do knowledge transfer while doing exploratory testing? Assume the tester leaves and a new tester joins: since we do not maintain scripts and steps, how can the new tester learn things?
  15. @danielbilling: How do you utilise or avoid biases when you are using your life experience as a heuristic/oracle?
  16. Shalini: Can you show me an example of one of your test charters and test notes?

Whenever I start at a new organisation I set out a contract between myself and the team to bring them on the journey of "let's use exploratory testing". This is something I've discussed in my blog post Why I'm talking to developers about exploratory testing. Explaining what exploratory testing is (and isn't) helps convince people of its use and usefulness.

I also look to answer the following points:

Why should we do this?

  • Show the value and explain what's in it for them (useful information earlier & the ability to collaborate and own design).
  • Set expectations and show that you'll be working with the team to pull in the same direction (not against them).
  • Talk about how it's a plannable style of testing and not just ad hoc or unplanned.

We don't need testing

  • Talk about working together and how this is not test vs. dev. We're not trying to prove them wrong, we're trying to support them and help in their design thinking.
  • Point out that we're providing useful information that can help them and we'll tailor that information to be useful to them.
  • Mention that unit and confirmatory testing can only tell us about what we already know. What about what we don't know?

Won't it slow us down?

  • Talk about how exploratory testing can be planned and timeboxed.
  • Talk about using questioning and critical thinking to shift testing earlier (left) and save time by building the right thing initially.
  • We're still going to do automated regression, that'll speed us up!

What do we devs get out of it?

  • INFORMATION!
  • We're here to help; we can give you product knowledge or help debug issues with our skills and knowledge.
  • Talk about how having us about means they can cognitively offload some analysis onto us, leaving them freer to think about implementation.

I always start by showing people that exploration is something they already do. I ask "When you get a new Phone / Game / App what do you do? Read the manual first or play with it to see what it does?" If you use the product to learn about it, that's exploratory testing!

I've run a number of practical workshops in the past where I work with people to show the basics of exploratory testing:

  • What is exploratory testing (finding and sharing information).
  • How to come up with test ideas by breaking an item into components, risks and tests.
  • Writing test charters using Explore… With… to Discover syntax (see the sketch after this list).
  • Running test sessions and taking notes (I do this practically as an ensemble so we can all work together).
  • Working together to understand what a debrief is and how we can use our test notes to share information (good, bad and questions).
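
If it helps to see the shape of a charter, here's a minimal sketch in Python (the feature and wording are made up purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One exploratory test charter in Explore... With... to Discover form."""
    explore: str      # the target area or feature
    with_: str        # the resources, data or conditions to use
    to_discover: str  # the information we want to find

# A hypothetical charter for a login feature (illustrative only).
login_charter = Charter(
    explore="the login form",
    with_="invalid, empty and very long passwords",
    to_discover="how validation failures are reported to the user",
)

print(f"Explore {login_charter.explore} with {login_charter.with_} "
      f"to discover {login_charter.to_discover}")
```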

I also pair with people one on one to teach skills, show how to keep on track and share any heuristics (rules of thumb) that I use.


The simple answer is that a lot of the time I don't know until I've started looking. Because testing aims to find useful information for a team, testing could take any amount of time while the information it produces is still useful.

In the real world we have deadlines and have to support our teams getting things "done", so I timebox my testing. I will write test charters based on the risks I think are related to the feature (based on breaking it into components and then risks and tests) and will work with the team to prioritise and size these tests. Based on the priority and sizing I can say "Okay, I plan to take this long for this feature, unless anything major comes up that we need to look into."
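
As a rough illustration of that prioritise-and-size step, here's a small Python sketch (the charters, priorities and budget are all invented for the example):

```python
# Invented charters with a priority (1 = highest) and an estimated timebox in minutes.
charters = [
    {"charter": "Explore checkout with expired cards to discover payment error handling", "priority": 1, "minutes": 60},
    {"charter": "Explore checkout on a flaky network to discover timeout behaviour", "priority": 2, "minutes": 45},
    {"charter": "Explore order emails with very long names to discover rendering issues", "priority": 3, "minutes": 30},
]

budget = 90  # minutes agreed with the team for this feature
planned, used = [], 0
for c in sorted(charters, key=lambda c: c["priority"]):
    if used + c["minutes"] <= budget:
        planned.append(c["charter"])
        used += c["minutes"]

print(f"Planned {used} of {budget} minutes:")
for charter in planned:
    print("-", charter)
```

Anything that doesn't fit in the agreed budget goes back to the team to reprioritise, or waits for a later session.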

So really it comes down to working with the team to work out what's interesting and useful for us to test and feed back any information that might require us to change these timings.


Yes I do! Break it down, just like you would for a scenario or a test case. When we think of too big a system, we get bogged down in the e2e flow and can't see what to test, so let's break it down to make it easier.

Product > Feature > Components > Risks > Test ideas

How would you test a chair?
This is a standard question that we testers ask each other and our teams and we tend to find that we get about 5 or so answers before the ideas run out:

  • I sit on it
  • I'd tip it over
  • I'd stand on it
  • I'd kick it

It can be hard to design tests without techniques and skills to help us to come up with lots of ideas quickly. One way we can do this is by thinking about the components of a product, rather than the whole product as one complete entity.

Break it down
Instead of thinking about the whole thing, we can break the product or feature into its component parts and concepts to help us come up with test ideas. Now you're not testing a "chair", you're testing "legs", "colour", "design", "arms", "seat", "material", etc.

From there we can look at the risks for each component and how they might not work.

Risks then lead to test ideas as we look to see how the product (or component) works in relation to that risk.
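
To make that concrete, here's a hedged sketch of the chair example as a simple data structure, following Product > Feature > Components > Risks > Test ideas (the components, risks and ideas are all just examples):

```python
# The chair broken into components, each with a risk and the test ideas it suggests.
chair = {
    "legs": {
        "risk": "a leg gives way under load",
        "test_ideas": ["sit down with increasing weight", "rock from side to side", "inspect each joint"],
    },
    "seat": {
        "risk": "the material stains or tears easily",
        "test_ideas": ["spill a drink on it", "drag something rough across it"],
    },
    "design": {
        "risk": "it's uncomfortable over long periods",
        "test_ideas": ["sit in it for an hour", "ask people of different heights to try it"],
    },
}

for component, details in chair.items():
    print(f"{component}: {details['risk']} -> {len(details['test_ideas'])} test ideas")
```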

(Those images come from a blog post that I wrote for Bloom & Wild in 2020.)


Exploratory testing is using critical analysis and thinking to uncover and share information about a product, design or idea. That means unlike in scripted testing we don't need a product to be able to start testing. We can start exploring ideas or designs as soon as we hear about them, questioning them to find out information that'll shape how they're thought about, designed and implemented.

Responsibility lies ultimately with the team: the team wants better quality in its products, so it's in charge of making sure that testing happens. As a tester in the team you might end up doing the testing (because you have the best skills for it) and you'll certainly champion quality, but the whole team is accountable for quality.

Note: That doesn't mean only testers should be doing exploratory testing. I try to teach my team mates how to do it through pairing so that everyone can start finding more information about quality.

Tooling will depend upon what you're testing, but fundamentally you need anything that'll help you explore and find information about the product, and a way of taking test notes.

Test Notes: Google docs, Excel, XMind, TestBuddy, Screen recorders, Audacity audio, Pen & paper, PicPick screenshot grabbers.

Networking: Charles proxy, Dev Tools, Clumsy network throttling.

APIs: Dev tools, Postman, SOAP UI.

UI: Colour marketing psychology, Accessibility tooling (WCAG 2.1, Axe, WAVE, Lighthouse, A11y)

Back end: SQL readers, IDEs, GitHub.


Here are some screenshots of mindmaps that I've made in the past (I don't have many examples because my work ones are not for public consumption :sweat_smile: ).

Here are some test notes showing that I've captured details about the test session, with a key to easily colour-code the information, and I've pulled out some issues. The notes themselves run left to right, top to bottom as I explore the system. I mark them with colours to make it easy to call out questions and issues when I'm debriefing and sharing the information.



Exploratory testing is a testing technique that we use to interact with and learn about a product, design or idea with the express intention of sharing information about it. It's different from other testing techniques in that:

  • We are looking to learn new things about the item under test, not just confirm what we already know.
  • It is flexible because we can use our own intuition to investigate opportunities that come up within a defined ringfence, rather than sticking to a script.
  • Our reporting focuses on sharing knowledge rather than stats and figures (what does "75.765% of my tests have been run" really mean in terms of how good something is anyway?)

In my notes I tend to capture:

  • The test charter, what I'm testing.
  • Who's done the testing and when (for traceability)
  • What I've done, so this is a narrative of what I've tried and what I've seen
  • Any issues I've found
  • Any questions I come up with during my testing
  • New testing ideas

The idea of the notes is to have something that 1) I can use to share information back to people about what I've looked at, and 2) can be used to learn how the product works for future training.
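
As an illustration only (the field names and contents below are mine, not a standard), a single session note might capture something like this:

```python
# A sketch of one exploratory session note; every value here is invented.
session_note = {
    "charter": "Explore the password reset flow with expired tokens to discover how failures are reported",
    "tester": "Callum",
    "date": "2021-06-22",
    "narrative": [
        "Requested a reset link, let the token expire, then followed it",
        "Saw a generic 500 page rather than an 'expired link' message",
    ],
    "issues": ["Expired reset token returns a 500 instead of a friendly error page"],
    "questions": ["Should an expired link offer to send a fresh one?"],
    "new_test_ideas": ["Reuse a token that has already been consumed"],
}

print(f"{len(session_note['issues'])} issue(s), {len(session_note['questions'])} question(s) to debrief")
```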



I'd say that exploratory testing creates more documentation and test notes than other forms of testing. If you have a specific need and know you'll be handing over a project to others then make your test notes more descriptive and verbose.

I also run debriefs and demos to share what I've learned about the product.

@Barry E I'd be keen to understand more about the view that exploratory testing has a tendency for minimal documentation.


Of course you can be! Different people have different skills in testing based on the needs of their organisations. Just because your organisation isn't in a place where exploratory testing is needed (or can be utilised), that doesn't make you a bad tester at all.

The same testing skills we use in all forms of testing (critical analysis, learning, inquisitiveness, asking questions, coming up with ideas, fearlessness, note taking, sharing information) are present across so many types of testing. Those are the things that make you a good tester, not the specifics of your technique implementation.

(However, knowing exploratory testing will certainly help your information finding massively and will make you more marketable in the job market.)


Exploratory testing helps us find out information about what we don't know. Once we've run our exploration we have information that can be shared back to our automation capability to create scripted tests.

  1. Run exploratory charters to learn information about the product.
  2. Debrief the team to agree what we need to continue behaving in that way.
  3. Share that agreement & test notes with the automation engineers (or write those scripted tests yourself).

And remember that we're not automating everything! Be selective about what needs to be automated from your exploratory testing results.
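
For example, a finding we agreed to persist at a debrief might become a small scripted check like this (a hedged sketch: the endpoint, payload and expected response are placeholders, not a real API):

```python
import requests

def test_expired_discount_code_is_rejected():
    """Behaviour agreed at the debrief: expired codes are rejected with a clear message."""
    response = requests.post(
        "https://example.test/api/basket/discount",  # placeholder endpoint
        json={"code": "SUMMER-2019"},                # a known-expired code in test data
        timeout=5,
    )
    assert response.status_code == 422
    assert "expired" in response.json()["error"].lower()
```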


The output of exploratory testing is information that I share back to the team via my test notes and debriefs. From our debrief we may create fixes, bugs or new tests on the Jira board, based on an agreement with the person I'm debriefing.

I store charters against Jira tickets for traceability, and you can use something like TestRail or Zephyr to help with that. Just write your charter as the scenario and attach your test notes back to the scenario (which is more than good enough to meet ISO standards of auditability).

Anything we learn via exploratory testing can become a regression test in the same way any other test would.

  1. We do the testing to find information
  2. Through a debrief we agree what behaviour we'd like to persist (and what needs a regression test)
  3. We create an automated test script based on what we know about the system (directly, or by sharing back to the SDET that's writing the automation)

I don't create matrices or reports / stats for my exploratory testing because in my organisation there's no call for it. Instead I provide demos and debriefs about the quality of the system and also share information at stand-ups about how good what we're building is.

"I've looked at this endpoint, looking at all of the CRUD behaviours with different valid and invalid data types, including some naughty strings and 4-byte character inputs. It's looking good: the responses show we persist the data in the database, and the error handling for fields X, Y and Z is awesome, with really meaningful error responses. Today I can find out more about the structure of the JSON queries we send, if we think we need it."

rather than

"59.00087% of tests have been run."

In the first instance the team knows what's been covered and what else I might look into (and how good things are). They see that I'm engaged, trying to help them out and working with them. In the second instance, when I just report a number, I'm not really saying anything about the quality of the system; I'm basically just telling people I've done something.
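
To picture the kind of endpoint probing described in that first update, here's a hedged sketch (the URL and field names are made up, and the "naughty" inputs are just a couple of examples):

```python
import requests

# A few awkward inputs, in the spirit of the 'big list of naughty strings',
# including 4-byte UTF-8 characters such as emoji.
naughty_inputs = ["", "   ", "'; DROP TABLE users;--", "😀" * 500, "𠜎𠜱𠝹"]

for value in naughty_inputs:
    response = requests.post(
        "https://example.test/api/orders",   # placeholder endpoint
        json={"customer_name": value},       # placeholder field
        timeout=5,
    )
    # Take notes as we go: the status code plus how any error is reported.
    print(repr(value[:20]), response.status_code, response.text[:80])
```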

If people need a type of reporting at your organisation then I'd recommend creating a document like the below that will allow people to see what testing has been done, details of the quality (through bugs) and an indication of what's left to run and how long that'll take.


A good question. Exploratory testing is learning about our system or product in order to share information about it. Because we're not confirming behaviour, but instead learning about what happens and playing it back, we don't really need to know about the system in advance.

Business knowledge can help you identify issues more easily and shorten the time it takes to spot them, but you can always debrief with a developer or product owner to see where business issues might arise.

Don't fall into the trap of thinking that exploratory testing is like an ad hoc smoke test, just unscripted regression to check what we already think should happen. If we're confirming what we already know then we're not really exploring the system and probably don't need to use exploratory testing techniques for that testing; try scripted testing instead.


As a part of exploratory testing we should be creating test notes and sharing our findings about the system through demos, debriefs and the notes themselves. Regression tests will also be documented through our automation tests that we create.

If you're in an environment where there's a lot of new testers (or any team mates) joining your project then use that as an opportunity to make your test notes more verbose, so that they can be used for training.

Don't fall into the trap of thinking that exploratory testing is chaotic, ad-hoc and not documented. We can plan our tests and charters with a scope and purpose, timebox them, prioritise them and create useful test notes for sharing the information we find.


I think inherently using your own life experience for something will involve bias. Asking "will I like this?" is leaning into your own bias completely. The trick is to know this, be humble and not use this as the only heuristic for your testing. Learn about other heuristics, share your findings with your team mates to see what they think, learn about your biases and also have empathy for your customers that aren't like you.

It's a little harder to think about what other people might want / like from a product, but it's totally doable. The more you learn about other people, the easier that'll be too. Why not look into How Diversity and Inclusion can help improve testing?


The recording is now live too for anyone who wants to watch it back :clap:
