In our company at the moment we are trying to “formalise” the way we do our exploratory testing. I was wondering if people here have advice about how to show how this testing is being done, so anyone can view a sprint and see “ah, Brian has done this and that” against the backlog item under test. We are using TFS and I was thinking of using tasks. I am moving away from having to write rigid test cases and getting into the spirit of notes and outcomes (if need be, these can be used by our resident automation folks to turn into regression tests). My hope is that this will turn the focus and time towards actually testing, rather than plodding through writing test cases, which has its own overhead and cost. So any examples of how your team is recording exploratory testing so it has good visibility and traceability would be appreciated.
My hope is that at some point I can create a matrix of bugs found during exploratory testing as well.
Howdy! I’ve never used TFS, let’s just get that out of the way.
If your goal is to expose the contents of testing then you should have a system to attach notes to whatever you use to track tickets/stories. I used to write my notes in a template I built in OneNote then export them to PDF and attach them to JIRA stories. This was the minimum I could get away with to fulfil our requirements and minimise friction to my testing. Anyone who needed to see the sessions I completed and what happened in them could read those PDF files.
This, to my mind, covers the points I think you asked for:
How testing is being done
This does depend on the content and quality of tester notes. They need to know how to tell the testing story (how is the product, how do we know that, how is the testing going). If you need them to provide certain things in a story, such as a charter or environment or timestamp, then I suggest you formalise and enforce that with a rule or a template. Nobody likes paperwork and nobody likes anyone who likes paperwork.
The most compelling system for formalisation of low-formality testing I’ve seen is Session-Based Test Management, I think invented by Jon Bach.
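If it helps to picture it, an SBTM session sheet is just a structured plain-text file with tagged sections. A trimmed-down sketch (the full format includes a few more metrics fields, and all the details below are illustrative, not from a real session):

```
CHARTER
  Explore the checkout flow with invalid payment methods
  to discover how errors are handled and reported.

START: 10:00am    DURATION: 90 min    TESTER: Brian

TEST NOTES
  Free-form narrative of what was tried and what was observed.

BUGS
  #1234 Declined card leaves the order stuck in "pending".

ISSUES
  Couldn't test PayPal; sandbox account unavailable.
```

Because the sheets are structured, they can be parsed and aggregated for reporting, which is where the “management” part of SBTM comes from.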
Don’t forget that ET (like all testing) is exploratory. That means that what you’ll do later is based on what you do now. That means that you cannot predict what you might feel that you need to do until you find out that you need to do it. That means that you can’t write down a list of tasks to do that you know to be complete. You have to balance your resources (logistics: time, energy, focus, money, environments, etc.) with what you need to create (test products: notes, reports, etc.) to fulfil your test strategy.
This goes against the idea of formalisation which generally requires an explicit structure to be defined up front. Consider what you’re trying to achieve with formalisation. To ensure coverage of certain areas or quality criteria? To track testing time? To provide evidence at audit? To support tester debriefs? The answer to this question will help you to decide what you want to have written down, therefore what tools will support your needs. I’m sure your teams have opinions on what formalisation they require, also.
Formalisation can be the enemy of free exploration. As you allude to in your question the idea is to follow ideas, not write them all down. Writing things down is a high cost in terms of time, attention and emotion, and should therefore have a significant benefit to balance it out.
Thanks for this! I attended a web QA with James Bach a couple of weeks ago, and very entertaining it was too. I have to be careful using words like “formalise”. What I was getting at is that, as a QA/testing team, we just want to be able to show the best way to do exploratory testing, driving out the faults/bugs/general improvements while still having a good way of tracking these efforts. I am getting more focused on how to make my job more productive, and less painful, than the antiquated mindset of “oh, you have to write an umpteen-step test case”, which you then have to find some poor soul in the team to read, check, and be bored to death by. Then said test case will just end up in some dusty repository and never be used again. I know there is not one boot to fit every foot, but I am hoping to tailor the basic aspects to my job and the team I work in. I have been umpteen years at this and am still figuring it out, ha!
My tester who does exploratory testing writes a test report. It is just a simple format with a couple of things that need to be filled in per issue: issue number, places tested, and a (short) description of what he did and what he found. That way I know if he went to the right places and if he covered enough in exploratory testing. Nothing really fancy or time consuming, but for me just enough information to know what is done.
For testing I use tasks. If there’s a large or complex set of tests that will need to happen, I’ll use test cases through the web interface, although if I use the test steps, they’re more like lists of things to check at various places in the test.
As an example, my test case might be named “Successful online purchase of Item x”.
The description would have supporting information such as:
“any payment method is valid”
“inventory integrity must be maintained”
“User must be logged on”
Test steps could include things like:
After adding to cart, check that qty is moved to pending status
If qty = available qty, check that other purchases can’t be made
After purchase completes, check that pending qty is cleared and available qty is correct
Generally my goal is to have enough information to be able to repeat the process if I need to. If I suspect that there could be potential issues, I’ll use the TFS exploratory test session tool (either using MS Test Manager or the Firefox or Chrome extension for the web) to capture a session for closer analysis.
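The three cart checks above are also a natural seed for regression automation. A minimal, self-contained sketch in Python, where the `Inventory` class and its method names are hypothetical stand-ins for whatever purchase/inventory API the real system exposes:

```python
# Hypothetical in-memory model of the inventory behaviour under test.
class Inventory:
    def __init__(self, available):
        self.available = available  # units free to purchase
        self.pending = 0            # units reserved in carts

    def add_to_cart(self, qty):
        # Reserve stock; reject the purchase if not enough is available.
        if qty > self.available:
            raise ValueError("not enough stock")
        self.available -= qty
        self.pending += qty

    def complete_purchase(self):
        # Pending units are now sold, so the reservation is cleared.
        self.pending = 0


inv = Inventory(available=5)

# Check 1: after adding to cart, qty is moved to pending status.
inv.add_to_cart(5)
assert inv.pending == 5 and inv.available == 0

# Check 2: if qty == available qty, other purchases can't be made.
try:
    inv.add_to_cart(1)
    raise AssertionError("expected the purchase to be rejected")
except ValueError:
    pass

# Check 3: after purchase completes, pending is cleared and
# available qty is correct.
inv.complete_purchase()
assert inv.pending == 0 and inv.available == 0
```

The point is not the toy model but that the checklist-style test steps translate directly into assertions, which is what makes this format handy to hand over to the automation folks.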
My own preference is to be as low-fidelity as possible. I like Chris’s approach of attaching notes on an as-needed basis, but in general I prefer to not include a huge amount of data. It just becomes unused, unread, low-value cruft. At least in my experience.
Generally I’ve worked with my teams to write out charters, either SBTM-style or the more terse bits from Elisabeth Hendrickson’s ExploreIt. If the testers feel it pertinent, notes/reports/comments from those explorations get saved as tasks or notes on the User Story/Task/whatever. (I’ve bounced between four or five different tracking systems at clients over the last several years. Hopefully you get the idea.)
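For reference, the charter template from Explore It! is deliberately terse, a single sentence with three blanks:

```
Explore <target area>
with <resources>
to discover <information>
```

For example: “Explore the checkout flow with expired cards to discover how payment failures are reported.” (The example is mine, not from the book.)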
Bugs that do get filed (as opposed to ones we discuss and fix right away and therefore don’t file bug reports on) get flagged as to where/how they were found. Again, depending on context that might simply be a tag or metadata tying back to Exploratory testing, or it may be a link/metadata back to the charter and its reports/notes.
Without knowing what is actually trying to be achieved by “formalising” the testing, it’s hard to come up with concrete tips. But in my experience there are two agendas that are common in this area: reporting/understanding coverage (as in, what has been covered by the testing) and tracing (as in, how was this tested). I would keep TFS or any such test management tool super lightweight if the purpose is coverage. And as Kinofrost says, attaching notes, either digital or analogue, works. Currently I am experimenting with a Rocketbook notebook, which is like a normal notebook except that you can erase the pages (with a wet cloth), then scan each page and upload it to a cloud location.
SBTM also aims to solve both.