Do exploratory tests need verdicts?

Hi all, first post on the MoT Club here, so be gentle!

So, I’ve recently been working in an Assurance role, testing against risks. It’s quite new to me, as my job is normally in verification.
I’ve found myself doing a lot of exploratory testing and using a lot of techniques from the Rapid Software Testing course (and I’m loving it).

But my question is: what do people think about reporting a verdict for exploratory tests?
We all know people loooove verdicts: “How many tests passed, how many failed?!”

I’m finding it works when the testing went well, i.e. the session is a PASS: no problems found and all the risks were controlled well enough.
But when problems were found, slapping FAIL over a whole exploratory session doesn’t really fit.

My manager is actually very good at sitting with me and going through my session sheets after the testing. But when it comes to a test summary report, I feel I need a verdict.

I’m toying with the idea of using “Report” as a verdict. Meaning: something was found here / something to take note of.

What does everyone think? Has anyone else had the same problem/thoughts?


First of all, welcome!

I think you’re absolutely correct in regarding pass/fail with suspicion. It’s a very neat way of summing up a great deal of complexity, but it’s woefully insufficient, easy to abuse, and can accidentally mislead people.

You’re also in good company, as Mr Bolton has written a series of blog posts on pass/fail and the testing story (which you may remember from RST) beginning here: http://www.developsense.com/blog/2012/02/why-pass-vs-fail-rates-are-unethical/

That will hopefully elucidate your concerns with pass/fail as a reporting mechanism.

The main point of what you do is to provide useful information to the right people so that they can make informed decisions. So if all the people who need informing about bugs, coverage, progress, or whatever else are getting that information, believing it, understanding it, and able to act on it, then I think you’re doing a great job.

If you’re feeling like exploration has no definitive end, that your stopping point is somewhat arbitrary, and that nothing really seems to get finished, then worry not, because that is normal. Testing is infinite, sometimes depressingly so, but completing an exploratory session means that at least the artifacts have a start and an end.

I like putting my start and end times on my notes, to give them a sense of finality and to help me spend my time wisely. I also summarise the bugs (problems in the product) and issues (problems in the project, including any questions) at the end of a session, which helps me to see the usefulness in my work. I also find it helpful to reflect on what I learned about the product. This gives me a sense of achievement and progress towards good-enough.
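If it helps to see that structure concretely, here’s a rough sketch of a session note as data. This is illustrative Python only; the field names are mine, not from any SBTM standard or tool.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of an exploratory session note: a start, an end,
# and summarised bugs/issues/learnings. Field names are made up.
@dataclass
class SessionNotes:
    charter: str                # the mission for this session
    start: str                  # e.g. "09:30"
    end: str                    # e.g. "11:00"
    bugs: List[str] = field(default_factory=list)       # problems in the product
    issues: List[str] = field(default_factory=list)     # problems in the project, incl. questions
    learnings: List[str] = field(default_factory=list)  # what I learned about the product

    def summary(self) -> str:
        """Render an at-a-glance summary for a debrief or report."""
        return (
            f"Charter: {self.charter}\n"
            f"Session: {self.start}–{self.end}\n"
            f"Bugs: {len(self.bugs)} | Issues: {len(self.issues)}\n"
            f"Learnings: {'; '.join(self.learnings) or 'none recorded'}"
        )

notes = SessionNotes(
    charter="Explore the checkout flow for risks around discount codes",
    start="09:30",
    end="11:00",
    bugs=["Discount applied twice when the code is re-entered"],
    issues=["No test data for expired codes – raised with the PO"],
    learnings=["Discounts are recalculated on every cart change"],
)
print(notes.summary())
```

The point isn’t the code, of course; it’s that a session artifact with a start, an end, and a short summary gives you something finished to debrief on, without forcing a pass/fail verdict.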

Hope some of that was helpful! There’s a lot in RST about reporting and the testing story. I also like this: http://www.satisfice.com/articles/how_much.shtml.


Wow, thanks a lot for pointing me at this blog, Chris!
There’s too much gold on Mr Bolton’s site, it’s hard to dig out all the gems sometimes!

The idea of producing test output in a newspaper style is fantastic.
I’ve not had the time before now to look into producing testing stories, but the second part of the article you linked, http://www.developsense.com/blog/2012/02/braiding-the-stories/, really helps with that.


Interesting. You have to agree with the developer, but not the customer/PO?
There are so many ways to go about it, so if that’s a good fit for your context, run with it. Good point re “no single person has a decision point”.

Is that not putting the developers in the position of decision point?
Ultimately you will get to a point where your feedback is “happy to release”, and then it sounds like they review and sign off on your work, approving it to go live?

That sounds like a great way to work, but do you mean that the dev/test team work together to ensure that, from both their views, this matches the user story from the product owner (who will have the final say prior to release)? Or are we still mainly talking about exploratory testing, so this is more at a prototype stage where the requirements are less defined?


I record my observations against the story in question and discuss them with the team when I stop exploratory testing.

We then decide what to do about each observation: whether we’ve got stories in the backlog that cover it, whether we can live with the observed behaviour, or whether it stops us from delivering the story and needs to be addressed.


Hi Stuart,

Welcome to The Club!

It seems that you’ve come across a similar issue to one I had when using SBTM reports for the first time. It bothered me that there wasn’t any kind of “conclusion” section where I could note any insight derived from the session, and I had a lot of questions around whether that would be left solely for the debrief.

You can read about my experiences with this here: http://www.cassandrahl.com/blog/chatbot-testing-with-sbtm-and-mike-talks/

As yet, I haven’t been able to try out these reports any further, or experience a “real” debrief, so I don’t have any answers on this… I appreciate that might not be all that helpful, but hopefully you’ll feel a bit better knowing that you’re not the only one who has wondered about this 🙂

Good luck moving forward and let us know what you find / decide to do.

Thanks,

Cassandra


Great topic, thanks for sharing.

I have used Pass/Fail for exploratory test sessions before, but to state “there’s a problem here” rather than as a metric.

I’ll have a read of Michael’s blogs on the topic as he’s always got great insight to share.


Hi Stuart,

I definitely agree that Pass or Fail doesn’t work with exploratory testing. It’s so far from being that cut and dried.

In my test notes I tend to end up with the key information being presented in three categories: Bugs, Issues & Questions. When it comes down to it, these represent the information required to make a decision.

That said, if people really want a more measured output, perhaps consider a rating system: every session/story/requirement ends with a score out of 5 or 10.

If it all falls apart, it’s a 1/5; if everything is amazing and flawless, it’s a 5/5. It’s more flexible than Pass vs Fail, yet still works for at-a-glance consideration. It’s also a concept that everyone is familiar with.
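Purely as a sketch of how that could look at a glance, here’s a toy example (made-up charters and numbers; the rating stays a human judgment, not something computed):

```python
# Hypothetical sessions: Bugs / Issues / Questions counts plus a
# tester-assigned rating out of 5, laid out for at-a-glance scanning.
sessions = [
    # (charter, bugs, issues, questions, rating)
    ("Checkout – discount codes", 3, 1, 2, 2),
    ("Search – filter combinations", 0, 0, 1, 5),
    ("Profile – avatar upload", 1, 2, 0, 3),
]

print(f"{'Charter':<32}{'Bugs':>6}{'Issues':>8}{'Qs':>5}{'Rating':>9}")
for charter, bugs, issues, questions, rating in sessions:
    print(f"{charter:<32}{bugs:>6}{issues:>8}{questions:>5}{f'{rating}/5':>9}")
```

Something like that keeps the Bugs/Issues/Questions detail available while still giving people the single number they crave.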


That’s exactly what I’ve done so far.

Luckily, I’m being asked to produce a presentation on what I did and what I’ve learned (as well as a standard test summary report).
So this is going to give me the opportunity to summarise things in a qualitative way, instead of a purely quantitative one.
I’m hoping it goes down well, so that I can try to make this style of reporting the norm, and maybe move towards the newspaper-style reports Michael Bolton talks about.
