You have one hour to test a new feature. What do you do?

The MoT account posted this question on Twitter:

And there are so many excellent answers, such as:

Set myself a mission; I already have my time box. Explore, take notes and see how far I get! I know it sounds scary as a question, but many small features can get great exploratory coverage in a one-hour session!

Ask what testing has been done already, what monitoring and tagging is in place, ask what the high risk areas are and what’s expected from the session and report. Prioritise pair-testing (with dev) the high risk areas, and areas where monitoring and tagging has been implemented.

Grab at least one of the developers who worked on it, brainstorm test charters together (time limit of max 10 minutes) and write them down. Decide which charter to explore first and pair-explore it. Five minutes before the end, write up a summary with bugs and open charters.

How about you, what would you do?


It depends heavily on the test object, but generally something along the lines you mention. Since I always strive to keep good contact with ALL developers in the unit I work for, I normally get up to speed pretty quickly.

One hour to test a new feature is kind of the dream job.


I’d try to find the person (or people) who knows the most about this feature and ask them to give me a quick overview; after that, I’d proceed with exploratory testing.


Assuming that I have access to a test environment with the feature installed, I would use some quick tours. Start with the Feature Tour to map out what seem to be the main features and uses of the product (this mainly helps me establish the product’s main revenue stream so I can prioritise what is important). Then hop on the Claims Tour to see what the product claims it can do (as suggested, we could talk to people to hear their claims about the product, but in an hour there is no time for that). Then take the Data Tour, as to me this is typically the best tour for deep diving. If time allows, next in line is the Cancellation Tour (which I do not think I will get to in an hour). If possible, I would do this together with a developer so they can observe and understand the bugs, and I don’t have to spend time exploring and documenting them.

As a bonus the Variability and the Configuration tour are also good candidates depending on the product and the feature.


My main question would be “Do you have a Happy Path workflow that the majority of users will follow?” From that it would follow that those functionalities on the Happy Path are the ones I’d look at first because any bugs there would be the ones users will be most likely to encounter. They are the ones with the highest risk of reputational damage to the product.

If there was time, I’d ask the developers where they thought there was the greatest risk of bugs or other problems and look there.


Bring the team together, tell them which feature needs testing, and tell them to prove it’s not broken in 30 minutes. If they don’t have access to environments, get them to pair up with people who do. I’d trust them to find anything, and the other 30 minutes would allow for collaborating with dev and finishing up with a final run-through…laced with a mass of assumptions about the size of the feature :wink:


This is fairly normal for me, pretty much every day.

I caveat the following with the assumption that I can install and actually test, so no time is lost on that.

There’s often a timezone gap between myself and the developer, so it’s fairly standard for them to give me a two-liner on the intent of the feature. But if they’re available, I’ll grab a couple of minutes of their time and get an idea of anything I should look out for, including potential regression risk.

Experience is often the main guide; rarely will I come across something new that makes me think ‘what on earth is this?’. They also tend to be end-user apps, so there’s generally an intuitive aspect to the feature that I automatically test for.

Knowing the intent, and likely having experience of something similar, I tend to jump right in: a first-look tour with a network traffic window open for any red flags.

Happy path, edge cases, notes on real-time risks to look at if there’s time, a second-device sanity check, regression.

Out of this I’ll often have maybe ten points I’d need a developer’s comment on, ranging from:

  • Clear bugs
  • Things I found odd
  • A couple of anomalies I noticed
  • Questions I need more info on
  • Potential risks I’d like a developer’s opinion on
  • Things I liked
  • Further testing recommendations, if required

There are a lot more detailed risk charters in my head, but as I run through them naturally in real time, they are often not written down.

I like one hour to test sessions, a lot.


Thanks for sharing, @andrewkelly2555.

Likewise, I think time constraints for exploratory testing sessions are an incredibly powerful tool.

I like how a time box encourages someone to stop, gather notes and debrief as soon as possible. Collaborate at that point on what risks to explore next. Define the next exploratory goal and run another session.

The rapid feedback/decision/action loop via a timebox is super powerful.


If there is only 1 hour to test a feature, then my aim would be to provide feedback as soon as possible.

First, I’d make a coffee - important to remain awake and alert.

Next, I’d create a quick list that identifies:

  • What I know about the feature (if anything)
  • What I don’t know about the feature
  • How I’d expect the feature to work

I’d then do some exploratory testing of the feature, with the aim of confirming it works the way I expect and discovering what I don’t know about it.


I’m assuming this is a piece of software/website/app (I’ll use “software” from now on) that I’ve been testing and am generally familiar with, and that a feature has been added and tossed to me to examine.

As an exploratory tester, the first thing I’d do is grab a mug of tea because the next hour will be frantic!

I’d start to explore by simply using the feature as I believe the end user would. If it’s obvious how to use it, then fine; but if not, we have a potential UX issue. Is it just me? To make sure, I’d make a note that we may need some beta testers to give honest feedback about the affordance of the feature.

As I work through the feature, I’ll be making notes about all the options and alternative paths I can take, any input fields, if there’s a breadcrumb trail, any confirmations that pop up, etc. Unless something is obviously wrong, I’m not looking to find problems at this point; merely mapping the territory and noting what it contains. I just want to know what it is supposed to do.

I also want to know about the logical states the feature seems to embody, what triggers the transitions between states, and how they flow together. I also want to know what conceptual entities are apparent, and what attributes they contain. This is all to check consistency later, and also to see if states and entities can be manipulated during reasonable use into becoming in some way unreasonable.

I also need to know if the feature can be used as a guest, or if the user must be logged in. These are important meta states. Is the feature different for guests and logged in users? Should it be? Can I only do things as a guest that are logical for a guest to do, and vice versa?

Assuming all is well, I will now revisit the feature from scratch armed with what I know. I’ll use the feature again, but initially, I’ll seek to explore how it knits into the rest of the software to see if there are any problems there, such as data format or validation issues.

Next, I’ll check basic input field validations and so on, just to see if the feature catches them and if anything obvious pops out. I’ll also methodically click all the links to find dead ones, or ones that go to the wrong place. People find all this stuff tedious, but I find it exciting to be the first to discover a 404. It’s all surface stuff, but it only takes a few minutes.
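That link sweep is also easy to partially automate. As a toy sketch (using only the Python standard library, against a made-up page), you can collect every href on a page and then request each one to check its status code:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag encountered on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A made-up page for illustration; in practice this would be fetched HTML.
page = """
<html><body>
  <a href="/checkout">Checkout</a>
  <a href="/help">Help</a>
  <a>No href here</a>
</body></html>
"""

collector = LinkCollector()
collector.feed(page)
# Each collected link would then be requested and its status code checked for 404s.
print(collector.links)
```

It only finds static anchors, of course; links built by JavaScript still need the manual click-through.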

I’ll want to see what happens if I do unexpected things that the user might reasonably do because they’re a user. I might click the browser’s back button or the device’s back button, for example. You’d be surprised how many pieces of software I’ve tested that can’t handle simple yet unexpected state transitions like that – especially during lengthy searches or database updates over bad connections where the user becomes impatient!

If there’s a breadcrumb trail, I’ll methodically click elements of it at different stages of the flow through the feature to jump to potentially unexpected places. Where do they take me, and what’s the functional result? Does one jump re-run a database search, but with the wrong or no data, leading to an error? Do the cookies become corrupt, leading to an error? Does functionality get retriggered inappropriately? (I recently tested a site where the happy path saw customers having a necessary base system automatically added to their cart, and then adding options to it, but jumping back to the wrong place in the breadcrumbs added a second base system to the order, and a third, and so on.)
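That cart bug is a missing idempotency guard: re-entering the step that adds the base system should not add it again. A toy sketch (hypothetical `Cart` class, purely for illustration) of the check that was missing:

```python
class Cart:
    """Toy cart: the base system must only ever be added once,
    no matter how many times the step that adds it is re-entered."""
    def __init__(self):
        self.items = []

    def add_base_system(self):
        # Idempotency guard: without this check, every breadcrumb jump
        # back through the flow would append another base system.
        if "base-system" not in self.items:
            self.items.append("base-system")

cart = Cart()
for _ in range(3):  # simulate jumping back through the breadcrumbs three times
    cart.add_base_system()
print(cart.items)
```

The test for the tester is the loop: repeat the step and confirm the order still contains exactly one base system.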

Let’s assume everything looks OK, the feature is basically robust, and does what it says on the tin.

It’s time to be a complete A-hole. Field validation triggers when I click away from a field, but what if I tab away? What if I use the browser’s back and forward buttons, discover the field is still filled when I return, and then click another field? Is validation still triggered, or have I got invalid data into that field? What if I copy and paste invalid data? What if I must input my name and insist it is Jon O’Malley? Can the validation handle my proud Irish heritage? What if I don’t have a mobile phone? What if I prefer you to contact me on a landline at work, with the wrong number of digits or some invalid characters involved because there’s an extension number?
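The O’Malley case is worth pinning down as assertions. A minimal sketch (the patterns and `is_valid_name` helper are hypothetical, for illustration only; real rules would come from the spec) of why naive name validation fails real users:

```python
import re

# A naive rule that rejects apostrophes and hyphens outright,
# and a relaxed rule that accepts names like O'Malley or Smith-Jones.
NAIVE_NAME = re.compile(r"^[A-Za-z ]+$")
BETTER_NAME = re.compile(r"^[A-Za-z][A-Za-z' -]*$")

def is_valid_name(name: str, pattern=BETTER_NAME) -> bool:
    """Return True if the whole string matches the given pattern."""
    return bool(pattern.fullmatch(name))

# The kind of checks I'd poke at by hand, written down as assertions:
assert not NAIVE_NAME.fullmatch("Jon O'Malley")  # the naive rule fails a real name
assert is_valid_name("Jon O'Malley")             # the relaxed rule accepts it
assert not is_valid_name("")                     # empty input is still rejected
```

The same idea extends to the phone field: write down a landline with an extension as a test input and see which rule rejects it.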

What if I double click a button by mistake? (I recently locked myself out of a travel site by doing that on a search button – wall-to-wall application errors ensued instead of the homepage until I cleared cookies!)

Whoops! Time to pick the kids up! What if I suddenly close the software while in the feature and try to come back to it later? Oh no, I’ve gone into a tunnel and lost connectivity! What happens when I get it back?

And so on and so forth until the clock runs out.

Hopefully you can see that the way I’d approach this as an exploratory tester makes every shift in emphasis a little mini-charter, covering a logical aspect of the feature. Every little series of tests either passes, exposes a “That’s odd” moment to explore further, or highlights a problem to report. Done this way, you can organise things to get maximum coverage in the time allotted with minimum time lost to planning.

This is obviously a hypothetical answer to a hypothetical question, but it felt good to get it down on paper. I think a t-shirt is called for: “I’m not crazy. I’m an exploratory tester!”


You have one hour to test a new feature. What do you do?

Say loudly, “BRING COFFEE!!”