How do you usually approach testing when the requirements are vague or incomplete?

From my experience, it’s good to use intuition, but you also need to ask the team involved in that project or product (especially if you work in a cross-functional team). Find, collect, and document all the information you have so it’s well documented and can be used not just by you but by every stakeholder. The goal is clear, detailed knowledge of the requirements.

Love this. The way you broke it down is such a practical mindset. That bit about observing how software is not supposed to work really stood out to me.
Spotting things like unhelpful error messages or UI glitches often reveals more than expected. Thanks for sharing this.

Agreed, intuition is great, but combining it with solid communication and documentation takes things to the next level. I like how you emphasized gathering input from the whole team and turning scattered info into shared knowledge. Thanks.

Can’t believe I missed this thread! We recently tested a POC with that exact problem. Because it was a POC there were no requirements or acceptance criteria, just a business brief describing the type of product we wanted to trial, to see if we could utilise the data we already had domain knowledge of.

So I searched the club for techniques and found What tools and things do you use to help you with exploratory software testing? - #52 by thomjr. Rather than seeing it as testing, we saw it as a feedback loop and used the PQIP technique, so testers could use their intuition while giving the product, design, and dev teams insight. The whole test team spent an hour reviewing the POC and documenting findings in Confluence using a PQIP grid. That really worked.
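For anyone who hasn’t come across PQIP, it stands for Problems, Questions, Ideas, Praise. Here’s a rough sketch of the grid shape we used (the entries below are made up for illustration, not from the real session):

| Problems | Questions | Ideas | Praise |
| --- | --- | --- | --- |
| Export button does nothing on an empty dataset | What date range should the dashboard default to? | Show the data source next to each chart | Filtering feels fast and intuitive |

Each tester adds whatever they notice under each heading, and the finished grid becomes the feedback artefact for the product, design, and dev teams.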

Now, if you’re testing a productionised product feature that lacks clear requirements, I think PQIP would still work; the feedback loops just need to be quicker.

I often prefer light requirements to detailed ones, alongside questioning, discussion, and exploration to fill the gaps.

My main area, mobile apps, suits this well: there are often at most ten key flows, and being intuitive is an important test area.

Bigger apps, for me, tend to need more requirements.

I’m flagging, though, that because my bias is towards this light-requirement model, I occasionally miss some very basic detailed elements in requirement documents. I’ve learned that the speed of testing and finding important things can outweigh the detailed approach, but as a consequence it carries a risk that I might miss some basics. For example, with an exactly specified warning paragraph, my natural testing would pick up whether the intent and usage were correct, but I might miss an exact character-for-character match.

Very much in the same vein as @simon_tomes, I say EXPLORE and find out what’s there so that you can present it back and ask “is this good enough?”

Why do I do it that way? Because people have more opinions about the real than the hypothetical… you have to show them what’s going on. Think about the times when you ask people what they want to eat. They say “ohhhh, I’ll eat anything”, and as soon as you say “let’s get pizza” they suddenly have an opinion and don’t want that.

It’s the same with uncertain behaviour. Coming with clear examples of “is this good?” elicits better opinions than asking “what should this be?” in the abstract.

I lean on my previous experience with the product and how the new features match it.
I present my findings and discuss them with others so that we iteratively find out how good the product is and what needs to change.

Crap. I described basic testing, agile and Inspect & Adapt.

Written requirements can be a helpful tool, but they are just a snapshot of the moment they were written.
They can be faulty right from the start and quickly become outdated when nobody updates them.
Having a shared vision with the relevant people is key. Requirements are just artifacts of that.

And written requirements only reach a certain depth. There are far more details that need to be discussed with people than can ever be written down.
Think of written requirements as an aid to remembering things you discussed in more detail, rather than as hard boundaries.

There’s always an “oldest person on the project” who knows what happened and why. You can use them as your first oracle.
Next up are the team’s chat tool and project management tool; if you dig into them smartly, you will get good information out of them.
Lastly, as a lot of people here have said: ask questions! But you have to be careful that the people being questioned don’t find it “nagging”.
So it’s better to write down questions and assumptions and then discuss them all together.

What if that has become you? :flushed_face:

Then you’re the oracle and must find he who is the chosen one. (Matrix reference) :sweat_smile:
And then maybe learn how to bake cookies too :joy:

OK, even more vague: how do we estimate the effort of testing something where there are no written requirements (other than that it’s a like-for-like replacement)?
The functionality is known to the business team but not to the test team.
A list of high-level functionality is provided, but it’s very sparse! I feel an estimate could be given based on lots of assumptions and caveats, but what do you all think? What information would you need as a minimum?