Requirements Coverage on Tests

Hi All,

Big debate in our test team last week about the point in Test Design at which you can “link” Requirements to a Test.

The advice we were getting from our Test Manager was: “You can’t link Reqms to a test until you’ve written the Test Steps on the test, and only then can you claim the Reqm is covered”, i.e. you can’t link Reqms until you’ve fully completed the test design process for each and every test.

I disagreed & said that Reqms can be linked at the start of Test Design, once you’ve completed Test Analysis, because during test analysis you’ve already considered which tests you’ll design based on the Reqms from the test basis.

Therefore, once I know which Requirements are being considered, I can progress to creating test cases & test objectives that will cover the Reqms, and then continue with test design, including writing the steps and expected results that will verify the Reqms.

If the test case lifecycle is broken down as Information, Activity & Results, then linking Reqms is, in my opinion, part of the Information element in the lifecycle of a test case.
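(As a rough illustration only, using made-up names rather than any particular tool’s model, that breakdown might look something like this, with the Reqm links sitting in the Information element before any steps exist:)

```python
# Rough sketch of a test case split into Information / Activity / Results.
# All names here are illustrative, not taken from any specific test tool.
from dataclasses import dataclass, field


@dataclass
class Information:
    test_id: str
    objective: str
    linked_requirements: list[str] = field(default_factory=list)  # e.g. ["REQ-101"]


@dataclass
class Activity:
    steps: list[str] = field(default_factory=list)  # may still be empty at this stage


@dataclass
class Results:
    expected: list[str] = field(default_factory=list)
    actual: list[str] = field(default_factory=list)


@dataclass
class TestCase:
    information: Information
    activity: Activity = field(default_factory=Activity)
    results: Results = field(default_factory=Results)


# Requirements linked up front, during test analysis; steps filled in later:
tc = TestCase(Information("TC-001", "Verify login lockout", ["REQ-101", "REQ-102"]))
```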

I don’t think either approach really matters in practice, but being told a test has to be completed in full before I can link Reqms doesn’t seem correct.

Thoughts please …

9 Likes

Perhaps it would help to look at the issue from a broader perspective. As someone who has written many requirements documents, I learned from testers’ comments that a requirement that can’t be tested can’t be verified. So maybe an untestable “requirement” is really an objective or a principle of design: if it can’t be measured objectively, it won’t have test cases. Thus, testable requirements have a link to potential test cases.

So the “big debate” sounds to me like whether to link the requirement to a chicken (a clucking test case) or to an egg (the potential for one). I’d suggest the link changes during the life cycle to meet the needs of the project. And the process should allow for evolving detail and knowledge.

In your case, what’s at stake here? Does anyone’s work change by where and when the link is established? Is this a reporting issue with political visibility? Is this debate worth the time?

5 Likes

I agree with what @hattori says - I’d suggest that you link requirements to test designs when that’s all that’s available, and then maybe link to test steps as well / instead when they become available.

But I would also echo what @hattori says about testability, and also the ability to link to test cases (which isn’t necessarily the same thing). How do you test a requirement such as “It must be easy to use”?

Also, do you have standing requirements, that are just there in the background all the time, that apply to all bits of work? For instance, that UIs follow style guidelines and are accessible etc. Do you need to test for that in new work? If so, does that mean there need to be test cases? If so, what are they linked to? I.e. just as there might be requirements that have no tests linked to them, are there tests not linked to requirements?

I guess the point I’m driving at is: why is there an explicit link between requirements and tests? Is it because it’s helpful, or because a report will show up red if number X < number Y? I think that’s the more important question, and it should be addressed first; then the details of how requirements and tests are linked might become clearer. E.g. what is the most helpful way to link requirements and tests? (How can the most value be generated from that, for the least cost?)

4 Likes

It was exactly conversations like this that pushed me at work to devise a “Requirements Testability Assessment” process: the test team takes each requirement and identifies whether they can create at least one high-level scenario that satisfies it. This gave an early indication of how many of the requirements could initially be mapped to tests, and also showed which requirements needed more work.

It sounds more formal than it is, but it does require early communication between Testers/BAs and others, and it can help ensure there is a solid mapping between requirements and tests early on.
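As a rough illustration of the kind of output that assessment gives you (the requirements and scenarios below are invented example data):

```python
# Minimal sketch of a Requirements Testability Assessment:
# for each requirement, has the test team named at least one high-level scenario?
requirements = {
    "REQ-001": ["User can log in with valid credentials"],
    "REQ-002": ["Account locks after 3 failed login attempts"],
    "REQ-003": [],  # no scenario yet - needs a conversation with the BA
}

testable = {req for req, scenarios in requirements.items() if scenarios}
needs_work = sorted(set(requirements) - testable)

print(f"{len(testable)}/{len(requirements)} requirements have at least one scenario")
print("Need more work:", needs_work)  # ['REQ-003']
```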

5 Likes

Non-functional requirements are the ones that have totally thrown me as well. You have to decide whether or not to automate with those requirements in mind, or whether to design tests based on a risk matrix instead and just ditch poorly written requirements docs.

I’m working on a new product right now, and the requirements are really hard to design tests for, mainly because of rapid scope changes (and because I’m not familiar with the tech stack). I am taking an approach of making iterative passes over the requirements, but prioritising testing (manual and automated) of whatever is ready to test now. Because queries raised on requirements take too long to resolve, it’s often easier to test what you have in front of you, and then step back every so often.

3 Likes

Some other great points already made.

I agree with the sentiment of understanding why you are linking requirements, and making sure you are satisfying that need.

My approach to test analysis and design, generally speaking, is to iterate. I use steps something like:

Explore, Capture, Review, Update

At any point in the cycle, if I think it’s useful to make or break a link, I’ll do so.

It can be useful to know, if you are producing test cases, whether there are unlinked requirements or requirements with many tests.

As a metric, it shouldn’t be used alone or in isolation. In context, it can help you understand areas that might need further analysis, or areas that have had the lion’s share of the focus.
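For example, a quick way to spot those areas from a requirement-to-test mapping (the data below is invented; the shape of the export will depend on your tool):

```python
# Sketch: flag requirements with no linked tests, and ones attracting the lion's share.
from collections import Counter

links = [  # (requirement, test case) pairs, exported from whatever tool you use
    ("REQ-001", "TC-01"), ("REQ-001", "TC-02"), ("REQ-001", "TC-03"),
    ("REQ-002", "TC-04"),
]
all_requirements = ["REQ-001", "REQ-002", "REQ-003"]

tests_per_req = Counter(req for req, _ in links)
unlinked = [r for r in all_requirements if tests_per_req[r] == 0]
heavy = [r for r, n in tests_per_req.items() if n >= 3]

print("Unlinked requirements:", unlinked)      # ['REQ-003']
print("Requirements with many tests:", heavy)  # ['REQ-001']
```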

4 Likes

I think your test manager is wrong. You must know the reason you’re writing the test; you don’t write tests for no reason. Every test proves something. At some point you should have read the requirements and had a chance to discuss them and decide whether they’re even testable.

From this you’re writing tests to cover a requirement. So, you’re correct, and it’s how I would do it.

I also very much work from my head: I write a bunch of high-level scenarios based on the requirements. It’s unlikely now that I’d write steps, as I’d never follow them line for line.

If I find that I haven’t covered a requirement then I’ll add that, and if I’m testing and think of something else, I’ll add that too.

It’s a whole lot easier if your requirements have acceptance criteria: then you can write to those, as they define what you’re proving with the tests.

I guess the manager’s way of writing is very old school: fully document everything. I’d prefer to just use the software and find out about it. At previous jobs where I was with the Devs, I’d just explore, compare to what it should do and document that. This isn’t what you asked, though.

4 Likes