What is the key element you include when building a test strategy?

Thinking about Huib Schoot’s TestBash Detroit training, The Magic of Sherlock Holmes: Test Strategy In The Blink Of An Eye!, made me wonder: what is the one thing you consider when building your test strategy? So I asked on Twitter and LinkedIn and got some great answers:

Collaboration :fist_right: :fist_left:

What problems are we solving? Why are we solving them? How are we solving them? Communication is key, so that the important people understand.

Team availability

Long term reusability of the test suite and time for exploratory testing.

Risk, team capacity, team skillset, testers’ domain knowledge and technical skills, and finally the timeline and any other concurrent projects, if they exist.

Use cases

A clearly defined goal, regularly reviewed, to focus all the team’s actions and energy on fulfilling it.


It’s easy to say customer flows, and they are vital (after all, who else are we building for?), but as time goes on? I’d say keeping performance in check is absolutely essential.

Flexibility, collaboration and innovation. Customise your strategy. Throw away the rulebook and mindmap your unknowns. Be prepared to shift your original mindset.

Exit criteria - otherwise, how do we know when we are finished?

What would you add to the list? What do you consider when planning your test strategy?


Great questions, @hellofrommot!

I always include unit testing as a part of the Testing Strategy.
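As a minimal illustration of what that expectation looks like in practice, here is a sketch of a unit test in Python's `unittest` style. The `discount` function and its pricing rule are entirely hypothetical, invented for this example:

```python
import unittest

def discount(price: float, is_member: bool) -> float:
    """Hypothetical pricing rule: members get 10% off."""
    return price * 0.9 if is_member else price

class DiscountTest(unittest.TestCase):
    def test_member_gets_ten_percent_off(self):
        self.assertAlmostEqual(discount(100.0, is_member=True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertAlmostEqual(discount(100.0, is_member=False), 100.0)

if __name__ == "__main__":
    unittest.main()
```

Naming unit testing in the strategy document doesn't mean the strategy lists tests like this; it means the team agrees that code at this level of granularity has tests before the testers ever see a build.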


Testers’ level of autonomy and the reach of QA.

  • The level of autonomy means what a tester can do alone. I love to give individual testers more autonomy than others would, so that we both learn. Testers are creative people: if they are going to fail, they will fail in an interesting way, and we can both learn from that.
  • The reach is a must if your project uses other projects within the organisation. Boundaries (or the explicit lack of them) need to be set. This also helps narrow your focus, because you can treat part of the project as tested by the other team and just check the integration.

NOTE: Many will claim you should still test “everything”. Yes, if you don’t trust the people you work with - but that is a special scenario for another, more managerial, topic.

EDIT1: By “everything” in the NOTE’s “test everything”, I mean testing, to some degree, every module, library, etc. that you are using. Which, again, is not your job, but the job of those who provide the library.


In the long term, it doesn’t matter if you are continuously improving if the people competing for your users’ attention are improving faster.

I never do. Or more precisely, I pay lip-service to unit testing. In my test strategy documents, I usually say, “Yep, we do unit testing. Look in the design specification for details.”

In my most recent test strategy, this was expanded to, “When outsourcing our coding tasks, make sure that they do unit testing. Otherwise my life would be more difficult.”

(Which I suppose is the difference: most of the software I test these days is outsourced, so unit tests are not part of our domain.)

Oh dear, I often have a grand plan that requires buying hardware that will give us coverage or more accuracy, and then it never materializes. Or it sometimes turns into a dinosaur. My biggest failure is not scoping out enough time to do post-release testing at the one moment when the bug injection rate and the churn have calmed down.

One key thing I have been doing well lately is delegating testing to the developers, in an effort to stop them from adding code right before release. That way I get the testing story points added to the sprint as well.

Hello @brian_seg!

I agree ownership is a consideration.

To provide some context, I work with teams where the products are developed and tested in-house. By having unit testing as a part of the Test Strategy, I set expectations with the project team and I support our enterprise goals towards CI/CD.

Where product development is outsourced, it is much harder to get good, well-documented tests and test results. That challenge includes methods of collaboration, time zone differences, coordination of development, and supporting infrastructure.
Even when those challenges are addressed, a unit test designed by a developer may not be as powerful or valuable as one designed by a tester; this usually adds time to the project in the form of reviews and clarifications.


It’s not possible to test “everything” unless you’re dealing with a simple single-path application. Most modern (and most legacy) software is built in a way that allows unlimited movement through the various modules. It’s usually possible to test each module. It may be feasible within the time constraints to test each module with each configuration setting. It’s not possible to cover each possible pathway through the system.

The analogy I use is a small town. It’s relatively easy to work out a way to get from any point in the town to any other point in the town. But someone could loop around one or more blocks any number of times for whatever reason they choose. They stop to refuel their vehicle. They detour by the store to pick up something to eat or drink. They decide to use the scenic route and take three times as long as you’d expect because they’re looking at the view.

Users can and will do the equivalent thing with software. The number of possible routes through any non-trivial application is effectively infinite because users can cancel out of an action and come back later to do it properly (and keep canceling and returning depending on how many times they get interrupted).
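To put rough numbers on that explosion (the figures below are illustrative, not from any real product): even a handful of screens, each with a few possible states, multiplies quickly, and once users can revisit screens in any order, the number of distinct journeys grows exponentially with journey length:

```python
# Illustrative back-of-the-envelope figures, not real product data.
screens = 10           # independent screens/modules
states_per_screen = 4  # configurations per screen

# Combinations of screen states (ignoring the order of visits):
state_combinations = states_per_screen ** screens
print(state_combinations)  # 1048576 combinations for just 10 screens

def journeys(n_screens: int, length: int) -> int:
    """Ordered journeys of a given length through n screens, revisits allowed."""
    return n_screens ** length

# Allowing cancel-and-retry means journeys can be arbitrarily long,
# so "every path" is effectively untestable:
print(journeys(10, 6))  # 1000000 distinct six-step journeys
```

This is, of course, a crude model - real applications constrain which screens are reachable from which - but it shows why path coverage is an unreachable target even for modest systems.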

“Test everything” is a lie. Do you test that every printer on the market prints your page correctly, or do you test that the print preview shows what you expect? I know I do the latter unless there’s a reason that I need to send the page to the printer.

p.s. Apologies for the rant. It’s not aimed at you, but at those who say “test everything” and mean it.


I don’t mind the rant. I mind the fact that I seem to have expressed myself badly. What I meant by “test everything” is: test every module you use. Sometimes you even have heroes who ask whether you have tested some third-party library (e.g. typesafe.config) you are using. That is what I meant.

I will correct myself above. Thank you.


Oh, that makes a lot more sense.

Sometimes you have to test third-party libraries or interfaces, just so you can confirm that your software handles them correctly. Payment processing handlers are a pretty good example of this: often the processor will require you to certify with them so there’s a record that your application is one of their clients (and so there’s a defined process to handle returns).

That said, if you don’t need to test a module, I don’t see any reason why you should. Making that decision means knowing whether or not any of the code in that module has changed - which isn’t always as clear as it might seem.

A few of my jobs have involved hardware. Testing everything changes its scope a little there.

You really do have to have a clear conversation about what you don’t test, but you also have to do a lot more negative testing (non-destructive) than with other interfaces, because hardware can create a security risk, so those areas often get more coverage. Getting good coverage involves a lot more communication than you’d think, even when the hardware looks simple, as it often does.


Interesting. How did you get buy-in from the business to devote testing activities to the development team? Did you already have a good working relationship with management before shifting testing upstream? I find this topic very applicable, given that every team I’ve been on has had very few testing resources and is always seen as a bottleneck.