Discussion: Test Strategies

Later today @testerfromleic hosts an excellent discussion on Test Strategies. He’ll be joined by @jesper , Barry Ehigiator, and @annee.

There’ll be an excellent set of questions, and typically we won’t get time for all of them to be answered. However, we’ll add any questions we didn’t get to here for the discussion panellists to answer. Plus I’ll share links to all the things shared. :ninja_blue:

And if you’d like to continue the conversation, this thread is an excellent place to do that. Share resources and follow up with success stories from your learnings! :chart_with_upwards_trend:

A recording of the discussion will be available for all Pro members. Look out for it on the Discussion Page in the coming days. :movie_camera:


Questions answered

  • What’s the goal of a test strategy and what does it enable?
  • When you are about to create a test strategy for a new feature/project, do you use a checklist, template, or any kind of reusable artefacts to make sure you don’t forget anything?
  • What would a bad test strategy look like? I find that studying a bad example, and avoiding it, helps me create a good one.
  • What would you say are the main differences between a test plan and a test strategy?
  • How do you motivate product or project teams to create a test strategy?
  • What are some of the questions you ask yourself while thinking about a strategy, and what are examples of strategies you have chosen and why?
  • You have a perfect (if there is such a thing) strategy, then the project changes, then the scope changes, then you discover things. How do you cope, how do you manage? How do you react when you realise you didn’t consider something impactful e.g. security, accessibility, performance?
  • I don’t have a lot of experience in software testing, and I’ve just joined a totally new project where development is still at the stage of drawing up different schemes. What can I do for the team as a manual tester in terms of test strategy? Where should I start?
  • Could you post the links to the visual test strategies mentioned? @jesper to kindly add this :slight_smile:
  • In a scaled agile setting: how do you establish aligned autonomy between the test policy/test strategy at the Scrum team level vs. the program/release train level?

Items shared during the discussion


Unanswered questions

  1. How valid is it to change the test strategy during a project? Is that normal and valid in agile methodology?
  2. Do you often find that the actual testing you end up doing deviates quite a bit from the initial strategy? If so, do you go back and modify the strategy based on what you actually did, or is it more of a throw-away document in the end?
  3. I am working on a project that is three-layered. I am only testing the backend, which is Salesforce. The web and middleware layers are looked after by another team. If I want to write a test strategy, would I include things for the web and middleware even though I am not responsible for testing them?
  4. What is the strategy for managing test cases noted down while doing exploratory testing? Should I use a test case management tool or Microsoft Excel? Is there any test case management tool you would recommend?

Visualizing the pipeline, as described by @lisa.crispin & @ahunsberger, can help you find all the places from idea to deploy where the story can be tested. I think this model could work for non-DevOps deliveries too – testing can add value everywhere, and there’s more to testing than gatekeeping.

You can read more about it in Chapter 2 – What’s In Your Pipeline? on Test Automation University.

I have an idea for another, similarly imperfect, approach to modeling a test strategy. More about that later :wink:


Yes! In an agile setting, preferably after a retrospective. When the strategy is part of contract work it’s more tricky – but still very much required. E.g. if the delivery approach is being changed, so should the test approach.

In my opinion, the strategy is the guidance for something, not the practical details. Usually the guidance sticks, while the practical details are more of a throw-away in the specific delivery. Often the guidance on e.g. a risk-based approach can be repurposed for the next delivery.

Relationships like that matter a lot, as do integration points broadly speaking. Stating what you have control over (and, sometimes, what you don’t) is a key observation.

A great test strategy would indeed include the balance between what is covered by exploratory testing and what is not. Preferably both are represented in some tool, along with all other tests: automated, scripted, etc.
Recommended tools are often discussed under #tools for instance here: