Ask Me Anything: Test Strategies

That is something I would probably call a “Test Policy”. A Test Policy, in my experience, is a document in which companies (often big ones) try to explain what the purpose of testing is and what teams should adhere to in their way of working. In my experience companies write it but teams never really look at it. It is often on such a general level that even audits find no deficiencies. I ask myself: why do we need those documents if nobody really reads or uses them? Another thought that pops up in my mind: do developers or managers have a document that explains what the purpose of development or management is?

A Test Strategy is a solution to a complex problem: how do we meet the information needs of our team/project & stakeholders in the most efficient way possible? A Test Strategy is “a set of ideas that guide your test design or choice of tests to be performed”.

Rikard Edgren says: “Test strategy contains the ideas that guide your testing effort and deals with what to test, and how to do it. (Some people say test plan or test process, which is unfortunate…) It is in the combination of WHAT and HOW you find the real strategy. If you separate the WHAT and the HOW, it becomes general and quite useless.”

I cannot imagine that you would want to define a Test Strategy at company level, unless you are talking about testing the whole landscape of applications and their interactions and flows in chain testing. In that case my Test Strategy would probably contain the items I mentioned in question 1.

I think this topic was covered in the AMA, but let me still answer it, since it is an important topic and I would like to add a few things. Involving people is often a matter of getting their interest, so talking in tester slang is not going to help you.

So do not say stuff like:

We will create 17 test cases in the system test and automate 50% of those test cases for area C, so we will have 30% code coverage. Last time in this area we found three major and five medium bugs, so I want to add 3 FTEs to do orthogonal pairwise testing. To mitigate the risk of this calculation failing, we should do equivalence class analysis, use self-verifying data and do elementary comparison testing!

Also, talking about stuff that is too general is boring because it does not add a lot of value. Talking in too much detail is also boring for many people, because they want to talk about the stuff that matters to them. So I suggest talking to different stakeholders in different settings.

Alex Schladebeck and I did a keynote about how to talk about testing and quality called “Let’s stop talking about testing, let’s start thinking about value”. Check it out: although it is not recorded, we wrote a blog post about it with the main points.


First premise: let’s distinguish between a strategy and a strategy document. The specific nature of that difference seems to have been missing from all of this discussion so far. A test strategy (for anything, including for an API) is the set of ideas that guide your choice of tests. (A test strategy document is some document that represents these ideas to some degree. A strategy is not a document.)

Second premise: you have choices in testing; everything that you do in testing represents a choice to do some things, and choice not to do other things. By “choice of tests” here, we don’t mean “selecting test cases”; we mean the choices you make about risks you want to investigate, quality criteria you want to consider, techniques you can apply, aspects of the product that you want to cover, data that you might select…

Third premise: to guide is to influence, but not to determine or to dictate. Elements of your context will enable or constrain your choices; so will your skills and your mindset.

Fourth premise: as a heuristic, for any given X, “X testing” is “testing focused on risk related to X”.

Now: if you are even considering “API performance testing”, you already have a strategy; that is, you already have a set of ideas that might guide your choice of tests. That set of ideas may be vague or confused, or it may be sharp and rich and coherent, but you’ve got at least two notions in your head: “API” and “performance”.

The API is an interface. It’s a means of getting at functions in a product. So in one sense, you’re not testing the API; you’re testing the product, using the API. To test the product, you need models of the product. On the other hand, you can also think of the API as a product in itself (someone produced it). To test that product, you need models of that too.
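
To make that distinction concrete: here is a minimal sketch of both models in Python, assuming a hypothetical GET /users/{id} endpoint (the URL and field names are invented for illustration). The first check uses the API to test the product’s behaviour; the second treats the API itself as the product and checks its contract.

```python
import requests

BASE_URL = "https://example.test/api"  # hypothetical service, for illustration only

def test_product_via_api():
    # Model 1: testing the product, using the API as a means of access.
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)
    user = response.json()
    assert user["id"] == 42  # does the product give us the user we asked for?

def test_api_as_product():
    # Model 2: testing the API as a product in itself. Does the interface keep its promises?
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
    assert {"id", "name"} <= response.json().keys()  # contract: fields consumers rely on
```

Both checks hit the same endpoint, but they come from different models, and different models point you at different risks.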

“Performance” is a word that represents a category of ideas that people might value in a product; it’s a quality criterion. Performance might include notions of speed; responsiveness; capacity; reliability and robustness under load, or under stress, or under some kind of constraint.

The idea of interface as an element of the product and performance as a quality criterion can be found in the Heuristic Test Strategy Model (https://www.satisfice.com/download/heuristic-test-strategy-model), which provides us with a set of ideas about how to think of strategy generally.

That leads us to key questions that guide testing related to performance, via the API:

  • Do we know what the customer wants? How big a deal is performance as part of that? (Maybe it’s not a big deal, and we don’t have to do a ton of performance testing.) Do customers use the API? Which customers? Or is the API simply a means that developers use (and that I can use) for easy access to aspects of the product for which performance is critical? Have we included such things in the design of the product and of its API?

  • As the developers are building the product, are they building it with performance in mind? As they’re building it, is the performance of the product what they think it is? What testing are the programmers doing? Are they reviewing the design and the code for performance-related issues? Do the low-level checks that they’re doing include some focus on timing, or on stress? Are there specific functions, accessible through the API, on which they are doing performance analysis? Are they well-documented?

  • What do we need to do to prepare to test for performance-related problems? What tools do we have? What tools might we need to develop? Is there logging built into the product? Do we need to write code and use parts of the API for deeper analysis? (See the sketch after this list.) Have we prepared representative environments and representative data to model real-world performance?

  • How might we explore or experiment with the product to examine performance-related risk? How might we stress the product to extremes? How might we starve the product of things that it needs?

It seems to me that examining and exploring these questions will help to guide you to develop a set of ideas for testing performance.
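
As a concrete illustration of the tooling and stress questions above: here is a minimal sketch in Python, assuming a hypothetical endpoint (the URL and numbers are invented for illustration). It times a single API call and then fires a small concurrent burst to probe behaviour under light load. It is a quick probe for raising questions, not a substitute for a dedicated load-testing tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.test/api/users/42"  # hypothetical endpoint, for illustration only

def timed_call():
    # Measure the wall-clock response time of one API call.
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start

# Baseline: is a single call as fast as we expect?
status, seconds = timed_call()
print(f"single call: status={status}, {seconds:.3f}s")

# A small concurrent burst: does response time degrade, or do errors appear?
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(lambda _: timed_call(), range(100)))

times = sorted(elapsed for _, elapsed in results)
errors = sum(1 for status, _ in results if status >= 500)
print(f"100 calls: median={times[50]:.3f}s, worst={times[-1]:.3f}s, server errors={errors}")
```

The numbers themselves matter less than the conversations they start: is the worst case acceptable? Do errors appear under load that never appear in single calls?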

This might help too: https://www.developsense.com/blog/2018/07/exploratory-testing-on-an-api-part-1/


I think Michael makes an important point here: these are questions that weren’t addressed in the AMA last night. Some of the points you’ve mentioned were addressed and clarified last night; the video will be live later this week.

Metrics are not the first thing I think about when thinking of a Test Strategy. A Test Strategy helps guide my testing and drives the choices I make about what testing to do. How do metrics play a part in that?

I have a love/hate relationship with metrics. This is because I think it is generally a good idea to have metrics that measure useful and meaningful things that drive value and that help me make decisions about the way we work or the product we are working on. On the other hand, we should be really careful with metrics: there are many really bad ones.

What problem are you trying to solve, or what insight do you want to gain? Which data could help you make decisions? Try to find metrics that support that. Have a look at the links I provided on “meaningful and bad metrics” to get an idea of what could help you.

I cannot imagine that anybody would have a fixed strategy, since we learn new things all the time, so I would expect the Strategy to change too. So yes, it develops. A Strategy starts small: whether we are testing a whole project, a release or even a single user story, we start small. How to develop (and decide about) a Strategy is described in question 1.

So a Test Strategy evolves; it grows while we learn and discover more about the product. Based on risks, we decide what to test first, and for those areas the Test Strategy has more detail. We go from global to more detailed. Your Test Strategy is “complete” at the end of your testing.

This is the same as asking “how do organisations, or teams within an organisation, vary?”. A good Test Strategy is product specific, so it will definitely vary between teams and organisations.

Absolutely. And even if the project goes as planned, please do so too! There is no way I can imagine creating the complete Strategy upfront and sticking to it until the end. Also look at my answer to question 9. I think a Test Strategy should evolve from small and general to big and detailed over the course of the “test project”.

For me a Test Strategy is an evolutionary (mental) model that changes all the time: when I learn new things, when I get new insights, when I see parts of it aren’t working as planned, or when I get feedback that my ideas aren’t effective or my way of working is inefficient. I constantly share my thoughts and (parts of) my test strategy with the many people I talk to or work with; they help me sharpen my ideas.

Why do you want to explain the value? Why don’t you just show the value by example? Tell them about your Test Strategy and show them artefacts to support your story.

You always have a Test Strategy in your head. Maybe you write it down, maybe you don’t. So what is the problem we are talking about here? Do you need to convince them to spend time on creating a document which captures your Strategy? Or do they refuse to help you or spend time on it?

The value of any strategy lies in knowing what to do and how to collect the information needed. The clearer your mental models become, the better you will know what to test, why, and how. The strategy creates insight into which information our stakeholders want (missions), what the risks are, what the status of the product is, and what actions we need to take to mitigate the risks (testing or something else).

A Test Strategy is a set of ideas that guide your testing. A Test Strategy document is something different. Have a look at what Michael Bolton said about that here.

Resources can be found on my blog. Have a look at my Great Resources page. You’ll find a section on Test Strategy with many links that will interest you.

This question was answered in the AMA session. You can find the recording here. The question is answered at 51 minutes and 20 seconds.

First let’s talk about the difference between a test plan and a test strategy. To me a test strategy is a set of ideas (not a document) that guide your testing. This is not a plan. A test plan is the set of ideas that guide your test project: the sum of logistics (the set of ideas that guide your application of resources to fulfilling the test strategy) and the strategy itself. Michael Bolton wrote a blog post about this titled “What Should A Test Plan Contain?” almost 12 years ago.

What definition says that? I googled it and found several websites that made me sad. On those pages I read that a test strategy is a high-level document; they talk about a strategy being generic requirements for testing, or about how to test on an organisational level. I am talking about something completely different here! Please remember what Rikard Edgren said in his fabulous tutorial “Test Strategy - Next Level”: it is in the combination of WHAT and HOW that you find the real strategy. If you separate the WHAT and the HOW, it becomes general and quite useless.

I cannot emphasize it enough: a Test Strategy is not a document! Whenever you test anything, you have a Strategy in your head. Most of your Strategy is in your head, or in several heads spread across the team, and it will become clear when you discuss it. Writing things down and making mind maps or other visuals and models only helps you understand and communicate your Strategy better. The trick is to become aware of the Strategy in your head (your mental model) and create insight and overview for yourself, your team and the stakeholders.

We have strategies for everything we do, even when we do not write them down or visualise them. Ask yourself, do I need to write it down? Why? To discuss it? To make it better? To communicate it? To report or for accountability purposes?

When you think of how to test a whole project or a separate user story, you think of different things, but also about stuff that overlaps. I think it is a good idea to “split” your Strategy up and examine it from different angles: project, features, user stories, releases, etc. Using many perspectives and many models (see my answers to question 1 and question 2) will make your thinking better. That does not mean it needs separate documents, or in some cases even a document at all (a Strategy is not a document, remember?).

See question 16.

Absolutely. See question 1.

Same question as question 8.

I am not sure what you are asking. What do you mean by “a high-level test strategy for all testers”? And I am a bit confused by the second part: “specifically around the context of what you are testing”. Any test strategy depends on context, so what kind of strategy are you thinking of that does not depend on any context? Maybe you are talking about a way of working?

I think of my testing strategy on different levels and from different perspectives all the time (see question 16). I can imagine that we think of testing on a high level at the beginning of a project. For example: performance is important, so we need to do performance testing at some point; let’s make sure we have the right tooling and environment ready.

Any test strategy that stays high-level is not something I would value very much. A strategy needs to give insight into WHAT you are going to test and HOW, which is pretty specific. I can imagine that when you start to work on a new project or a new feature, it takes time to learn about the product. We have to deal with not knowing many details yet; this knowledge will grow as we learn the necessary product details.

Interesting question. What do you mean by not working?

  • Not finding (enough) bugs?
  • Bugs found are not important enough?
  • Get the wrong information from testing?
  • Too expensive?
  • Not finished in time?
  • Doing too much testing?
  • Testing the wrong things?
  • Using the wrong tools?
  • Not done according to company policies?
  • Environments are blocked because tests often crash the product?
  • Can’t be executed because of a lack of skills?
  • Stakeholders are not happy with the test strategy?

Often only a part of the strategy is not working. Signs and symptoms depend on our missions (what do your clients, stakeholders or team expect from testing?). I think going through the list above might give you a few ideas.

Also, how do you answer the question: when do you stop testing? When is your product good enough?
James Bach wrote a blog post, “How Much is Enough? Testing as Story-Telling”, and an article about “a framework for good enough testing”. Michael Bolton also wrote an interesting blog post, “When Do We Stop a Test?”.

I suggest talking about testing and quality with your team and stakeholders. Do a retrospective every now and then to find out whether there are improvement opportunities (which can also mean testing less). Tell the three-part testing story to your team and stakeholders often, and decide together whether your test strategy is good enough. The three-part testing story helps you gain insight into:

  1. the product story
  2. the testing story
  3. the story about the quality of testing

I cannot answer this question because I am missing the context; I have no clue what kind of product we are talking about here.

Generally speaking, I would learn about the product, think about where it might fail, think of ways to find those problems, and think of ways to explore the product to find unknown risks. This is the same approach as I described in question 1:

  1. Missions for your testing
  2. Product analysis
  3. Oracles & information sources
  4. Quality characteristics
  5. Context: project environment
  6. Test strategies

I cannot answer this question because I am missing the context; I have no clue what kind of product we are talking about here. See my answer to question 22.