Ministry of Testing launched a LinkedIn newsletter at the start of 2023. Each newsletter article has received a lot of attention, which is awesome. Each one celebrates someone in the community who has produced something that's been published on the MoT platform. The articles are written by me or @sarah1, based on our own interpretations and experiences.
So I thought, why not bring them onto The Club to spark conversations, share ideas, celebrate and debate? Here’s the first one that’s helpful for anyone new to software testing.
When writing a test or check for an API scenario, it's easy to create a negative test. For example, it's important to have a check that verifies a response other than an HTTP 200 OK success status code. But where do you stop? How far down the rabbit hole of test scenarios do you go? How many permutations are good enough for each response code?
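To make "negative test" concrete, here's a minimal sketch. The `get_user` function below is a hypothetical stand-in for a real API client call, not any actual library; it just shows the shape of checking for non-200 responses alongside the happy path.

```python
def get_user(user_id):
    """Stand-in for an API client call; returns (status_code, body)."""
    users = {1: {"name": "Ada"}}
    if not isinstance(user_id, int):
        return 400, {"error": "user id must be an integer"}
    if user_id not in users:
        return 404, {"error": "user not found"}
    return 200, users[user_id]

# Happy path: the success case most tests start with.
assert get_user(1) == (200, {"name": "Ada"})

# Negative checks: responses other than 200 OK.
assert get_user(999)[0] == 404    # unknown user
assert get_user("abc")[0] == 400  # malformed input
```

Even in this tiny example, you could keep inventing permutations (negative ids, huge ids, null values, and so on), which is exactly the rabbit hole in question.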
There’s an opportunity to avoid thinking in terms of coverage. Instead, here is the one question to ask when automating API tests.
What risks are these API tests mitigating?
What things might threaten the value of the API? How important are these risks, and what impact might they have? Capture those risks and use them to guide the essential permutations. Being deliberate and targeted goes a long way. There's no need to feel obligated to automate every permutation to attain a mythical coverage metric, such as the number of test cases automated or code paths covered.
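One way to stay deliberate is to let each check name the risk it mitigates, rather than enumerating permutations for their own sake. This sketch assumes a hypothetical `get_order` endpoint and an illustrative set of risks; nothing here comes from a real API.

```python
def get_order(order_id, token):
    """Stand-in for an API call; returns an HTTP-style status code."""
    if token != "valid-token":
        return 401
    if not isinstance(order_id, int) or order_id <= 0:
        return 400
    if order_id > 1000:
        return 404
    return 200

# Each check is tied to a named risk, not a coverage count.
risk_driven_checks = [
    ("unauthenticated access leaks order data", (1, "bad-token"), 401),
    ("malformed id crashes the service", (-5, "valid-token"), 400),
    ("missing order returns a confusing success", (9999, "valid-token"), 404),
    ("core happy path regresses", (1, "valid-token"), 200),
]

for risk, (order_id, token), expected in risk_driven_checks:
    actual = get_order(order_id, token)
    assert actual == expected, f"risk not mitigated: {risk}"
```

When a check fails, the assertion message tells you which risk is no longer covered, which is far more useful in a report than "permutation 37 failed".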
@mwinteringham shares more on this topic and includes a helpful example in his excellent article: Should You Create Automation For Each Negative API Scenario?
How about you, what one question would you ask when automating API tests?