How do you prioritise API tests?

Before you start working on API tests, it is important to decide why you are investing your time and effort and what you plan to test or automate. While reading about API testing, I’ve come across these points several times (one example).

Can you share your thoughts on how you prioritise your API tests? Feel free to talk about exploratory or automated tests. Where do you start? What factors do you consider? How do you decide that your API tests add value? It would be great if you could provide examples or some kind of context.

I have very minimal experience with API testing and I tend to overcomplicate things, so hopefully your responses will help guide me toward creating API tests in the most effective way.
Thank you! :slight_smile:


Is the API part of the product I’m testing?
Is the API the responsibility of my team or another team’s?
Is the API internal to the company or external?
What role am I in at the time? A tester, a developer, an analyst, a product manager…
What resources do I have available? Do I have access to the API, and how much, when, where? How can I check it?
How much time do I have to test?
How often is the API updated, by whom, and do I know when and what gets updated?
Do I have access to the API’s code? To the developers who implement it?
Is the API used, or will it ever be used?
What’s the API used for? What risks could be triggered by using it?
What perspective do I have for the API? consumer or provider?
Who, what, and how do other services/products need to use the API? Does it provide (or adjust) the data and make accessible the functions that are mandatory for the product to be minimally integrated?
What’s the API type, and how small is it?
What state is it in? planning, brainstorming, diagram, specification, coding?
Where does the API fit in the product context? How many clients are using it? What’s the SLA for it?
Who owns it, and what does the owner want/need regarding the information obtained from testing?
What is my mission? A mission is usually agreed with a product owner or your manager: what information about the product do they need in order to be confident in a branch merge, a release, an integration, certification, performance, availability, localization, or something else…
And on and on goes the list of questions you should find answers to…

Other references (several posts): Exploratory Testing on an API? (Part 1) « Developsense Blog
Do this first, and then, once you are confident about the things that comprise the API’s context and you are aware of stable outcomes, you can also automate checks, if time allows and they are worth the effort.


It depends on whether you are testing a newly built API or an existing one.
When you are building a new API it’s easy, but when you get dropped into an application that already has tons of APIs, I focus on the happy flow and the most common business flows.

After that I tend to go for write operations (POST/PUT/PATCH), since there is usually a lot more to test there compared to a GET, and then validate the data.
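To make the “write, then validate the data” idea concrete, here is a minimal Python sketch of a create-then-read round-trip check. The `FakeApiClient` is a hypothetical in-memory stand-in so the example is self-contained; in a real suite you would replace it with something wrapping `requests.post`/`requests.get` against your actual API, and the `/orders` path and payload fields are made up for illustration.

```python
import uuid

class FakeApiClient:
    """Stand-in for a real HTTP client; stores resources in memory.

    In a real test this would issue actual POST/GET requests.
    """
    def __init__(self):
        self._store = {}

    def post(self, path, payload):
        # Simulate 201 Created with a server-assigned id.
        resource_id = str(uuid.uuid4())
        self._store[resource_id] = dict(payload)
        return 201, {"id": resource_id, **payload}

    def get(self, path, resource_id):
        if resource_id not in self._store:
            return 404, None
        return 200, {"id": resource_id, **self._store[resource_id]}

def check_create_then_read(client, path, payload):
    """POST a payload, GET it back, and verify every field round-trips."""
    status, created = client.post(path, payload)
    assert status == 201, f"expected 201 Created, got {status}"
    status, fetched = client.get(path, created["id"])
    assert status == 200, f"expected 200 OK, got {status}"
    for field, value in payload.items():
        assert fetched[field] == value, f"field {field!r} did not round-trip"
    return fetched

result = check_create_then_read(FakeApiClient(), "/orders",
                                {"sku": "A-42", "qty": 3})
```

The point is the shape of the check, not the fake client: a write operation is only trustworthy once you have read the data back and compared it field by field.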

What’s also important (and I’m glad you like the test pyramid) is to have a chat with your developers about what they test, so you don’t test the same thing twice.

This might be an interesting topic for you @m.zn : What do you assert on when doing API testing? - #5 by meowy24

  • Response Body
  • Response Time
  • HTTP Status codes (200, 201, 403, …)
  • Response Headers (security headers, Content Type, etc…)
  • Cookies (if any are present)
  • Cache
  • Authorization & Authentication
  • JWT Tokens

In all cases happy flows & unhappy flows.
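As a rough illustration, the checklist above can be folded into a single helper that collects all failures instead of stopping at the first one. The `resp` dict below is just a stand-in for a real response object (e.g. a `requests.Response`); the endpoint, field names, and thresholds are illustrative assumptions, not anything prescribed by a particular tool.

```python
def check_response(resp,
                   expected_status=200,
                   max_seconds=1.0,
                   required_headers=("Content-Type",),
                   required_body_fields=()):
    """Apply the checklist (status, time, headers, body) to one response.

    `resp` is a plain dict standing in for a real HTTP response object.
    Returns a list of problems; an empty list means all checks passed.
    """
    errors = []
    if resp["status"] != expected_status:
        errors.append(f"status: expected {expected_status}, got {resp['status']}")
    if resp["elapsed"] > max_seconds:
        errors.append(f"too slow: {resp['elapsed']}s > {max_seconds}s")
    for header in required_headers:
        if header not in resp["headers"]:
            errors.append(f"missing header: {header}")
    for field in required_body_fields:
        if field not in resp["body"]:
            errors.append(f"missing body field: {field}")
    return errors

# Example response, as if captured from a hypothetical GET /users/1
resp = {
    "status": 200,
    "elapsed": 0.12,
    "headers": {"Content-Type": "application/json",
                "X-Content-Type-Options": "nosniff"},
    "body": {"id": 1, "name": "Ada"},
}
problems = check_response(resp, required_body_fields=("id", "name"))
```

Collecting every failure in one pass is handy for unhappy-flow tests, where a single bad response often violates several items on the list at once.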


I don’t see how your suggestions could work in a generic way.

Examples of scenarios where I was supposed to test APIs (as the only tester):

  • I detected a bug in a product which was caused by an API (handled by another dev team); the API needed a change and I was tasked with testing the change - I had 2 days to do it.
  • Business requested a feature ready in 3 weeks. The API was developed in the first week, the UI in the second; both were integrated, bug-fixed, and released in the third. I had to do overtime in the last few days - a presentation to the C-level people was scheduled for the day after the release;
  • An external company pushes production updates of their APIs on Fridays; they introduce bugs from time to time; it’s up to ‘us’ to detect them in production;
  • A business feature is requested and prioritized, and an investigation into an integration with another interface/API begins. A POC is then developed over 3 months. The API continues to be built and rebuilt over 6 more months, together with a backend API, and both get adapted constantly;
  • Due to the high volume of concurrent logging and user requests to the APIs, some servers crash from time to time; some developers build or enhance existing APIs with logging from time to time to help with testing; is the production system safe, or when will it fail?
  • Another team has changed a shared API and needs to release the change in a few days for product ‘B’; is our product or service ‘A’ compatible with the change?
  • An integrated external payment system API intermittently returns an ‘unknown error’ exception. Are there problems with the clients’ payments, the messages, or the integrated flows - test it;
  • A new API that needs to provide static content for the application is being built (it will gather data from one API and hand it over to another API connected to a frontend). Test whether it’s possible to meet the business demands for the data when building the API in the middle; have that information ready before starting to implement anything;
  • A product release needs to be available soon; test whether the new API version needs to be released before or at the same time as the product (instead of just after) to avoid potential feature failures;
  • Conditional API paths: a product gets static content from 2 different sources based on product type; the API parses the data separately for each; the product also has 2 different UI designs and 3 different flows/client types; identify incompatibilities of data and design across all combinations.

Now imagine several of these coming to me at once in a single month. And on top of them, add 20 other feature branches waiting to be tested in the product that are not APIs.


First things first: I prioritize according to the most value for the money, a.k.a. the biggest bang for the buck. Then I have a rule of thumb: if an interface is designed for a human, it tends to be more efficient to test it as a human; if it is designed for a computer, a computer will be more efficient. That means an API should be more efficient to test with a computer than with a human, since it is specifically a contract for another piece of software to interact with. Speaking of which, my preferred option is contract testing, if possible. What is contract testing?
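To give a feel for what contract testing checks, here is a hand-rolled Python sketch of the consumer-driven idea (real tooling such as Pact generates and verifies contracts properly; this only illustrates the principle). The consumer pins down just the fields and types it actually depends on, and the provider’s response is verified against that. All field names here are made up for illustration.

```python
# The consumer's side of the contract: only the fields it relies on,
# with the types it expects. Extra fields the consumer ignores are
# allowed - the contract pins down nothing more than the dependency.
CONSUMER_CONTRACT = {
    "id": int,
    "email": str,
    "active": bool,
}

def verify_contract(provider_response, contract):
    """Return a list of contract violations (empty means compatible)."""
    violations = []
    for field, expected_type in contract.items():
        if field not in provider_response:
            violations.append(f"missing field: {field}")
        elif not isinstance(provider_response[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(provider_response[field]).__name__}")
    return violations

# A provider response with an extra field is still compatible...
ok = verify_contract({"id": 7, "email": "a@b.c", "active": True, "extra": 1},
                     CONSUMER_CONTRACT)
# ...but a changed type or a dropped field breaks the contract.
bad = verify_contract({"id": "7", "email": "a@b.c"}, CONSUMER_CONTRACT)
```

The asymmetry is the point: the provider may add whatever it likes, but may not remove or retype anything a consumer has declared it depends on - which is exactly the kind of breaking change worth catching before a release.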

Everything else is one of those big “it depends”, so here are some guidelines instead. Strive towards letting the developers update the tests at the same time as they change the code. Keep the tests connected to the code to facilitate this, and keep the test setup lean so the run time is fast enough for developers to benefit from these tests - they will then have an incentive to keep them updated. If, despite all this, you still find yourself in the position that you as a tester need to test the API, I would strongly suggest resisting the urge to build fancy automation disconnected from the code. Quick and dirty for the win, according to rule number one. For instance, just running curl against different queries copied and pasted from a plain text document, plus a folder with some payloads, is a very efficient way to test an API.


Well, everything is context dependent: whether you are a solo tester or have a full team, what business you run, etc. It’s just my experience from the projects I’ve been in.


I suspect part of the deeper explanation is that API testing is often not what it says on the tin. Not all bugs found in API test suites are defects in the API itself; many are defects in the integrations the API may be wrapping. Stress testing and the business-logic breaks and changes that @ipstefan is talking about are frustrating, but they are not what API testing on its own is best at catching. Hence the need to prioritize, but more importantly, to understand how the system’s moving parts impact each other.


A while back there was a question about API testing on Twitter. This tweet and its replies contain useful information:

I mentioned the heuristic POISED, which Amber Race uses in a free course on Test Automation University.


Thank you, everyone, for your replies!