What do you assert on when doing API testing?

Of course my question will depend on your situation but this question was sparked by someone asking on Slack:

Is there much value in asserting on anything else other than status codes and response body data?

Which got me thinking, what do you assert on when you’re doing API testing?


There’s no definitive answer to this question of course.

The assertions become more interesting if you think carefully about what you put in. I find PUT and POST much more interesting to test because then I can think about what inputs I can feed the API that might lead to problematic results. I do this based on risk, using the always/never heuristic (what should always work and what should never fail).

For example, I was testing an API call that should or should not give a certain result based on a timestamp field in the body (POST or PUT). This was really important logic, so I created some tests that played around with the timestamp field. The tests where I knew the expected result were automated, and then I did some more exploratory testing with the API to try and find surprising results.
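A minimal sketch of that kind of always/never timestamp test. The decision rule here ("reject future timestamps") and the field name are invented for illustration, with a stand-in function in place of the real API call:

```python
from datetime import datetime, timezone

def accepts(payload: dict, now: datetime) -> bool:
    """Stand-in for the API's decision logic: reject future timestamps."""
    ts = datetime.fromisoformat(payload["timestamp"])
    return ts <= now

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
# "Always" case: a past timestamp should always be accepted.
assert accepts({"timestamp": "2023-12-31T23:59:59+00:00"}, now)
# "Never" case: a future timestamp should never be accepted.
assert not accepts({"timestamp": "2024-01-01T00:00:01+00:00"}, now)
```

The automated tests pin the boundary cases; exploratory testing then probes odd inputs (missing field, bad format, wrong timezone) by hand.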


Initially, all returned data.

But. The biggest issue I have with API testing, and why I do not frequently get involved, is that the consumer and the language bindings are just as important as the data fields to assert upon. Far too often I overhear complaints where 200, 403, 404, 405 and more fall into a bucket of result codes communicated poorly, at the wrong layer. Doing this incorrectly in a language binding is a source of great mirth to your users.

  1. Get result codes out of the way first in a small suite of tests.
  2. I’m normally designing a tool that asserts all expected members in the result exist,
  3. And additionally asserts that no new members and fields arrive unannounced
  4. Tooling to prune fields that are not part of the contract, then test exact matches against an expected result blob
  5. Tool up to test the API in any STATE or CRUD combinations as appropriate to the API (some APIs are purely read-only) as a simple exploration of handling session state.
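Points 2–4 above can be sketched like this, assuming the response body has already been parsed to a dict. The member names in `EXPECTED_MEMBERS` are illustrative, not from any real API:

```python
EXPECTED_MEMBERS = {"id", "name", "created_at"}

def check_members(body: dict) -> list:
    """Return a list of problems: missing members and unannounced extras."""
    problems = []
    for m in sorted(EXPECTED_MEMBERS - body.keys()):
        problems.append(f"missing member: {m}")
    for m in sorted(body.keys() - EXPECTED_MEMBERS):
        problems.append(f"unexpected member: {m}")
    return problems

def prune_to_contract(body: dict) -> dict:
    """Drop fields outside the contract so an exact-match compare stays stable."""
    return {k: v for k, v in body.items() if k in EXPECTED_MEMBERS}
```

Pruning before the exact-match compare means a new, unannounced field fails the `check_members` test loudly rather than silently breaking every exact-match assertion.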

In general I would say that status codes and the response body should cover most of it.

Some extra checks that I try are below:

  1. Check response time
  2. Check request frequency if there is one - I made an API a while ago and limited the GET endpoint to 3 calls per minute per user.
  3. I also check the response data format (JSON, XML, string :slight_smile: )
  4. Also a check that I did last week in one of my videos is to check that the response body matches a predefined schema.
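The format and schema checks (points 3 and 4) can be sketched with the standard library alone; the `SCHEMA` map of field names to types here is a hypothetical stand-in, not a real schema standard:

```python
import json

SCHEMA = {"id": int, "email": str, "active": bool}

def check_schema(raw_body: str) -> list:
    """Parse the body as JSON and compare each field's type to the schema."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return ["body is not valid JSON"]
    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors
```

In practice a JSON Schema library gives you richer checks (nested objects, formats, required vs. optional), but the idea is the same: validate shape and types, not just values.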

Nice question @heather_reid

  • Status codes and response times are great things to assert on every request.
  • Check the response is coming back how you expect. You can assert the format (is it a string, object, array, null?)
  • If you’ve changed data with your POST or PUT/PATCH, is that change coming back in the response?
  • Checking validation. Does the response have error text you can validate and make assertions against?
  • Assert any headers that are being returned, and that they are in the correct format?
  • Assert a token is being returned?
  • If you supply an invalid verb for that request, you can assert on the response.
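The header and token assertions from that list might look like this, run against a captured response. The header value and the JWT-shaped token check are assumptions for illustration:

```python
import re

def assert_headers_and_token(headers: dict, body: dict) -> None:
    """Assert the content type is JSON and a JWT-shaped token came back."""
    assert headers.get("Content-Type", "").startswith("application/json"), \
        "expected a JSON content type"
    # A JWT has three dot-separated base64url segments.
    token = body.get("token", "")
    assert re.fullmatch(r"[\w-]+\.[\w-]+\.[\w-]+", token), \
        "token missing or malformed"
```

Asserting the token's *shape* (rather than its exact value) keeps the test stable across runs while still catching an empty or truncated token.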

To purely answer the question, for happy & unhappy flows I assert on:

  • Response Body
  • Response Time
  • HTTP Status codes (200,201,403,…)
  • Response Headers (security headers, Content Type, etc…)
  • Cookies (if any are present)

This is what you mean, right? Not like changing the method, trying to force forbiddens, parameter tampering, leaving a required field empty, etc.?


Is this another way of asking what is a good “rest api testing strategy”? If yes, you can see the link I have shared below. I read the whole article and saw that it has a lot of ideas on what to test, plus some templates to get you started. BTW, I googled those words in double quotes and saw that this was the first link.


Whenever I see a thread about API testing I’m reminded of this idea: what is contract testing and why should I try it?

If you are building not a publicly available API but an internal one, that is a very cool concept to help decide what you actually test for and assert on.


@heather_reid, Even I had the same question when I started with API Testing (one and a half years ago :grin:)

So based on my personal experience I would say the below points:

  1. Start with the response status codes along with the status text
  2. Response Headers
  3. Schema; if needed you can even validate a subset of the response instead of checking the full response
  4. Parsing the response to validate the values against the expected (not only values, you can test whether the properties/keys exist or not)
  5. Tests checking for any hardcoded values in the response, treated as critical
  6. Whether the chaining of requests is feasible and the operation is working as expected
  7. As part of the negative tests, ensure authentication fails when wrong credentials are provided (of course, make sure not to run it many times if there’s some probability of account lockout after a number of failed attempts :stuck_out_tongue_winking_eye: )
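A sketch of that last negative test, using a tiny fake login endpoint so the lockout-safe flow can be shown without hitting a real API; the credentials and the `MAX_ATTEMPTS` threshold are invented:

```python
MAX_ATTEMPTS = 3  # assumed lockout threshold; stay well under it

def login(username: str, password: str) -> int:
    """Stand-in for a POST /login call; returns an HTTP-style status code."""
    return 200 if (username, password) == ("alice", "s3cret") else 401

def test_wrong_credentials_rejected():
    # One failing attempt is enough to prove the rejection path works.
    assert login("alice", "wrong-password") == 401
    # And a control: the correct credentials still succeed.
    assert login("alice", "s3cret") == 200
```

Keeping the failure count to one per test run is what protects the test account from tripping a real lockout policy.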

Happie Testing :partying_face:


Curious how everyone has gravitated to web API testing, which depending on the language bindings may not even be the API the customer sees. The status codes are a domain problem which, as we can see from the replies, everyone knows are a flaky area of API-as-a-product specifications. Why is it that we all know status codes always get abused? They should not even be part of the API.


I often use APIs to test services, especially as many of them don’t have UIs. In this case, in addition to asserting against the response, I’ll check the wider system state.

For example:

  • Database was updated as expected
  • Payload sent to an onward service as expected (I often replace this onward service with a Test Receiver for automation.)
  • Lambda was triggered
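The "database was updated" check above can be sketched like this, with an in-memory SQLite table and a stand-in function in place of the real service behind a hypothetical `POST /users`:

```python
import sqlite3

def handle_create_user(db, name):
    """Stand-in for the service logic behind a hypothetical POST /users."""
    db.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return 201  # HTTP-style status code the API would return

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

status = handle_create_user(db, "ben")
assert status == 201
# The assertion beyond the response: did the row actually land?
row = db.execute("SELECT name FROM users").fetchone()
assert row == ("ben",)
```

The point is the second assertion: the response alone says "accepted", but only the direct query proves the system state changed.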

Is this strictly API testing, system testing or microservices testing?

It’s probably a bit of all 3.


This is an interesting point. When I’m doing strict API testing, I usually push for a system I can essentially test via black-box testing, just making RESTful requests (meaning that I push the devs to expose create and reads, and updates and deletes if necessary).

As Ben points out though, especially for broad e2e testing, you’ll often need to move beyond black-box and check persistence layers, downstream workflows, etc.


For context, most of the services I test don’t have public APIs, but are using HTTP APIs as an interface and to interact with each other.

I do also often work with Devs to provide additional endpoints and extra information being returned, but sometimes I just need to go direct.


Good discussions. Wanted to point out some stuff if not already mentioned. These might not be specific to API (response) validation but relevant around the API testing flow.

  • checking for redirect flows when the API URL redirects across 1+ API endpoints behind the scenes. A good example is OAuth and login-authentication-based APIs

  • HTTP to HTTPS and vice versa redirection in the API call (hit the server via HTTP, it auto-redirects to HTTPS for the same endpoint URL path, or the other way around, HTTPS to HTTP) - if it does not redirect, you should get an error if the API isn’t served over both HTTP and HTTPS.

  • latency of the API response is within spec, whether the API server is under load or not
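The HTTP-to-HTTPS redirect check above can be sketched as a pure check on the first response (status code plus `Location` header) to a plain-HTTP request, before any redirect is followed; the example URL is made up:

```python
def check_https_redirect(status: int, location: str, original_url: str) -> bool:
    """True if the response is a permanent redirect to the HTTPS twin of the URL."""
    expected = original_url.replace("http://", "https://", 1)
    return status in (301, 308) and location == expected

# Typical usage: issue the request with redirects disabled (e.g. an HTTP
# client's "don't follow redirects" option), then check the first hop.
assert check_https_redirect(
    301,
    "https://example.test/api/v1/users",
    "http://example.test/api/v1/users",
)
```

Disabling automatic redirect-following is the key detail: otherwise the client silently lands on the HTTPS endpoint and the test never sees the redirect it is meant to assert on.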