There’s no definitive answer to this question of course.
The question becomes more interesting if you think carefully about what you put in. I find PUT and POST much more interesting to test because then I can think about what inputs I can feed the API that might lead to problematic results. I do this based on risk and with the always/never heuristic (what should always work, and what should never fail).
For example, I was testing an API call that should or should not give a certain result based on a timestamp field in the body (POST or PUT). This was really important logic so I created some tests that played around with the timestamp field. The tests where I knew the expected result were automated and then I did some more exploratory testing with the API to try and find surprising results.
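The timestamp idea above can be sketched with the always/never heuristic. Everything here is hypothetical: I'm assuming, purely for illustration, a rule where the API accepts a record only if its timestamp is neither in the future nor older than 30 days, and modelling that rule locally so boundary cases can be generated and checked.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rule for illustration: accept a record only if its
# timestamp is not in the future and not older than 30 days.
MAX_AGE = timedelta(days=30)

def should_accept(timestamp: datetime, now: datetime) -> bool:
    """Local model of the assumed server-side rule."""
    return now - MAX_AGE <= timestamp <= now

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)

# "Always" cases: well inside (or right on) the window, must always pass.
always = [now, now - timedelta(days=1), now - MAX_AGE + timedelta(seconds=1)]
# "Never" cases: just outside the window, must never pass.
never = [now + timedelta(seconds=1), now - MAX_AGE - timedelta(seconds=1)]

for ts in always:
    assert should_accept(ts, now), f"expected accept: {ts}"
for ts in never:
    assert not should_accept(ts, now), f"expected reject: {ts}"
```

The "always" cases are good candidates for automation; the values hovering one second either side of the boundary are where exploratory testing tends to find the surprises.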
But the biggest issue I have with API testing, and the reason I do not frequently get involved, is that the consumer and the language bindings are just as important as the data fields to assert upon. Far too often I overhear complaints about 200, 403, 404, 405 and more: result codes that are poorly communicated, or handled at the wrong layer. Doing this incorrectly in a language binding is a source of great mirth to your users.
Get result codes out of the way first in a small suite of tests.
I'm normally designing tooling that:
- asserts all expected members in the result exist
- additionally asserts that no new members or fields arrive unannounced
- prunes fields that are not part of the contract, then tests exact matches against an expected result blob
- tests the API in any state or CRUD combination appropriate to the API (some APIs are purely read-only), as a simple exploration of handling session state
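The first three points above can be sketched as a small contract check. The `contract` and `response` dicts are invented examples, not any real API's shape; the idea is simply to diff fields recursively and to prune anything the contract doesn't mention before doing an exact comparison.

```python
def check_members(expected: dict, actual: dict, path: str = ""):
    """Return (missing, unexpected) field paths, comparing recursively."""
    missing, unexpected = [], []
    for key in expected:
        if key not in actual:
            missing.append(path + key)
        elif isinstance(expected[key], dict) and isinstance(actual[key], dict):
            m, u = check_members(expected[key], actual[key], path + key + ".")
            missing += m
            unexpected += u
    for key in actual:
        if key not in expected:
            unexpected.append(path + key)
    return missing, unexpected

def prune(contract: dict, actual: dict) -> dict:
    """Strip fields outside the contract so the remainder can be
    compared exactly against an expected result blob."""
    return {
        k: prune(contract[k], v)
        if isinstance(v, dict) and isinstance(contract.get(k), dict) else v
        for k, v in actual.items() if k in contract
    }

# Invented contract and response, for illustration only.
contract = {"id": None, "name": None, "owner": {"id": None}}
response = {"id": 7, "name": "widget",
            "owner": {"id": 3, "debug": True}, "trace": "x"}

missing, unexpected = check_members(contract, response)
assert missing == []
assert sorted(unexpected) == ["owner.debug", "trace"]
assert prune(contract, response) == {"id": 7, "name": "widget",
                                     "owner": {"id": 3}}
```

Flagging `unexpected` fields is what catches members arriving unannounced; pruning keeps the exact-match assertion stable when the service adds harmless extras.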
In general I would say that status codes and the response body should cover most of it.
Some extra checks that I try are below:
1. Check the response time.
2. Check the request frequency limit if there is one (I made an API a while ago and limited the GET endpoint to 3 calls per minute per user).
3. Check the response data format (JSON, XML, string).
4. A check that I did last week in one of my videos: verify that the response body matches a predefined schema.
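Point 4 can be sketched with a tiny hand-rolled schema check. Real schema validation would normally use a library such as jsonschema against a JSON Schema document; the `schema` mapping and sample bodies below are just made-up stand-ins to show the idea with the standard library only.

```python
import json

# Invented schema: field name -> expected Python type after parsing.
schema = {"id": int, "email": str, "active": bool}

def matches_schema(body: str, schema: dict) -> bool:
    """True if the JSON body has exactly the schema's keys,
    each with a value of the expected type."""
    data = json.loads(body)
    return (set(data) == set(schema)
            and all(isinstance(data[k], t) for k, t in schema.items()))

good = '{"id": 1, "email": "a@example.com", "active": true}'
bad = '{"id": "1", "email": "a@example.com"}'  # wrong type, missing key

assert matches_schema(good, schema)
assert not matches_schema(bad, schema)
```

This doubles as the field-existence check from earlier in the thread: a missing or extra key fails the `set` comparison before types are even looked at.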
Is this another way of asking what a good “rest api testing strategy” is? If so, you can see the link I have shared below. I read the whole article and saw that it has a lot of ideas on what to test, and some templates too, to get you started. BTW, I googled those words in double quotes and saw that this was the first link.
@heather_reid, I had the same question when I started with API testing (one and a half years ago).
Based on my personal experience, I would suggest the following points:
- Start with the response codes along with the status
- Response headers
- Schema; if needed you can validate a subset of the response instead of checking the full response
- Parse the response to validate the values against the expected ones (not only values; you can also test whether the properties/keys exist)
- Tests for any hardcoded values in the response; treat these as critical
- Whether chaining of requests is feasible and the operation works as expected
- As part of the negative tests, ensure that authentication fails when wrong credentials are provided (of course, make sure not to run this many times if there's a chance of account lockout after a number of failed attempts)
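The last point can be sketched end to end with a tiny local stand-in server, so nothing real is touched: the handler, the `user:secret` credentials, and the endpoint are all invented for illustration, and no lockout policy exists here.

```python
import base64
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import Request, urlopen

class AuthHandler(BaseHTTPRequestHandler):
    """Stand-in for an API protected by HTTP Basic auth."""
    def do_GET(self):
        expected = "Basic " + base64.b64encode(b"user:secret").decode()
        if self.headers.get("Authorization") == expected:
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(401)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), AuthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base_url = f"http://127.0.0.1:{server.server_port}"

def get_status(user: str, password: str) -> int:
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = Request(base_url, headers={"Authorization": "Basic " + creds})
    try:
        with urlopen(req) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

ok = get_status("user", "secret")
denied = get_status("user", "wrong")
assert ok == 200       # positive case: correct credentials accepted
assert denied == 401   # negative case: wrong credentials rejected
server.shutdown()
```

Against a real service you'd make the same two requests with your HTTP client of choice; the point is that the negative case gets an explicit assertion rather than being assumed.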
Curious how everyone has gravitated to web API testing, which, depending on the language bindings, may not even be the API the customer sees. The status codes are a domain problem which, as we can see from the replies, everyone knows is a flaky area of API-as-a-product specifications. Why is it that we all know status codes always get abused? They should not even be part of the API.
I often use APIs to test services, especially as many of them don’t have UIs. In this case, in addition to asserting against the response, I’ll check the wider system state.
For example:
- Database was updated as expected
- Payload sent to an onward service as expected (I often replace this onward service with a Test Receiver for automation)
- Lambda was triggered
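The Test Receiver idea above can be sketched as a small local HTTP server that captures whatever payload the system under test sends onward. The `/events` path, the payload shape, and the fact that I simulate the outbound call myself are all assumptions made so the example is self-contained.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # payloads captured by the test receiver

class TestReceiver(BaseHTTPRequestHandler):
    """Stands in for the onward service; records every POST body."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), TestReceiver)
threading.Thread(target=server.serve_forever, daemon=True).start()
receiver_url = f"http://127.0.0.1:{server.server_port}/events"

# In a real test the system under test would POST here after we hit its
# API; this direct call just simulates that outbound request.
payload = {"event": "order_created", "order_id": 42}
req = Request(receiver_url, data=json.dumps(payload).encode(),
              headers={"Content-Type": "application/json"})
urlopen(req).close()

# Assert the onward service received exactly what we expected.
assert received == [{"event": "order_created", "order_id": 42}]
server.shutdown()
```

Pointing the real system's onward-service URL at the receiver (via config or environment) is what makes this usable in automation: the assertion is on system state, not just the API response.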
Is this strictly API testing, system testing or microservices testing?
This is an interesting point. When I’m doing strict API testing, I usually push for a system I can essentially test via black-box testing, just making RESTful requests (meaning that I push the devs to expose create and reads, and updates and deletes if necessary).
As Ben points out though, especially for broad e2e testing, you’ll often need to move beyond black-box and check persistence layers, downstream workflows, etc.
Good discussions. Wanted to point out some things, if not already mentioned. These might not be specific to API (response) validation, but they are relevant to the API testing flow.
- Checking redirect flows when an API URL redirects across one or more API endpoints behind the scenes. A good example is OAuth and login-authentication-based APIs.
- HTTP-to-HTTPS redirection (and vice versa) in an API call: hit the server via HTTP and it auto-redirects to HTTPS for the same endpoint URL path, or the other way around. If it does not redirect, you should get an error when the API isn’t served over both HTTP and HTTPS.
- Latency of the API response is within spec, whether the API server is under load or not.
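The redirect check can be sketched with a local stand-in server and a client that does not auto-follow redirects (so the raw 301 and its `Location` header can be inspected). The `api.example.test` host and `/v1/orders` path are invented; a real check would hit your actual HTTP endpoint the same way.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectingHandler(BaseHTTPRequestHandler):
    """Stand-in for a server that redirects plain HTTP to HTTPS."""
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "https://api.example.test" + self.path)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RedirectingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client does not follow redirects, unlike urllib.request.urlopen,
# so the redirect itself is observable and assertable.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/v1/orders")
resp = conn.getresponse()

assert resp.status == 301
assert resp.getheader("Location") == "https://api.example.test/v1/orders"
conn.close()
server.shutdown()
```

Asserting on the `Location` header, not just the status, is the useful part: it catches redirects that land on the wrong scheme or drop the path.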