APIs can time out or fail with errors, e.g. HTTP 5xx statuses like 500, 503, and 504. Should a QE test these scenarios? If yes, how can we test them? So far, I have never seen a way to simulate the conditions that would cause an HTTP 5xx to be returned. I feel that such scenarios should be covered by the developers' unit tests, and that it is not worth the effort to test them separately.
Example: say my API1 needs an external API2 to do some work. API1 can return a 5xx code when it has a problem or when API2 has a problem. When there is no way to simulate either scenario, how do I test them? Wouldn't that require the developers to add some testability features to simulate those scenarios? Is it really worth the trouble?
Adding testability for those scenarios is worth it (in some cases, I suppose).
Are your applications unstable, or do you rely on other teams to develop API2? Then yes, I would definitely do it. For example, a 503 Service Unavailable with an appropriate message, so you can test whether the message is good.
Have you looked at API mocking tools like WireMock (which I maintain), MockServer, Hoverfly etc.?
In the scenario you described you could configure WireMock to simulate API2, then set specific stubs to do things like return a 503, drop the connection, or add a long delay before responding. This would allow you to test the effect of these failures on API1's behaviour.
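For concreteness, here is a rough sketch of what those failure stubs might look like with WireMock's Java DSL. The port number and the endpoint paths are made up for illustration:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;

public class Api2FailureStubs {
    public static void main(String[] args) {
        // Stands in for API2 on a port of our choosing
        WireMockServer wireMockServer = new WireMockServer(8089);
        wireMockServer.start();

        // Return a 503 for a hypothetical /orders endpoint
        wireMockServer.stubFor(get(urlPathEqualTo("/orders"))
                .willReturn(aResponse().withStatus(503)));

        // Simulate a dropped connection
        wireMockServer.stubFor(get(urlPathEqualTo("/inventory"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));

        // Respond successfully, but only after 10 seconds, to provoke client timeouts
        wireMockServer.stubFor(get(urlPathEqualTo("/customers"))
                .willReturn(aResponse().withStatus(200).withFixedDelay(10_000)));
    }
}
```

You would then point API1 at http://localhost:8089 instead of the real API2 and exercise it as usual.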
@tomakehurst - Thanks. To implement your suggested test approach, I'd guess we need to be able to choose whether API1 talks to API2 or to a mock of API2, i.e. we need to make API1 configurable. Then we can test such scenarios easily and quickly. However, if API1 cannot be configured at all, or cannot be configured easily, can we still use mocking?
As an aside, what can we do in the initial stages of API development to enable mocking?
@fullsnacktester - If an API is in the initial stages of development, what features can we suggest to make timeouts and exceptions easy to test?
Regarding writing small web servers to return desired responses, are there any tools that generate such servers easily, instead of coding them by hand? If not, could you please suggest any tutorials on how to write such web servers, say in Java?
If possible, could you please share one situation in which a unit test would not be able to find an issue that mocking would? I am wondering whether the unit test could be improved, or whether such testing is better done with mocking/higher-level tests.
@anon68517856 yes, to do this you'd need some way to tell API1 where to send its API2 calls.
In practice there are three ways you can achieve this:
1. Modify API1's configuration to change the hostname/base URL for API2, or create an additional environment configuration that does this (see the sketch after this list).
2. Change the proxy settings for API1, either directly in its configuration or via its host. Sometimes this approach is helpful when you can't modify the app but you can tweak the machine it's running on.
3. Manipulate the DNS service provided to API1 so that the original domain name of API2 is remapped to your mock server. The difficulty of this varies depending on your environment, but it's quite straightforward if you're running in Docker.
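A minimal sketch of the first option, assuming an environment variable named API2_BASE_URL and a made-up default URL:

```java
// Hypothetical: API1 reads API2's base URL from configuration,
// falling back to the real service when no override is set.
public class Api2Client {
    private final String baseUrl;

    public Api2Client() {
        // In a test environment, set API2_BASE_URL to the mock server,
        // e.g. http://localhost:8089
        this.baseUrl = System.getenv()
                .getOrDefault("API2_BASE_URL", "https://api2.example.com");
    }

    public String ordersUrl() {
        return baseUrl + "/orders";
    }
}
```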
An additional wrinkle for the 2nd and 3rd options is SSL. If you mess with DNS or proxy settings and API1 is calling API2 over HTTPS, then you'll need to find a way to make API1 trust a different SSL certificate.
Once you've figured out which of these strategies to follow, the next step is to identify which API calls are made by API1, then create stubs for them with realistic happy-path responses. Then you can expand on this with more stubs for error cases, as we discussed.
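For instance, a happy-path stub for a hypothetical endpoint might look like this (the path, the JSON body, and the use of WireMock's default static client are all assumptions for illustration):

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Happy-path stub: a realistic success response to build the baseline on,
// before layering in the error-case stubs shown earlier.
public class Api2HappyPathStubs {
    public static void configure() {
        stubFor(get(urlPathEqualTo("/orders/123"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"orderId\": \"123\", \"status\": \"SHIPPED\"}")));
    }
}
```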
Late to the party here… As a dev, I’ve implemented testing features for stuff like this a few times. It does require a bit of dev work, but it’s a fairly small lift and unlocks a lot of testing possibilities.
Basically, what you can do is send special HTTP headers to simulate specific service states (e.g. the HTTP header X-TEST_HTTP_FAIL_MODE=503 would make the service return an HTTP 503 without processing the request).
Since most backend services have the concept of middleware (node-js example), it’s generally fairly easy for devs to implement something like this for you; a sketch follows below.
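As an illustration, a test-only version of this middleware written as a Java servlet filter might look like the following. The filter class itself is hypothetical, and it should only ever be registered in test builds:

```java
import java.io.IOException;

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// Test-only middleware: if the magic header is present, short-circuit the
// request and return the requested status code without running any handler.
public class FailureInjectionFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String failMode = ((HttpServletRequest) req).getHeader("X-TEST_HTTP_FAIL_MODE");
        if (failMode != null) {
            // e.g. X-TEST_HTTP_FAIL_MODE: 503 yields an immediate 503
            ((HttpServletResponse) res).sendError(Integer.parseInt(failMode));
            return;
        }
        chain.doFilter(req, res);
    }
}
```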
If you have a microservices-type layout, you can use something like a service-name prefix, X-SVC1_TEST_HTTP_FAIL_MODE=500, to select which service gives which failure mode.
With external vendors, you could ask your devs to mock it out with something like X-STRIPE_TEST_CARD_DECLINED=true. And actually, some vendors provide specific inputs that can be used to test error states (e.g. Stripe’s test cards for triggering errors). You’d have to check vendor-specific docs to see if anything like that exists.
I know all these solutions require dev work, but I think this is a real opportunity to unlock failure modes that would otherwise continue to go untested. And it’s really not that much work for the devs to add something like this.
Well, I am also late to this discussion, but I still thought I’d post my findings.
As fullsnacktester suggested, API mocking is the right way to confidently simulate higher latencies and timeouts. We prefer tools like Beeceptor, which combines an HTTP proxy with API mocks. Suppose API1 talks to API2. In that case, follow these steps:
First, set up an HTTP proxy configuration. You place Beeceptor between the two so the call chain looks like: API1 ==> Beeceptor ==> API2.
Next, start using Beeceptor’s endpoint URL in the code instead of API2’s. Most services and developers allow the base URLs of microservices and third-party services to be made configurable via environment variables or config files.
If no such provision is present, you need to ask the developers to give you a way to override the service’s base URL via configuration in the test environment. When no override is configured, the service connects to the default URL, but you have the ability to override it. This is a one-time code change, but it gives SDETs full control over testing.
Next, send a request. The HTTP proxy mode will let the call go through as-is, and your application works as before, as if nothing changed.
Now you define a mock rule to time out the API call. Beeceptor’s mocking rules let you do that with no code, and they are activated instantly.
In technical architecture language, this process is called service virtualization. You can check a detailed tutorial on how this is set up using Beeceptor: Service Virtualization for SendGrid APIs | Beeceptor
This, what Nate said! I think the majority of the time QAs are going above and beyond, finding ways to test a system when all it takes is a tweak to the actual production code, in this case an API.
This is the essence of what it means to call a system testable. If you don’t architect a system to be testable, you are going to face issues like this in the future for sure.