API testing / Performance

Hello everybody!! :man-raising-hand: I have the following question: we have developed some APIs that will be used by external clients to retrieve information about clients. We defined performance requirements for these APIs, tested them internally with Taurus, and all the tests passed. The product owner raised some questions about the fact that we ran all these performance tests internally, so now I am wondering:
1: Should I expect big differences in the response times of the calls if the tests are run externally? If yes, which factors cause this?
2: Is it our responsibility to ask these external clients to run tests against our APIs to verify the performance?
I am a bit confused, so any help will be appreciated. Thanks and have a brilliant day!

  1. This depends on your definition of "significant". A client calling your API from another part of the globe might incur, on average (very roughly, YMMV), another 100-200ms of latency, which is enough to matter in some contexts but not others. The latency variance will also be a lot higher over the internet, so it's worth considering what the impact of much higher tail latencies, measured in seconds, would be.

  2. I would suggest it's usually the client's responsibility to monitor the latency of your API from their perspective if this is important to them. If it isn't, there's probably not much point in asking them to do it. However, I'd also recommend capturing latency metrics from your API on the server side.
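To make the tail-latency point in 1. concrete, here's a small stdlib-only Python sketch. The latency samples are invented, not measured: a stable base around 150ms with occasional multi-second spikes, loosely mimicking calls made over the internet. It shows how the median can look fine while p99 is an order of magnitude worse:

```python
import random
import statistics

random.seed(42)

# Hypothetical latency samples (ms): a stable base plus rare large
# spikes, loosely mimicking calls made over the public internet.
samples = [random.gauss(150, 20) for _ in range(990)]       # typical calls
samples += [random.uniform(1000, 3000) for _ in range(10)]  # rare slow calls

# statistics.quantiles with n=100 returns 99 percentile cut points;
# index 49 is the median (p50), index 98 is p99.
cuts = statistics.quantiles(samples, n=100)
p50, p99 = cuts[49], cuts[98]

print(f"p50 = {p50:.0f} ms, p99 = {p99:.0f} ms")
# The median alone hides the tail: 1% of requests take many times longer.
```

If your SLA is phrased only as an average, runs like this will "pass" while some real users see multi-second responses, which is why it's worth reporting percentiles.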

One other factor that might matter - did you do your internal testing against HTTP or HTTPS?

HTTPS can add quite a lot of overhead, particularly when you have internet round trips to consider, so it's usually a good idea to test with it turned on.
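To give a rough sense of why the HTTPS overhead grows with distance: establishing a connection costs round trips before the first byte of the response, and a full TLS handshake adds more of them. A back-of-the-envelope sketch (the RTT figures are illustrative, not measurements):

```python
def connection_setup_ms(rtt_ms: float, tls: bool, tls13: bool = False) -> float:
    """Rough time before the first HTTP request can be sent:
    the TCP handshake costs 1 RTT; a full TLS 1.2 handshake adds
    2 more RTTs (TLS 1.3 reduces that to 1)."""
    rtts = 1  # TCP SYN / SYN-ACK / ACK
    if tls:
        rtts += 1 if tls13 else 2
    return rtts * rtt_ms

lan_rtt, internet_rtt = 1.0, 150.0  # illustrative round-trip times

print(f"LAN,      HTTP:  {connection_setup_ms(lan_rtt, tls=False):>6.0f} ms")
print(f"LAN,      HTTPS: {connection_setup_ms(lan_rtt, tls=True):>6.0f} ms")
print(f"Internet, HTTP:  {connection_setup_ms(internet_rtt, tls=False):>6.0f} ms")
print(f"Internet, HTTPS: {connection_setup_ms(internet_rtt, tls=True):>6.0f} ms")
```

On a LAN the TLS cost is a couple of milliseconds and easy to miss; over a 150ms internet path the same handshake adds hundreds of milliseconds, which is why internal-only tests can understate it. Connection reuse (keep-alive) and session resumption reduce this in practice.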


Yes, we are using HTTPS. Thanks for the advice, I will look into it :slight_smile:
@tomakehurst When you say "capturing latency metrics from your API on the server side", do you mean what we are doing now: running tests where everything happens behind our firewall and the calls are coming from inside the house?

I'd recommend measuring latency (and throughput + error rate) in at least the performance testing and live environments, via your monitoring system. This means that a) you can use it to determine the health of your live system, and b) you can observe the server's perspective of performance when load testing. Sometimes the load tool and the monitoring will tell you different stories, and this can be very useful in finding the root causes of problems.

Nearly all modern monitoring tools have a way to fairly easily get at these metrics from HTTP servers, so assuming you have a tool in place it shouldnā€™t be too hard to enable.
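As a sketch of what server-side capture can look like, here's a minimal WSGI middleware in stdlib-only Python. In a real setup you'd feed these samples into your monitoring tool's client library rather than a plain list, and `hello_app` is just a stand-in application:

```python
import time

class LatencyMiddleware:
    """WSGI middleware that records per-request latency in milliseconds.
    A real deployment would push these into a monitoring histogram
    instead of keeping them in memory."""
    def __init__(self, app):
        self.app = app
        self.samples_ms = []

    def __call__(self, environ, start_response):
        start = time.perf_counter()
        try:
            return self.app(environ, start_response)
        finally:
            self.samples_ms.append((time.perf_counter() - start) * 1000)

# Stand-in application for demonstration purposes.
def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = LatencyMiddleware(hello_app)

# Exercise it once without a real server, using a no-op start_response.
body = app({"REQUEST_METHOD": "GET", "PATH_INFO": "/"}, lambda s, h: None)
print(f"captured {len(app.samples_ms)} sample(s): {app.samples_ms[0]:.3f} ms")
```

The useful property is that this measures time spent inside your server, independent of the network path, so comparing it with the load tool's numbers tells you whether slowness is in your code or in transit.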

You want to create a realistic performance test, so yes, try running it from another network.

Imagine you are on crappy Wi-Fi (free Wi-Fi at a café, for example) or at home browsing on a 1 Gbps line. I think you'll see the difference already. It's not all about measuring 'your internal network'; you want to see how your app behaves on a 'lesser' network. Is your application going to be available like a regular website? Try testing over 3G / 4G as well.

For the rest, @tomakehurst's comment sums it up!
