Hello everybody!! :man-raising-hand: I have the following question: We have developed some APIs that are going to be used by some external clients to retrieve information about clients. We defined performance requirements for these APIs, tested them internally with Taurus, and all the tests "Passed". The product owner raised some questions about the fact that we ran all these API/performance tests internally. So now I am wondering:
1: Should I expect big differences in the response times of the calls if the tests are run externally? If yes, which factors cause this?
2: Is it part of our responsibility to ask these external clients to run tests against our APIs to check the performance?
I am a bit confused, so any help would be appreciated. Thanks and have a brilliant day!
-
This depends on your definition of "significant". A client calling your API from another part of the globe might incur, on average (very roughly, YMMV), another 100-200ms of latency, which is enough to matter in some contexts but not others. The latency variance will also be a lot higher over the internet, so it's worth considering the impact of much higher tail latencies, measured in seconds.
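To make the tail-latency point concrete, here is a small sketch (the sample numbers are illustrative assumptions, not measurements) that summarises latency samples into median and tail percentiles using only the Python standard library:

```python
import statistics

def latency_summary(samples_ms):
    """Summarise latency samples (in ms) into median and tail percentiles.
    The tails are usually where internet-path variance shows up."""
    qs = statistics.quantiles(samples_ms, n=100)  # 99 cut points
    return {
        "p50": statistics.median(samples_ms),
        "p95": qs[94],   # 95th percentile
        "p99": qs[98],   # 99th percentile
    }

# Hypothetical samples: an internal test vs. the same API with ~150 ms
# of extra network latency and one multi-second outlier.
internal = [20, 22, 25, 21, 23, 24, 20, 26, 22, 25]
external = [s + 150 for s in internal[:-1]] + [2400]

print(latency_summary(internal))
print(latency_summary(external))
```

The median barely moves between the two runs, but the p99 tells a very different story, which is why looking only at averages from an internal test can be misleading.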
-
I would suggest it's usually the client's responsibility to monitor the latency of your API from their perspective, if this is important to them. If it isn't, there's probably not much point in asking them to do it. However, I'd also recommend capturing latency metrics from your API on the server side.
One other factor that might matter - did you do your internal testing against HTTP or HTTPS?
HTTPS can be quite a large overhead, particularly when you have internet round trips to consider, so it's usually a good idea to test with this turned on.
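Since the original poster mentioned Taurus, a minimal config along these lines would exercise the HTTPS endpoint under load (the URL, concurrency, and durations here are placeholder assumptions, not the poster's actual setup):

```yaml
execution:
- concurrency: 50        # assumed number of virtual users
  ramp-up: 1m
  hold-for: 5m
  scenario: clients-api

scenarios:
  clients-api:
    requests:
    - https://api.example.com/clients/123   # hypothetical HTTPS endpoint
```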
Yes we are using HTTPS thanks for the advise, I will look into it
@tomakehurst When you say "capturing latency metrics from your API on the server side", do you mean what we are doing now: running tests where everything happens behind our firewall and the calls are coming from inside the house?
I'd recommend measuring latency (plus throughput and error rate) in at least your performance-testing and live environments, via your monitoring system. This means that a) you can use it to determine the health of your live system, and b) you can observe the server's perspective of performance when load testing. Sometimes the load tool and the monitoring will tell you different stories, and this can be very useful in finding the root causes of problems.
Nearly all modern monitoring tools have a way to fairly easily get at these metrics from HTTP servers, so assuming you have a tool in place it shouldn't be too hard to enable.
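As a rough sketch of what "server-side latency capture" means in practice (a hypothetical, framework-agnostic helper, not any particular monitoring tool's API): wrap each handler, record the wall-clock duration, and feed the numbers to whatever monitoring system is already in place.

```python
import time
from collections import defaultdict

# In-memory stand-in for a real metrics backend.
metrics = defaultdict(list)

def timed(endpoint):
    """Decorator that records each call's duration (ms) under an endpoint label."""
    def decorator(handler):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return handler(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                metrics[endpoint].append(elapsed_ms)
        return wrapper
    return decorator

@timed("GET /clients/{id}")
def get_client(client_id):
    # Stand-in for the real request handler.
    return {"id": client_id}

get_client(42)
print(metrics["GET /clients/{id}"])
```

The key point is that these numbers come from the server's own clock, so they stay comparable whether the traffic originates inside the firewall or from the public internet.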
You want to create a realistic performance test, so yeah, try running it from another network.
Imagine you are on crappy Wi-Fi (free Wi-Fi at a café, for example) or you are at home browsing on a 1 Gbps line. I think you'll see the difference already. It's not all about measuring your internal network; you want to see how your app behaves when it's on a "lesser" network. Is your application going to be available like a regular website? Try testing through 3G / 4G as well.
For the rest, tomakehurst's comment sums it up!