API performance mapping

Hey

I’m interested to hear if the MoT community has any tips or insights on API performance mapping in a microservice environment.

I’d like to know about:

  • Approaches to measuring performance when you have multiple services talking to each other.
  • Mocking in performance tests and how it fits in.
  • What data you would want to have in order to make performance-related decisions in your automation framework.
  • Timeouts and retries. General discussion point (a rough sketch of what I mean is below this list).
  • General ‘rules of thumb’ with API performance testing.
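
To make the timeouts and retries point concrete, here is the kind of thing I have in mind, sketched in k6. The endpoint, timeout, and threshold values are all made up for illustration:

```typescript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,
  duration: '1m',
  thresholds: {
    // Example rule-of-thumb gate: fail the run if p95 latency exceeds 500 ms.
    http_req_duration: ['p(95)<500'],
  },
};

// Hypothetical endpoint; substitute your own service.
const URL = 'https://api.example.com/orders';

export default function () {
  let res = http.get(URL, { timeout: '2s' });
  // Client-side retry: up to 2 more attempts on a timeout (status 0)
  // or a 5xx response, with a crude linear backoff in between.
  for (let attempt = 1; attempt < 3 && (res.status === 0 || res.status >= 500); attempt++) {
    sleep(0.5 * attempt);
    res = http.get(URL, { timeout: '2s' });
  }
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```

Where retries should live (test script, client library, service mesh) and what the budgets should be is exactly the sort of discussion I’m after.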

… and any k6 / JMeter / LoadRunner / Gatling success stories would be a bonus. By this I mean meaningful changes being implemented as a result of showcasing poor performance with these tools. Your stories are welcome!


Hi Sam,
The answers to your first and second questions strongly depend on the nature of the services.
For the third, I won’t give exact measurements, but here are some ideas on how to define what you need. Divide it into three parts:
a) Resource usage and overloads. The exact measurements depend on the services, and this will give you benchmark and overload data.
Try to get as much information as possible about the infrastructure architecture; not all measurements are available on all systems.
b) Cross-service communications. Besides surfacing communication issues (where applicable in your architecture), these measurements can also point to problems within individual services.
c) Your solution-related measurements. For example, if it is an updates management service, how many updates are processed per minute? This requires a good understanding of your solution architecture; my advice is to work with an architect on it. A rough sketch of such a metric follows.
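
To illustrate (c), here is a rough k6 sketch of a solution-level metric. The /updates endpoint and the metric names are invented for the example:

```typescript
import http from 'k6/http';
import { Counter, Trend } from 'k6/metrics';

// Hypothetical solution-level metrics for an updates management service.
const updatesProcessed = new Counter('updates_processed');
const updateLatency = new Trend('update_latency_ms', true);

export const options = { vus: 5, duration: '1m' };

export default function () {
  // Hypothetical endpoint; replace with your own service.
  const res = http.post(
    'https://api.example.com/updates',
    JSON.stringify({ id: 1 }),
    { headers: { 'Content-Type': 'application/json' } },
  );
  if (res.status === 200) {
    updatesProcessed.add(1);                  // throughput: updates processed this run
    updateLatency.add(res.timings.duration);  // per-update processing time in ms
  }
}
```

At the end of a run, k6 reports the custom counter and trend alongside its built-in metrics, so you can derive updates per minute from the run duration.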

I have worked with both LoadRunner and JMeter, and personally I prefer JMeter. It can be easily integrated with an automation system, and it is very robust. On the other hand, my experience with LoadRunner is quite out of date.
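
For what it is worth, that integration usually comes down to JMeter’s non-GUI mode. A minimal sketch of driving it from a Node/TypeScript pipeline step, where plan.jmx and report/ are placeholders:

```typescript
import { execFileSync } from 'node:child_process';

// Run JMeter headless: -n (non-GUI), -t (test plan), -l (results file),
// -e -o (generate the HTML dashboard into a folder).
try {
  execFileSync('jmeter', [
    '-n',
    '-t', 'plan.jmx',      // placeholder test plan
    '-l', 'results.jtl',   // raw results for later analysis
    '-e', '-o', 'report/', // HTML report output
  ], { stdio: 'inherit' });
} catch {
  // A non-zero JMeter exit (e.g. a bad argument or a crashed run) fails this step.
  process.exit(1);
}
```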

Good luck
