Fixed request rate load testing

When you’re testing the performance of an API, what do you fix and what do you allow to load up unconstrained?

I recently reviewed some testing from a dev on my team. They had initially measured the production workload and observed 11 requests per second. They then designed a test with 11 users, a spawn rate of 1 user per second, and an unconstrained request rate.

The graphs clearly showed that the RPS was way higher than the production load, at around 130 RPS. So does the data gathered actually help us, when it tells us the response at ~130 RPS rather than at 11?

I’ve suggested running the tests again with a 1 second wait time between requests. While this won’t give a totally fixed 11 RPS rate, it should give us a more realistic load.
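As a quick sanity check on that suggestion (a sketch, with made-up response times, not figures from our system): each simulated user's loop takes the wait time plus the response time, which is exactly why the achieved rate drifts below 11 RPS and sags further as the system slows down.

```python
# Sketch: why N users with a fixed wait don't give a fixed N RPS.
# Each user's loop takes (wait + response time), so throughput is
# users / (wait + response_time), not users / wait.

def approx_rps(users: int, wait_s: float, avg_response_s: float) -> float:
    return users / (wait_s + avg_response_s)

print(round(approx_rps(11, 1.0, 0.05), 1))  # 10.5 -> close to the 11 RPS target
print(round(approx_rps(11, 1.0, 0.50), 1))  # 7.3  -> rate sags as responses slow
```

This is also a reason some tools offer a "fixed throughput" pacing mode, which subtracts the response time from the wait to hold the rate steady.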

I’m keen to know: what do you fix when you’re doing performance testing, and what do you leave unconstrained?


This is a common mistake in performance testing: “just put load on it, maybe add a small wait between requests”, and you end up with a totally unrealistic scenario. Think about it: how often do you fill in a form in 1 second? :stuck_out_tongue:

You basically want to build a realistic scenario. Say a user goes to a page and has to fill in a certain amount of data, and I would do it in 15 seconds. You’d need to wait 15 seconds in your performance test for that user. BUT… not everyone is ME. Some people will fill it in faster and some will fill it in slower.

I tend to use a Gaussian timer effect where, for example, I set the offset to 5 seconds. Meaning: if I ran this for 10 users, every user will wait somewhere between 15-5=10 and 15+5=20 seconds.
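A minimal Python sketch of that think-time idea, using the 15s/5s numbers from the example above (a uniform draw over the stated 10-20 second range; a JMeter-style Gaussian Random Timer would use a normal deviation instead):

```python
import random

def think_time(base_s: float, offset_s: float) -> float:
    """Pick a per-request wait somewhere in [base - offset, base + offset]."""
    return random.uniform(base_s - offset_s, base_s + offset_s)

# 15s base with a 5s offset: every wait lands between 10 and 20 seconds,
# so different simulated users naturally desynchronise from each other.
waits = [think_time(15, 5) for _ in range(1000)]
print(min(waits) >= 10 and max(waits) <= 20)  # True
```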

This creates a realistic scenario and your rates will be way more production-like, especially with more users. If there is a page with a ton of fields, and people from the business might take 60 seconds because it takes so long to fill in, then you HAVE to do the same (with some offset).

I hope this makes a bit of sense?


Example scenario:

1. GET API of Page 1: people look around a bit, ~7 sec with a 2 sec offset
2. Click a button, GET API of Page 2: people look around a bit, ~5 sec with a 2 sec offset
3. Click a button and fill in a form with 10 fields, POST API of Page 3: people look around and fill in the fields, ~15 sec with a 5 sec offset
4. Click a button, GET API of Page 4: people try to validate the things they’ve submitted, ~15 sec with a 3 sec offset
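That journey can be encoded as data in a test script. A sketch (endpoint names are placeholders, not real paths):

```python
import random

# (step, base think time in s, offset in s) for the four pages above
JOURNEY = [
    ("GET /page1",  7, 2),   # look around a bit
    ("GET /page2",  5, 2),   # look around a bit
    ("POST /page3", 15, 5),  # fill in the 10-field form
    ("GET /page4",  15, 3),  # validate what was submitted
]

def simulate_session() -> float:
    """Total think time for one simulated user walking the journey."""
    return sum(random.uniform(base - off, base + off)
               for _step, base, off in JOURNEY)

avg = sum(simulate_session() for _ in range(2000)) / 2000
print(round(avg))  # typically 42: the mean think time is 7 + 5 + 15 + 15
```

Keeping the steps as data makes it easy to re-tune the base times and offsets once you have measured how long real users actually spend on each page.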


Thanks, I think I need to study our production traffic in much more detail if I’m going to make a realistic simulation of traffic.

In my case, I know the 11 RPS to this one API endpoint doesn’t represent 11 users. We have way more users, but not everything they do hits this one endpoint.
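One way to reason about that gap (all figures except the 11 RPS are invented for illustration): back out the per-user rate on the endpoint from the observed RPS and the real concurrent user count.

```python
# 11 RPS observed on the endpoint; suppose (hypothetically) 500
# concurrent users are active in production overall.
observed_rps = 11
concurrent_users = 500

per_user_rpm = observed_rps * 60 / concurrent_users
print(per_user_rpm)  # 1.32 -> each user hits this endpoint ~1.3 times/minute
```

A per-user rate like this is what you would feed back into the scenario design: it tells you how often a simulated user should touch this endpoint within their wider journey.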

At some point I’ll design a more realistic system test, not just a targeted endpoint test.


I don’t know how big your application is, but make sure that “100 users in production” actually means 100 users on that endpoint. What I mean is: in a big application, some users might only do parts 1-3 of a flow, and some might just log in to check some details (say 10%).
Some might do a full E2E flow, so you’ll have to divide your concurrent users over all those processes.
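A sketch of dividing users over flows (the flow names and percentages here are invented, with only the 10% login-only share taken from the text above):

```python
def split_users(total_users: int, weights: dict[str, float]) -> dict[str, int]:
    """Divide concurrent users over flows by their share of real traffic."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {flow: round(total_users * share) for flow, share in weights.items()}

mix = split_users(1000, {
    "login_and_check_details": 0.10,  # the 10% mentioned above
    "full_e2e_flow": 0.25,
    "partial_flow_steps_1_to_3": 0.65,
})
print(mix["login_and_check_details"], mix["full_e2e_flow"])  # 100 250
```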

Which might reveal some interesting findings!

Story time:
I once went to a client who had performance tests but couldn’t find their performance issue.

So what they had was individual tests for each flow:

Flow 1: 100 users
Flow 2: 1000 users
Flow 3: 250 users

And testing each flow individually went perfectly; all response times were within the accepted non-functional requirements.

So I mixed them up: I ran Flow 1 and Flow 2 together and BANG.
Due to a back-end process that runs when Flow 1 happens, Flow 2 started to throttle and things got messed up. So, long story short: mix and measure! :stuck_out_tongue:


Thanks! That makes a lot of sense, and I can see the value of testing a mix of different concurrent users, who are doing their own thing.

We are a little while away from that kind of test with what we have now. I’m planning to improve the very basic test my dev teammate already introduced. I’m very grateful he made this start while I was away.


It’s super cool to see a developer picked this up by himself! Kudos to him! :slight_smile:

What kind of performance tests are you running (or planning to make in the future)? Load/Stress/Recovery/Endurance…?

If you need any insights in the future, let me know! I love working on scenarios like this :slight_smile:
