General/Common performance requirements

Hello,
I am trying to create a list of general performance requirements that are always applicable to every performance test we run, such as conventions regarding response time for frontend/backend applications.
Can you help me with some of these requirements? Thanks.

This is going to be entirely dependent on your application. For example, we set an expectation that there is “a response” within 1 second at most across the board, but sometimes that response could be the appearance of a loading spinner for a bulk/complex operation that is going to take minutes to complete.

The other side of this is how your application handles load. Major load could be 10 concurrent users for some small apps, and millions of concurrent users for big ones - again, it’s entirely contextual.

This is a UX problem more than a testing one - performance testing can show you what your application does, but it can't necessarily speak to what it should do, what your users expect it to do, what similar applications do, etc.

2 Likes

First and foremost, I think you are on the wrong quest. Trying to find something that is always applicable no matter what is a good way to set yourself up for failure.

But since you requested it, I will try to unpack your topic and see if you can find anything useful in it.

Performance is a multidimensional beast with many purposes, some of which are:

  1. Leaking - Does our application run without any unintentional side effects that accumulate over time?
  2. Scaling - Can our application handle the load without failures?
  3. Experience - Does the application respond quickly enough not to interfere with user actions?

Let's start with the easiest one: leaking. This is the one where it is easiest to set up a rule saying “no leaks”, so you can test that your application never leaks under any circumstance. In reality I think you should set a requirement saying that the application's leaks are significantly smaller than the expected reset rate. I.e. if the leak will take more than a month to cause a problem under the expected usage, and you reset the application every day due to upgrades/restarts or whatever, then that leak is not a problem.
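To make that rule concrete, here is a minimal Python sketch of such a check; the safety factor, memory budget, and function name are hypothetical assumptions, not something from my actual projects:

```python
# Minimal sketch (all numbers and names are hypothetical): a leak is tolerable
# if exhausting the memory budget would take far longer than the time between
# routine restarts/upgrades.

SAFETY_FACTOR = 30  # require 30x headroom between leak lifetime and reset interval


def leak_is_acceptable(leaked_mb_per_hour: float,
                       memory_budget_mb: float,
                       reset_interval_hours: float) -> bool:
    if leaked_mb_per_hour <= 0:
        return True  # no measurable growth
    hours_to_exhaustion = memory_budget_mb / leaked_mb_per_hour
    return hours_to_exhaustion >= SAFETY_FACTOR * reset_interval_hours


# Example: ~2 MB/h growth, a 4 GB budget, and daily restarts.
# 4096 / 2 = 2048 h to exhaustion vs. 30 * 24 = 720 h required -> acceptable.
print(leak_is_acceptable(2.0, 4096.0, 24.0))  # True
```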

Scaling is not a right-or-wrong kind of thing. What you need to define is what constitutes a failure and what your accepted failure rate is. For most applications in the world, a 100% success rate is way too expensive. As an example of a failure: I was working with a telephone system, and it was shown that if the sound delay rises above 500 ms, people start to change how they talk to each other. You do not know whether the other person heard you, because the acknowledgement was too slow, so you start to repeat yourself at the same time as the other person responds. So in that system, a delay above 500 ms is a failure.
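Here is a minimal sketch of expressing that as a pass/fail check; the 500 ms threshold echoes the telephony example, while the accepted failure rate is an illustrative assumption:

```python
# Minimal sketch: a response slower than the threshold counts as a failure,
# and the requirement passes if the failure rate stays within the accepted rate.
# The 0.1% accepted rate is an assumption, not a universal rule.

FAILURE_THRESHOLD_MS = 500.0
ACCEPTED_FAILURE_RATE = 0.001  # at most 0.1% of requests may fail


def meets_requirement(latencies_ms: list[float]) -> bool:
    failures = sum(1 for latency in latencies_ms if latency > FAILURE_THRESHOLD_MS)
    return failures / len(latencies_ms) <= ACCEPTED_FAILURE_RATE


print(meets_requirement([120.0] * 999 + [850.0]))         # True: 1/1000 = 0.1%
print(meets_requirement([120.0] * 998 + [850.0, 900.0]))  # False: 2/1000 > 0.1%
```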

User experience is similarly not a universal thing. The patience a user has for a certain operation varies a lot. In an online store we can measure the number of people giving up (because it is too slow), and from a business point of view you want to decide what an acceptable number is in comparison to the cost of making it faster. For a medical system, accuracy might be more important than speed, and for a banking system, security is typically more important than usability. It is all about trade-offs.
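As a rough illustration, this is the kind of measurement that business decision could be based on; the session data here is entirely made up:

```python
# Minimal sketch (hypothetical data): group sessions by page load time and
# report the abandonment rate per bucket, as input to the speed-vs-cost trade-off.

from collections import defaultdict

# (page_load_seconds, abandoned) pairs, e.g. from real-user monitoring
sessions = [(0.8, False), (1.2, False), (1.9, False), (2.5, True), (3.1, True)]

by_bucket: dict[int, list[bool]] = defaultdict(list)
for load_seconds, abandoned in sessions:
    by_bucket[int(load_seconds)].append(abandoned)

for bucket, outcomes in sorted(by_bucket.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"{bucket}-{bucket + 1}s: {rate:.0%} abandoned ({len(outcomes)} sessions)")
```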

With all of that in mind, and given that you need to specify which dimensions matter most to you, two things that you can apply in a lot of situations are:

  1. No unintentional side effects - no errors in the logs, no leaking of resources, etc.
  2. No unintentional degradation in performance - the next iteration of the application should be on par with the previous one (given that the change was not specifically targeting performance); see the sketch after this list.
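As a sketch of rule 2, here is one way to flag an unintentional degradation by comparing a new build against a stored baseline; the tolerance and sample data are assumptions:

```python
# Minimal sketch (tolerance and data are hypothetical): the new iteration is
# "on par" if its median latency is within 10% of the previous baseline.

import statistics

TOLERANCE = 1.10  # allow up to 10% slowdown before flagging a regression


def no_unintended_regression(baseline_ms: list[float],
                             current_ms: list[float]) -> bool:
    # Compare medians rather than means so a few outliers don't dominate.
    return statistics.median(current_ms) <= TOLERANCE * statistics.median(baseline_ms)


baseline = [210.0, 198.0, 205.0, 220.0, 201.0]  # previous release
current = [230.0, 245.0, 238.0, 226.0, 241.0]   # candidate release
print(no_unintended_regression(baseline, current))  # False: median slowed > 10%
```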
2 Likes

I almost wish I could give you two likes, so this is as close as I can get.

2 Likes

You may also refer to this article.

1 Like

🙂 It would not be the first time, and it won't be the last ☹️

This is exactly what I was looking for, [srinivas1]. Of course it is a mistake to have specific requirements that apply to all performance tests, but there are guidelines that one always needs to take into account 🙂

One question: when you talk about environments, are you referring to where the system under test is located, or where the load machines are located? I guess that the load machines are not located in the same environment as the SUT.