Very interesting topic!
Performance testing benchmarks are always a challenge. This can be addressed as follows:
First, your customer can give you ballpark figures based on similar applications in the market. For example, if a comparable application can handle 200 concurrent users, your customer's natural inclination would be to handle at least that many users, or more for competitive advantage. If no such market benchmarks exist, your customer can set the expectations themselves, since they would be the pioneer in the market.
Next, based on the performance levels expected (number of users, number of screens that need to be active concurrently, number of API requests that can be processed at a time, etc.), you can come up with a table of load and stress expectations for CPU, memory, storage access, and so on. For example, something like:
| No. of users | CPU | Memory | Storage |
| ------------ | --- | ------ | ------- |
| 25           | 40% | 25%    | 10%     |
| 50           | 60% | 40%    | 25%     |
and so on, for each performance parameter. If you don't have such expectations upfront, you can measure what your application is doing right now and check whether the parameters stay within an acceptable range.
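To make the check concrete, here is a minimal sketch of validating measured utilization against an expectation table like the one above. All names, thresholds, and measurement values are illustrative assumptions, not from any particular tool:

```python
# Expected maximum utilization (%) per concurrent-user level,
# mirroring the expectation table (illustrative numbers).
EXPECTED = {
    25: {"cpu": 40, "memory": 25, "storage": 10},
    50: {"cpu": 60, "memory": 40, "storage": 25},
}

def check_load(users, measured):
    """Return the list of parameters that exceed the expected ceiling."""
    limits = EXPECTED[users]
    return [param for param, value in measured.items()
            if value > limits[param]]

# Example: at 50 users we measured 55% CPU, 45% memory, 20% storage.
violations = check_load(50, {"cpu": 55, "memory": 45, "storage": 20})
print(violations)  # only memory (45%) exceeds its 40% ceiling
```

In a real setup you would feed `measured` from your monitoring stack during a load-test run rather than hard-coding it.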
Finally, you meet or outperform those expectations as your software keeps improving: better code quality, performance tweaks, better infrastructure, better operating system support, and so on.
I hope that helps. Please do let me know if you have further questions!