How do you start performance testing for apps not yet in production?

Hey All!

In my workplace we started a performance testing initiative to monitor interesting trends in our product, such as load handling and CPU and memory usage. This is a new application that has not been in production yet, so I don’t have much to go on regarding what kind of load or usage to expect from real users.

I was just wondering what your thoughts are, or whether you have ideas on how other companies have approached performance testing for a new microservice or product.

Did you ever find your performance tests helpful in reducing the number of production incidents?
What kinds of processes did you put in place for this?

Thanks for reading and have a great day! :grinning:

Tetra Quality


My suggestion for building a load profile is to go to the business. They should be able to

  • give some rough estimates about usage:
    “We expect that 4,000 people will use it during a week.” or
    “About a third of the users of feature A will use this new feature.”
    It is important to estimate the number of uses per hour.
  • give some information about peak usage:
    “In the week of Easter we expect usage to triple.”
    My advice is to add a safety margin (a percentage) on top of this number for the load.
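As a rough sketch of turning those business figures into a target request rate (the 4,000 weekly users and the tripling at Easter come from the examples above; the business hours, actions per user, and safety margin are assumptions):

```python
# Sketch: derive a target load from rough business estimates.
# The weekly-user count and peak multiplier come from the examples above;
# the other constants are assumptions for illustration.

WEEKLY_USERS = 4_000             # "4,000 people will use it during a week"
BUSINESS_HOURS_PER_WEEK = 5 * 8  # assume usage concentrates in office hours
ACTIONS_PER_USER = 10            # assumed average actions per visit
PEAK_MULTIPLIER = 3              # "in the week of Easter usage will triple"
SAFETY_MARGIN = 0.25             # extra percentage on top, as advised

def target_rate_per_hour(weekly_users, hours, actions, peak, margin):
    """Baseline actions/hour, scaled for peak usage plus a safety margin."""
    baseline = weekly_users * actions / hours
    return baseline * peak * (1 + margin)

rate = target_rate_per_hour(WEEKLY_USERS, BUSINESS_HOURS_PER_WEEK,
                            ACTIONS_PER_USER, PEAK_MULTIPLIER, SAFETY_MARGIN)
print(f"Target load: {rate:.0f} actions/hour ({rate / 3600:.2f} per second)")
```

With these example numbers the target works out to 3,750 actions per hour; the point is that even very rough business figures give you a defensible starting rate to test against.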

Features already in use can give some indication of what infrastructure you will need.
“The new feature is like feature A, but it will do a bit more calculation.”

If a product must be tested with several features, then click paths must be determined.
“After using feature A there is a 40% chance that feature A will be used again and a 60% chance that feature B will be used.”
These paths can be implemented in the performance test. It is not always possible to implement all click paths; in that case, the paths that consume the most resources should be implemented.
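Those click-path probabilities can be simulated as a tiny Markov chain. A minimal sketch, taking the 40%/60% split from the example above (the return-to-A behaviour after feature B is an assumption):

```python
import random

# Transition probabilities after using feature A, from the example:
# 40% chance of A again, 60% chance of B.
TRANSITIONS = {
    "A": [("A", 0.4), ("B", 0.6)],
    "B": [("A", 1.0)],  # assumed: after B the user returns to A
}

def next_feature(current, rng):
    """Pick the next feature according to the transition probabilities."""
    r = rng.random()
    cumulative = 0.0
    for feature, p in TRANSITIONS[current]:
        cumulative += p
        if r < cumulative:
            return feature
    return TRANSITIONS[current][-1][0]

def simulate_click_path(length, rng=None):
    """Generate one user's click path of the given length, starting at A."""
    rng = rng or random.Random()
    path = ["A"]
    for _ in range(length - 1):
        path.append(next_feature(path[-1], rng))
    return path

print(simulate_click_path(8, random.Random(42)))
```

Each virtual user in the load test can follow a path generated this way, so the mix of requests hitting each feature matches the estimated probabilities.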

Performance tests can be helpful to give a proper indication of what to expect in production. The biggest challenge is to interpret the test results. If a CPU is at 95%, is that good? I do not know. You need a good performance tester to help you.

In the past I coordinated a performance test for a website. I used a hybrid approach: the users were simulated by the performance test tool, while the website administrators were real people who performed a lot of actions in the background. I started on Friday, and the weekend backup was also included in the test window. On Monday the website administrators experienced poor performance.

At one company, a nightly performance test was run with the most production-like input they could find. The application was automatically built, deployed, and exercised. Early the next morning, developers would look at the data and trends.


Yep, all of this. And then you ideally need a pre-production environment that matches production closely enough to give you useful results. We call this our staging environment, and it is set up to mirror production as much as possible.


Ahh, performance testing, my favorite kind of testing for really breaking things.

At my last job the test env was so different from PRD that we ended up doing the performance test on PRD at night, true YOLO-style. Since our new software was talking to a brand spanking new database it was considered “safe”. We ended up breaking that database, so the test was def worth it. After that, the managers were paying attention and shit got done to improve that database. I’m not sure if I’d recommend this method…but if your test env is wildly different from PRD, what do you actually learn if you do the test on TST? Tough choices. (In case you’d like to know, the tool we used here was Gatling, which is programmed in Scala. We ran the performance tests against our backend, a Kotlin micro-service hipster thingy.)

I also have a big fail story in this regard. This was 7 years ago; my team was doing all the performance tests and everything seemed fine. We went live and shit went kaput because we FORGOT to performance test a mail-server. In our software, people had to sign up for an account and then click a confirmation link in an email that was sent to them…mail server choked under the load and the mails took up to 7 hours to arrive, people were big mad. “Well, what the fuck do I do with this story?”, I hear you think. This is my invitation to you to consider the scope of your performance testing strategy. Are there things in the periphery that you might forget that can have a big influence?

Performance testing is extremely hard to do well, in my experience. So many factors can influence the outcome, and it’s hard to pinpoint whether you can do anything about them. Good luck!


One of the hardest elements for me is determining what satisfactory performance looks like.

Automation is so used to binary pass/fail, whereas performance is a subjective entity.

You can’t necessarily ask a stakeholder how quickly a page will load - I doubt they’d know and, to be honest, if they gave me an answer, I’d ask why that number?

I would say, in the absence of requirements, to consider performance as you go along and try to build usage profiles to test with. Then, testing can work on a baselining premise - test what you have, present the findings and then it can be determined if that’s satisfactory. If so, then you could consider degradations as a “fail”.
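That baselining premise can be sketched as: record an accepted baseline run, then treat only degradation beyond a tolerance as a “fail”. A minimal sketch, assuming hypothetical metric names and a 15% tolerance:

```python
# Sketch: flag degradations against an accepted baseline rather than
# against an absolute pass/fail number. All values here are assumptions.

BASELINE = {"p95_ms": 420, "error_rate": 0.002}  # accepted earlier run
TOLERANCE = 0.15                                 # allow a 15% regression

def check_against_baseline(current, baseline, tolerance):
    """Return the metrics that degraded beyond the tolerance."""
    degraded = []
    for metric, base_value in baseline.items():
        if current[metric] > base_value * (1 + tolerance):
            degraded.append(metric)
    return degraded

latest = {"p95_ms": 510, "error_rate": 0.002}
print(check_against_baseline(latest, BASELINE, TOLERANCE))
```

Here 510 ms exceeds the 420 ms baseline by more than 15%, so only that metric would be flagged; whether that is acceptable is still a conversation with the stakeholders, not an automatic verdict.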

UI performance is also a big thing that tools often neglect - perceived performance is a big topic on its own relating to user experience and is less about the raw request/response times that performance testing tools work with.


Thanks for sharing Han! I’ll keep these in mind.

There are a lot of external integrations in this project, like an email delivery system and other systems, so I feel it might be difficult to decide which user processes to isolate, or to get good results out of them.

You are right that interpreting the test results is the challenge; I am getting that feeling now with some initial results, and I’m unsure whether the data I’m using reflects the expected request load. I don’t have much experience reading these trends. Luckily we have some performance testers in the workplace, but they are usually spread thin helping different product teams.

Cool, thanks for sharing!


Very entertaining stories Maaike, that prod experience sounds insane! Thanks for sharing and for the advice! This project is definitely a beast of its own, with several types of integrations. I’m finding it hard to know what to isolate or include in my performance test strategy, because several use cases span many systems and there are only a few more weeks, tops, to look at this :frowning:

And we are considering Gatling for our tests! I’m hoping my learning experience will be transferable and help train some of our devs to write these too!

@moorpheus - Very true, thanks for the advice; I agree with starting with the baseline approach. As for front-end performance, that’s definitely something to prioritize in a world of dynamic web applications written in beefy frameworks like Angular. Lots of processing is done on the front end nowadays, which we will also need requirements for.

Hey Tetra Quality,

My team has created a tool for this kind of mobile performance testing called Apptim. It’s a multi-platform CLI tool (it can be installed on Linux, Windows, or Mac) that runs in CI/CD using real devices (150+ different Android and iOS devices to choose from). You can use it for manual testing, and when you are done it generates a report of what it finds, measuring app render times, power consumption, resource usage, crashes, etc. I invite you to try it!


Very interesting topic!

Performance testing benchmarks are always a challenge. This can be addressed as follows:

First, your customer can give you ballpark figures for their benchmarks based on similar applications in the market. For example, if a similar application in the market can handle 200 users at a time, the natural inclination of your customer would be to handle at least that many users, or more, for competitive advantage. If there are no such benchmarks from the market, then your customer can set the expectations themselves, being the pioneer in the market.

Next, based on the performance levels expected (number of users, number of screens that need to be active, number of API requests that can be processed at a time, etc.), you can come up with a table of load and stress expectations for CPU, memory, storage access, and so on. For example, something like:

No. of users logged in   CPU   Memory   Storage
25                       40%   25%      10%
50                       60%   40%      25%

and so on, for each of the performance parameters. If you don’t have such expectations upfront, you can go with what your application is doing right now, and see if the parameters are in the acceptable range.
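Such a table can be encoded directly and checked against measured values. A small sketch using the numbers from the example table (the rule for picking which row applies is an assumption):

```python
# Expectation table from the example above: for a given number of
# logged-in users, the maximum acceptable CPU/memory/storage usage.
EXPECTATIONS = {
    25: {"cpu": 0.40, "memory": 0.25, "storage": 0.10},
    50: {"cpu": 0.60, "memory": 0.40, "storage": 0.25},
}

def within_expectations(users, measured):
    """Check measured usage against the nearest expectation row at or
    above the given user count; return the metrics that exceed limits."""
    eligible = [u for u in sorted(EXPECTATIONS) if u >= users]
    row = EXPECTATIONS[eligible[0]] if eligible else EXPECTATIONS[max(EXPECTATIONS)]
    return [m for m, limit in row.items() if measured[m] > limit]

print(within_expectations(25, {"cpu": 0.35, "memory": 0.30, "storage": 0.05}))
```

In this run the memory usage (30%) exceeds the 25% expectation for 25 users, so it would be the one metric flagged for investigation.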

Finally, you meet or outperform those expectations as your software keeps improving in terms of code quality, performance tweaks, infrastructure, operating system support, and so on.

I hope that helps. Please do let me know if you have further questions!



Very interesting and useful, thank you!


Salesforce performance testing should be an integral part of a product’s testing regimen, right from the beginning. Software can succumb to a wide range of performance problems that impact user acceptance. Performance testing teams can find and remediate issues as early as possible with regular testing throughout the application lifecycle.
Performance testing should capture metrics on an application to ensure that it works within acceptable parameters, including:

  • speed
  • response times
  • latencies
  • resource usage
  • reliability

The more closely the testing environment matches the production environment, the more accurate the performance benchmarking results. However, creating a test environment that is an exact replica of the production environment is practically impossible.
Although most companies avoid testing in production (TiP) based on the potential impact on real-world user activities and data, testers can reduce the impact by following team-based and process-based best practices.

TiP ensures that the:

  • Expected load is supported by the live environment
  • End-to-end user experience is acceptable
  • Network equipment or the CDN can adequately handle the anticipated load

When it comes to testing in production, testers need to proactively monitor the application under test. By monitoring, we are not referring to retrieving technical counters from the architecture, but to measuring end-user performance on a regular basis. Synthetic monitoring, for example, has the advantage of allowing QA to run a single user journey from several locations while alerting testers about abnormal response times. Monitoring helps operations identify and resolve production issues before real users have to detect them.
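The core of such a synthetic check is small: time one scripted user journey and alert if it exceeds a threshold. A minimal sketch, where the journey function and the 2-second threshold are placeholders (a real check would drive HTTP requests or a headless browser):

```python
import time

ALERT_THRESHOLD_S = 2.0  # assumed acceptable journey duration

def run_user_journey():
    """Placeholder for one scripted user journey (e.g. login, search,
    checkout). Here it just simulates a small amount of work."""
    time.sleep(0.01)

def timed_check(journey, threshold):
    """Run one synthetic journey and report whether it breached the threshold."""
    start = time.monotonic()
    journey()
    elapsed = time.monotonic() - start
    return {"elapsed_s": elapsed, "alert": elapsed > threshold}

result = timed_check(run_user_journey, ALERT_THRESHOLD_S)
print(result["alert"])
```

Scheduling this from several locations (cron jobs, CI runners, or a monitoring service) and alerting when `alert` is true gives you the early-warning signal described above without touching real user data.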

Also, if the environment of the application under test is the same as it will be in production, the approach should not be different. However, if the environment under test is different - i.e. it has fewer servers, the servers have less memory, etc. - then in the vast majority of cases you will not be able to calculate and predict performance on more powerful hardware, as there are too many factors to consider.