Performance Testing: What to Consider Before Starting?

When I started performance testing for a product I was working on, I remember thinking “well this is slow”. But I didn’t know where to start.

I used Lighthouse to give me an idea of what exactly was slow, the low-hanging fruit so to speak.

When it came to running JMeter scripts though, I wasn’t sure exactly what the goal should be, and neither was the team.

What are some things you’d consider before starting with the performance tool? What questions would you ask to help guide your approach?

1 Like

I’d suggest starting with the question(s) you’d like to have answered, which for me has usually been some combination of:

  • Does this app feel fast enough to end users for a pleasant and productive experience?
  • Can the system handle the level of demand we expect during a major event e.g. Black Friday?
  • Are the system’s invariants maintained when it is pushed near its saturation point? Does it lose or corrupt data? Does it remain secure?
  • How much CPU/RAM/other resource do I need per n users in order to provide a good level of service?

If you’re focussed on the first of these, you’d probably end up spending most of your time in tools like Lighthouse and the Chrome performance tab. I’ve found it helpful in the past to try to figure out benchmarks from other, similar systems to determine what you should consider “good” performance. For example, if you’re building an eCommerce website, you’ll want to ensure you’re at least in the same latency ballpark as your competitors for product page loads, image rendering, search, and so on.
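
As a rough sketch of what that benchmarking could look like in practice, you could script Lighthouse runs against one of your own pages and a competitor’s and compare a metric such as Largest Contentful Paint. This assumes the Lighthouse CLI is installed and on your PATH, and the URLs below are placeholders:

```python
# Rough sketch: run the Lighthouse CLI against a few pages and compare
# Largest Contentful Paint. Assumes `lighthouse` is installed; the URLs
# are placeholders for your own pages and a competitor's.
import json
import subprocess
import tempfile

PAGES = {
    "ours": "https://example.com/product/123",
    "competitor": "https://example.org/product/456",
}

def lcp_ms(url: str) -> float:
    """Run Lighthouse headlessly and return Largest Contentful Paint in ms."""
    with tempfile.NamedTemporaryFile(suffix=".json") as report:
        subprocess.run(
            [
                "lighthouse", url,
                "--output=json",
                f"--output-path={report.name}",
                "--quiet",
                "--chrome-flags=--headless",
            ],
            check=True,
        )
        data = json.load(open(report.name))
    return data["audits"]["largest-contentful-paint"]["numericValue"]

if __name__ == "__main__":
    for label, url in PAGES.items():
        print(f"{label}: LCP ~ {lcp_ms(url):.0f} ms")
```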

3 Likes

Non-functional requirements: this is something nobody seems to know when you ask about them. Before starting to performance test, you have to make clear what the requirements are. If you test the performance and your API takes 100 ms with 100 users, you still don’t know whether that’s good enough or not.
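
To make that concrete: the difference between “we measured 100 ms” and “we met the requirement” is simply an explicit target to compare against. A minimal sketch, where the 200 ms p95 target and the sample latencies are made up for illustration (in practice the numbers would come from your JMeter results file):

```python
# Minimal sketch: turn "is 100 ms with 100 users good enough?" into an
# explicit pass/fail check against an agreed NFR.
# The target and sample data below are made-up examples, not recommendations.
import statistics

NFR_P95_MS = 200.0  # e.g. "95% of requests under 200 ms at 100 concurrent users"

def p95(latencies_ms: list[float]) -> float:
    """95th percentile of observed response times in milliseconds."""
    return statistics.quantiles(latencies_ms, n=100)[94]

# In a real run these would be parsed from a JMeter .jtl file or your load tool.
latencies_ms = [87.0, 95.0, 102.0, 110.0, 130.0, 98.0, 240.0, 105.0, 99.0, 101.0]

observed = p95(latencies_ms)
print(f"p95 = {observed:.1f} ms, target = {NFR_P95_MS} ms, "
      f"{'PASS' if observed <= NFR_P95_MS else 'FAIL'}")
```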

People often ask ‘can you performance test this for us?’, but it’s never clear what kind of performance test they want, so clear it up before starting to write anything. Do they want a stress, load, endurance, peak, or volume test, etc.?
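
Those test types mostly differ in load shape and duration, so it helps to pin them down as concrete parameters before writing anything. A rough sketch of what that conversation could produce, where every figure is a placeholder to be agreed with the team:

```python
# Rough sketch: same system, different questions, different load shapes.
# All figures are placeholders to be agreed with the team, not defaults.
LOAD_PROFILES = {
    "load":      {"users": 100, "ramp_up_s": 300, "duration_s": 3600},      # expected peak
    "stress":    {"users": 500, "ramp_up_s": 600, "duration_s": 1800},      # push past the peak
    "endurance": {"users": 80,  "ramp_up_s": 300, "duration_s": 8 * 3600},  # soak for leaks
    "peak":      {"users": 300, "ramp_up_s": 60,  "duration_s": 900},       # sudden spike
    "volume":    {"users": 100, "ramp_up_s": 300, "duration_s": 3600,
                  "dataset_rows": 10_000_000},                              # large data set
}

for name, p in LOAD_PROFILES.items():
    print(f"{name}: {p['users']} users, ramp {p['ramp_up_s']}s, run {p['duration_s']}s")
```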

Kind regards
Kristof

5 Likes

NFRs sometimes feel like a chicken-and-egg scenario, since in most cases a business or procurement team will have little idea of what a sensible target is and may not even have a mock-up of an app to be able to work out the number of scenarios to cover. The literature always covers something easy like ‘a login’ or ‘buying a product from an online store’!

1 Like

Performance testing is one scenario that needs more planning and wider broadcast than almost any other; maybe beta testing (which is an event in itself for some companies) comes close, if I recall correctly.

Go through the requirements doc and find:

  • What is the expected response time for the end user?
  • What is the maximum load expected over a defined longer duration (e.g. 1 year, 2 years, 5 years)? This will decide the maximum load you test at and the pattern in which the user base is growing, which in turn gives shape to your performance scripts (see the sketch below).
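
As a small worked example of sizing that maximum-load test, you can project today’s peak forward by the expected growth rate. The starting figure, growth rate, and headroom factor below are invented purely for illustration:

```python
# Small worked example: project today's peak load forward to size the
# maximum-load test. The starting figure and growth rate are made up.
CURRENT_PEAK_RPS = 200   # requests/second at today's busiest hour
YEARLY_GROWTH = 0.30     # 30% user-base growth per year (assumption)
HEADROOM = 1.5           # safety factor on top of the projection

for years in (1, 2, 5):
    projected = CURRENT_PEAK_RPS * (1 + YEARLY_GROWTH) ** years
    print(f"{years} year(s): ~{projected:.0f} rps, "
          f"test at ~{projected * HEADROOM:.0f} rps with headroom")
```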

A few more:

  • Do we have the right monitoring and logging systems in place to analyse app logs, system health, client-side data, and geo-location variance (where applicable)?
  • Do we have dependent systems? When we push the load during testing, will they be able to handle the requests? What’s the alternative; maybe mocks (see the sketch below)?
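
If mocking a dependent system is the route you take, even a tiny stub that mimics the dependency’s typical latency keeps it from skewing the results. A minimal sketch using only the standard library, where the port, path, and 50 ms delay are arbitrary placeholders:

```python
# Minimal sketch of a mock for a downstream dependency: it returns a canned
# response with an artificial delay, so the system under test sees realistic
# latency without the real dependency taking the load.
# Port and delay are arbitrary placeholders.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIMULATED_LATENCY_S = 0.050  # roughly what the real service takes to respond

class MockDependency(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_S)
        body = json.dumps({"status": "ok", "source": "mock"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8099), MockDependency).serve_forever()
```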

Is it imperative for the product owner to specify the performance requirements they are aiming to achieve via this type of testing? I’ve been using JMeter for simple tests to gauge response times, latency, etc., but I’m feeling stuck on how to move forward with timed requests.

Like everything else, you should have a strong why/problem you’re trying to address. Performance testing just to tick a box is make-work.

Goals such as having a webapp that feels snappy and responsive are very different from testing whether a system can support some level of concurrent users, meeting SLO/SLA-type numbers for APIs, satisfying throughput requirements, etc. I don’t do a lot of performance testing, but usually it’s either because someone has said something is slow and we’re trying to figure out where the bottlenecks are (e.g. network, persistence, computation), or because we have actual SLO/SLA numbers we need to hit or maintain. Depending on what you’re trying to solve or answer, the approach will vary wildly.
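
When the goal is “someone says it’s slow, find the bottleneck”, one low-tech starting point is timing the individual stages of a request path and seeing which one dominates. A rough sketch, where the stage names and the sleeps standing in for real work are entirely hypothetical:

```python
# Rough sketch: time each stage of a request path to see where the time goes.
# The stages and their stand-in implementations are hypothetical placeholders.
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def timed(stage: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[stage] = (time.perf_counter() - start) * 1000  # ms

def handle_request():
    with timed("network_call"):
        time.sleep(0.02)   # stand-in for calling an upstream API
    with timed("db_query"):
        time.sleep(0.15)   # stand-in for the persistence layer
    with timed("rendering"):
        time.sleep(0.01)   # stand-in for serialisation/templating

handle_request()
for stage, ms in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{stage:>14}: {ms:6.1f} ms")
```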

2 Likes

My team is facing exactly the same situation, so I started with basic training/videos on load/performance testing strategy and planning. There is a course on TestAutomationUniversity from Amber Race which gives a good grounding.
Going through some case studies of performance improvements might also give some insight into the approaches and implementations other teams have used.
Working with devs/architects to analyse APM tools or production logs can help the team work towards NFRs.
And exploring JMeter or another tool while you do all this.

1 Like