I'd suggest starting with the question(s) you'd like to have answered, which for me has usually been some combination of:
Does this app feel fast enough to end users for a pleasant and productive experience?
Can the system handle the level of demand we expect during a major event, e.g. Black Friday?
Are the system's invariants maintained when it is pushed near its saturation point? Does it lose or corrupt data? Does it remain secure?
How much CPU/RAM/other resource do I need per n users in order to provide a good level of service?
If you're focussed on the first of these, you'd probably end up spending most of your time in tools like Lighthouse and the Chrome performance tab. I've found it helpful in the past to try to figure out benchmarks from other, similar systems to determine what you should consider "good" performance. E.g. if you're building an eCommerce website, you'll want to ensure you're at least in the same latency ballpark as your competitors for product page loads, image rendering, search etc.
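To make the "same latency ballpark" comparison concrete, here's a rough Python sketch of the kind of percentile summary you might compute over a batch of page-load timings. The sample numbers and the competitor threshold are invented purely for illustration:

```python
# Hypothetical helper: summarise page-load latency samples (in ms) so you can
# compare your numbers against a competitor's ballpark figure.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

def summarise(samples):
    """Median and tail latency; the tail is usually what users notice."""
    return {"p50": percentile(samples, 50), "p95": percentile(samples, 95)}

# 20 synthetic product-page load times in ms (made up for this example)
samples = [180, 210, 195, 400, 220, 205, 190, 185, 230, 950,
           200, 215, 198, 187, 192, 240, 260, 210, 205, 199]
stats = summarise(samples)
competitor_p95_ms = 600  # assumed benchmark, not a real figure
print(stats, "in ballpark:", stats["p95"] <= competitor_p95_ms)
```

Comparing p95 rather than the mean matters here: a handful of slow outliers (like the 950ms sample) can hide behind a healthy-looking average.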
Non-functional requirements are something nobody knows when you ask about them. Before you start performance testing, you have to make clear what the requirements are. If you test the performance and your API takes 100ms with 100 users, you still don't know whether that's good enough or not.
Many people ask "can you performance test this for us", but it's never clear what kind of performance test they want, so clear that up before starting to write anything. Do they want a stress test, load, endurance, peak, volume, … etc.?
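As a rough illustration of how those test types differ mainly in the *shape* of the load they apply, here is a hypothetical sketch. The names, shapes, and numbers are a simplification for discussion, not a standard:

```python
# Illustrative load shapes for common performance test types.
# Each returns the virtual-user count for every minute of the run.

def load_profile(kind, peak_users, duration_min):
    if kind == "load":       # ramp up to the expected peak, then hold
        ramp = max(1, duration_min // 4)
        return [min(peak_users, peak_users * (t + 1) // ramp)
                for t in range(duration_min)]
    if kind == "stress":     # keep ramping well past the expected peak
        return [peak_users * 2 * (t + 1) // duration_min
                for t in range(duration_min)]
    if kind == "spike":      # sudden jump to peak in the middle, low load otherwise
        third = duration_min // 3
        return [peak_users if third <= t < 2 * third else peak_users // 10
                for t in range(duration_min)]
    if kind == "endurance":  # hold a steady load for the whole (long) run
        return [peak_users] * duration_min
    raise ValueError(f"unknown test type: {kind}")
```

The point of pinning this down up front is that a "stress" run deliberately exceeds the expected peak, while an "endurance" run never does; the same script with the wrong shape answers the wrong question.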
NFRs sometimes feel like a chicken-and-egg scenario, since in most cases a business or procurement team will have little idea of what a sensible target is and may not even have a mock-up of an app to be able to work out the number of scenarios to cover. The literature always covers something easy like "a login" or "buying a product from an online store"!
Performance testing is one scenario that needs more planning and wider broadcast than almost any other; perhaps only beta testing (which some companies run as an event) compares, if I recall.
Go through the requirements doc and find:
What is the expected response time for the end user?
What is the peak or max load expected over a defined longer duration (i.e. 1, 2, or 5 years)? This will decide your max-load test and the pattern on which the user base is growing.
This will give shape to your performance code.
A few more:
Do we have the right monitoring and logging systems in place to analyse app logs, system health, client-side data, and geo-location variance (where applicable)?
Do we have dependent systems? When we push the load during testing, will they be able to handle the requests? What's the alternative; maybe mocks?
Is it imperative for the product owner to specify the performance requirements they are aiming to achieve via this type of testing? I've been using JMeter for simple tests to gauge response times, latency, etc., but I'm feeling stuck on how to move forward with timed requests.
Like everything else, you should have a strong why/problem you're trying to address. Performance testing just to tick a box is make-work.
Goals such as having a webapp that feels snappy/responsive are very different from testing whether a system can support some level of concurrent users, having SLO/SLA type numbers for APIs, throughput requirements, etc. I don't do a lot of performance testing, but usually it's either because someone has said something is slow and we're trying to figure out where the bottlenecks are (e.g. network, persistence, computation, etc), or we have actual SLO/SLA numbers we need to hit/maintain. Depending on what you're trying to solve or answer, the approach will vary wildly.
My team is facing exactly the same situation, so I started with basic training/videos on load/performance testing strategy and planning. There is a course on TestAutomationUniversity by Amber Race which provides a good idea.
Also, going through some case studies of performance improvements might give some insight into the approaches and implementations other teams have used.
Working with devs/architects to analyse an APM tool or production logs can help the team work towards NFRs.
And, explore JMeter or other tools while you do all this.