Performance Testing Strategy - TestBash Brighton 2018 UnExpo

At TestBash Brighton I facilitated a poster stand at the UnExpo. My stand was titled ‘Performance Testing Strategy’ and I split my poster into 4 categories: Types of Tests, Metrics to Report, What to Test and Tools.

The eagle-eyed amongst you may spot an ‘Experts’ category on the poster, but my discussions didn’t really head in that direction; I added Mark Tomlinson (@markontask) there myself.

Better late than never, here’s a list of ideas that people came up with during the lunch break:

Types of Tests

  • Stable Load - a sustained load on the system under test. An ‘expected’ number of users.
  • Small Load - does the system meet our NFRs under a small expected load?
  • Stress - applying more than the maximum expected load to the system to identify the point of failure. Helps you understand how much your system can handle before it fails.
  • Soak - a test which runs for an extended period of time. Allows you to see how your system copes with sustained concurrent use.
  • Spike Test/Realistic Varied Load - does the system cope with going from a small number of users to a large number very quickly? (A rough sketch of these load shapes follows this list.)
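
To make those load shapes a little more concrete, here is a minimal, illustrative Python sketch of driving different profiles against a hypothetical endpoint. The URL, user counts and durations are placeholder assumptions, not recommendations, and a real harness (or a dedicated load testing tool) would do far more:

```python
# Illustrative load-shape sketch only; TARGET_URL and the numbers below are
# hypothetical placeholders for your own system under test and NFRs.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint

def one_request():
    """Hit the endpoint once and return the response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load(users, duration_s):
    """Fire batches of `users` concurrent requests for roughly `duration_s` seconds."""
    timings = []
    deadline = time.monotonic() + duration_s
    with ThreadPoolExecutor(max_workers=users) as pool:
        while time.monotonic() < deadline:
            batch = [pool.submit(one_request) for _ in range(users)]
            timings.extend(f.result() for f in batch)
    return timings

if __name__ == "__main__":
    run_load(users=10, duration_s=60)     # stable load: the 'expected' number of users
    run_load(users=2, duration_s=30)      # small load: baseline NFR check
    run_load(users=200, duration_s=120)   # stress: push past the expected maximum
    run_load(users=5, duration_s=30)      # spike: jump from a handful of users...
    run_load(users=100, duration_s=30)    # ...to a large number very quickly
```

A soak test is essentially the stable-load shape left running for a much longer period.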

Metrics To Report

  • CPU Usage
  • Memory Usage
  • Response Time
  • End-to-End scenarios - how long it takes to perform a specific user journey.
  • Trends - isolated metrics are great - but is our performance improving (or not) over time?
  • Pass or Fail - i.e. does it meet our NFR? (See the sketch after this list.)
  • Network Traffic
  • Disk IO
  • Number of passed/failed requests
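
As a rough illustration of reporting a few of these (response time percentiles, request counts and a pass/fail verdict against an NFR), here is a small Python sketch. The 500 ms 95th-percentile threshold is a made-up example value, not a recommendation:

```python
# Turn raw response times into a reportable summary; the NFR threshold is a
# hypothetical example value.
import statistics

def summarise(timings, nfr_p95_s=0.5):
    """Summarise response times and check them against a 95th-percentile NFR."""
    p95 = statistics.quantiles(timings, n=100)[94]  # 95th percentile
    return {
        "requests": len(timings),
        "mean_s": round(statistics.mean(timings), 3),
        "p95_s": round(p95, 3),
        "max_s": round(max(timings), 3),
        "nfr_pass": p95 <= nfr_p95_s,  # the pass/fail signal to track per build
    }

# Example: feed in the timings collected by a load run.
print(summarise([0.12, 0.18, 0.22, 0.35, 0.41, 0.09, 0.27, 0.31]))
```

Capturing the same summary for every build is what turns isolated numbers into the trends mentioned above.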

What To Test

  • Scalability - is our system scalable during a Spike test, for example?
  • Databases - are our database queries quick enough? Can they handle the load from the API? (See the sketch after this list.)
  • Concurrency - if we have high back-end activity, is our UI still performant?
  • Prioritised Areas - focus on the areas most important to the business.
  • APIs
  • SLAs - are we meeting the requirements that we’ve sold to the customer and are contracted to?
  • Monitoring - is our APM solution reporting what we need? How easy is it to identify issues in Production?
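
For the database point above, one simple starting point is timing a representative query repeatedly and summarising the results. This sketch uses an in-memory SQLite database with a made-up schema purely as a stand-in for whatever your system actually queries:

```python
# Time a (hypothetical) query repeatedly; swap the SQLite stand-in for a
# connection to your real database and a query that matters to your users.
import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("open",), ("closed",)] * 5000)

timings = []
for _ in range(200):  # repeat to smooth out noise
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM orders WHERE status = ?", ("open",)).fetchone()
    timings.append(time.perf_counter() - start)

print(f"mean {statistics.mean(timings) * 1000:.2f} ms, max {max(timings) * 1000:.2f} ms")
```

The same repeat-and-summarise approach works just as well for an API endpoint or a prioritised user journey.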

Tools

Other Takeaways

  • Most people appear to be in the same boat when it comes to performance testing. Some companies have dedicated performance testing teams, but smaller companies often have to fit performance testing into their scrum teams, which frequently leads to an ad-hoc and inconsistent approach.
  • Teams are trying to balance quicker delivery with having sufficient performance testing coverage.
  • A lot of people I spoke to bemoaned the lack of sufficient application monitoring in their production software.

It was great to just chat to people about performance testing. It seems that performance testing in smaller companies is the responsibility of people who may not necessarily be experts in the area.

If it interests you, it’s definitely an area worth exploring further: having performance testing in place can be extremely valuable to your teams, even if you don’t have a dedicated performance engineer.

Thank you for sharing this!! Fantastic board of colorful post-its!

I’m recalling a lesson on “safety language” from James Bach: you might list me as an expert, but I’m not “legally” an expert. Like a lower-case “e” expert, akin to “that person has some expertise” on the subject. 🙂

-mt