Live Blog Testbash Germany: Performance Testing of Web Applications - So Much More than Running a Script, by Igor Samokysh

Ok, last talk before lunch. Igor is going to talk about performance testing of web applications. I am excited and interested to learn more about this!

We’re starting with a picture of a kitchen. This is odd. But I’ll go with it. When he goes shopping, he decides which food goes on which shelf. The kitchen is going to be an analogy. I like analogies. This is good.

Before we start, Igor is talking about memory palaces. It’s a technique for memorising things using a familiar place (like a kitchen). That’s going to be relevant.

As an introduction, he’s reminding us of web-based applications. We have our browser (client) and our server. The client sends requests and the server sends responses. One way of performance testing is to check response times.
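(A quick aside of my own, not from the talk: the simplest version of “check response times” is just timing a request yourself. A minimal Python sketch – the URL is a placeholder, and real tools like JMeter do far more than this:)

```python
import time

import requests  # third-party HTTP client (pip install requests)

URL = "https://example.com/"  # placeholder - point this at your own application

start = time.perf_counter()
response = requests.get(URL, timeout=10)
elapsed = time.perf_counter() - start

print(f"Status {response.status_code}, response time {elapsed:.3f}s")
```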

When he was first asked to look at performance testing, he found JMeter and started writing some scripts. He had a feeling that this wasn’t quite right, though (this really is a recurring theme: jumping in with a tool isn’t a great idea). He didn’t have any methodology. And there is one – it has seven steps.

The first step is to identify the test environment. In the kitchen analogy, this is the kitchen itself. The environments they had in his project were “test” and “production”. It’s incredibly important to understand similarities and differences between these environments, in three categories: hardware, software and test data. A simple table can help you identify which parameters you have and how they differ between environments. If you are unaware of these differences, your tests won’t help you as much as they could.

The second activity is to identify performance acceptance criteria. Which KPIs or scenarios are important? Igor tells us to associate this activity with the dishwasher. If you’re selecting a new dishwasher, you’ll probably look at how quickly it washes (response time), how many cups fit in (throughput) and how much water it uses (resource utilisation). For your project: what response time is OK? Which throughput is acceptable? How many users? We can get this information from company/industry standards, from government requirements, by collaborating with team members (the most commonly used source) and also from common sense.
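(My own aside, not Igor’s: once you’ve agreed the criteria, it helps to write them down somewhere you can check against automatically. A hedged Python sketch – every name and number below is an invented placeholder:)

```python
# All names and numbers here are made-up placeholders - agree real values
# with your team, stakeholders, or industry/government requirements.
CRITERIA = {
    "response_time_s": 2.0,   # how fast the dishwasher washes
    "throughput_rps": 50,     # how many cups fit in
    "cpu_utilisation": 80,    # how much water it uses (resource utilisation, in %)
}

def passes(measured: dict) -> bool:
    """Return True if a test run's measured values meet the agreed criteria."""
    return (
        measured["response_time_s"] <= CRITERIA["response_time_s"]
        and measured["throughput_rps"] >= CRITERIA["throughput_rps"]
        and measured["cpu_utilisation"] <= CRITERIA["cpu_utilisation"]
    )

# Example run (invented numbers): fast enough, enough throughput, CPU within budget
print(passes({"response_time_s": 1.4, "throughput_rps": 63, "cpu_utilisation": 72}))  # True
```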

The third activity is to plan and design the tests. In the kitchen, this is the fridge. Before we start to cook, we check the fridge to see what we can cook from it. Usage scenarios might be potatoes (business critical, for making fries), sugar (very commonly used), plus whatever is required by law and whatever is highly visible (sorry, missed the foodstuffs for those… Also, I’m getting hungry!). Using these four criteria guides which tests we’re going to design. For commonly used scenarios, tools like Google Analytics can help you see what actually gets used. To achieve realistic results, you might need to emulate random delays for user actions, because not all users pause for the same time between clicks. In JMeter, you can add a timer into the script.
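(In JMeter that timer is an element in the test plan; as a tool-agnostic illustration – my sketch, with invented URLs and delay values – the idea is simply a randomised pause, or “think time”, between simulated user actions:)

```python
import random
import time

import requests  # third-party HTTP client

# Invented user journey - replace with pages from your own application
USER_JOURNEY = [
    "https://example.com/",
    "https://example.com/search",
    "https://example.com/checkout",
]

for url in USER_JOURNEY:
    response = requests.get(url, timeout=10)
    print(url, response.status_code)
    # Real users don't click at a fixed pace, so the "think time" between
    # actions is randomised (here: 1-5 seconds, picked arbitrarily).
    time.sleep(random.uniform(1.0, 5.0))
```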

The fourth step is to configure the test environment. This is the coffee machine (which also needs a lot of configuration! So true). This activity can be split into two things:

  • Check the environment from where you generate the load (it needs to be powerful enough to generate the data and load you need).
  • Check the environment you will actually be testing (use load balancers, know what the differences to production are, and make sure you can reach someone from support quickly in case you break your test environment by overloading it).

The fifth activity is to implement the test design. This is what most people start with – and they will have missed the steps before! The analogy for this is actually cooking. If you’re implementing a dish, you’re cooking it!

The sixth activity is to execute the test. This is the kitchen table. When dinner is ready, you notify your family, smoke test the dish, begin the food execution, review the taste with your family and then archive the food (leftovers). (This is how I’m cooking from now on!!! What dinner table metrics and KPIs do we need?! :wink: ).

In performance testing terms, we notify the team, do a smoke test of a small part to see whether it’s going to work, begin the execution, review the results and then archive.
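(A rough sketch of the “smoke test first” part – mine, not Igor’s, and all the names and numbers are placeholders: run the scenario once with a single virtual user, and only ramp up if that works:)

```python
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client

URL = "https://example.com/"  # placeholder - your system under test

def one_user_action() -> int:
    """One simulated user action; returns the HTTP status code."""
    return requests.get(URL, timeout=10).status_code

# Smoke test: a single user, a single iteration - does the script work at all?
if one_user_action() != 200:
    raise SystemExit("Smoke test failed - fix the script before loading the system")

# Main execution: a small, purely illustrative burst of concurrent users.
with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(lambda _: one_user_action(), range(200)))

print(f"{statuses.count(200)} of {len(statuses)} requests succeeded")
```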

The final step is to analyse, report and retest. This is where we look at sorting our rubbish: we analyse whether something is plastic or paper and dispose of it accordingly. When we analyse, we can read raw result files or generate graphs. Graphs make the information much easier to represent and understand, but it can take a while to see what they’re saying – spend time understanding them! He said that, and then my math-averse brain zoned out while he was talking about percentiles. Whoops. Moving on!

For your reports, make sure you know your audience! Tools like Grafana and BlazeMeter can be nice for visualisation – but make sure you know who will be reading the reports. Finally, retest! Performance testing needs its own regression cycles too. When we change things related to performance (or, in my experience, even things we think aren’t related to it!), we risk making things worse.
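(Before I move on for real – for anyone else whose brain glazed over at “percentiles”: the 95th percentile response time is, roughly, the value that 95% of requests were faster than, so a few very slow requests can’t hide behind a nice-looking average. A tiny sketch of my own with invented numbers:)

```python
import statistics

# Invented response times in seconds: nine quick requests and one slow outlier
response_times = [0.21, 0.25, 0.23, 0.27, 0.24, 0.22, 0.26, 0.25, 0.30, 4.80]

mean = statistics.mean(response_times)
p95 = statistics.quantiles(response_times, n=100)[94]  # the 95th of the 99 cut points

print(f"mean: {mean:.2f}s, 95th percentile: {p95:.2f}s")
# The mean still looks almost reasonable; the 95th percentile makes the slow tail visible.
```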

This was a fantastic look at performance testing, and I actually think I’ve remembered some things. Great, I have a mind palace for performance testing! Thanks Igor :wink:


Seems I’m missing some great stuff today. Hope to make a Testbash some year!

Now I have to decide if I’m going to…
a. Use Kitchen Performance Indicators (KPI) at work or
b. Use Dinner Table Metrics at home.

Either way, I think the analogy is brilliant, and worth revisiting.

In the future, when I get done with test design, I will probably say that my brain is fried.
Can a well-performed performance test be referred to as “well done”? (This might be important to the Steak holders.)
