Liveblogging TB Brighton #6: Continuous Performance Testing by Eric Proegler

Continuous Performance Testing

By Eric Proegler @ericproegler

“Hi! I’m Eric and I’m a dinosaur”

Eric is talking today about Continuous Performance Testing. Some of his dinosaur friends are still focused on the traditional way of performance testing. (The old way: in one set of experiments, determine whether the completed, deployed system will support the expected load, based on lots of guesses and assumptions.)

Eric advocates for a different approach.

Purpose: rapid feedback about risk from automatically executed tests, with tools to provide information

There are a number of Performance Risks to be addressed:

Scalability

Capacity

Concurrency

Reliability

“If you are having any conversations about testing and you are talking about tooling, you are probably not talking about testing.”

“Performance Testing Tools are essentially crappy automation tools, but multi-threaded.”

Eric talks about his oracles:

To evaluate an automated test you need:

A reliable measurement (or oracle) for determining whether the test passes, fails, or needs further investigation

An expected result – what was supposed to happen in this context

A way to validate this measurement when investigating problems

A reproducible, consistent set of conditions (state) to take the measurement under
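
To make this concrete, here is a minimal sketch (not from Eric’s slides) of an automated check with an explicit oracle: an expected result for the context, a hard failure threshold, and a middle band flagged for investigation. The URL, thresholds, and function names are all assumptions:

```python
# Minimal sketch of an automated performance check with an explicit oracle.
# The URL and thresholds are hypothetical, not from the talk.
import time
import urllib.request
import warnings

EXPECTED_MS = 300  # expected result for this context (assumed)
FAIL_MS = 500      # hard limit; results in between need investigation

def measure_response_ms(url: str) -> float:
    """One reliable measurement: wall-clock time for a complete GET."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # drain the body so the whole response is timed
    return (time.perf_counter() - start) * 1000

def test_homepage_response_time():
    elapsed = measure_response_ms("http://localhost:8080/")
    assert elapsed <= FAIL_MS, f"fail: {elapsed:.0f} ms over {FAIL_MS} ms limit"
    if elapsed > EXPECTED_MS:
        warnings.warn(f"investigate: {elapsed:.0f} ms exceeds expected {EXPECTED_MS} ms")
```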

Eric gives some thoughts on how to cope with and embrace unreality:

  • Horizontal scalability makes assumptions – let’s use them
  • Who says test systems have to be distributed the same way?
  • Isolation is more important than ‘real’
  • Calibrate

Embracing Unreality:

Now that we are not pretending to be ‘real’:

Scalability: add timers to functional tests, record response times and trend them
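
As a rough illustration of this point (my sketch, with an assumed CSV path), wrapping an existing functional test in a timer and appending each run’s measurement to a log gives you a series you can trend across builds:

```python
# Sketch: time an existing functional test and append the result for trending.
import csv
import time
from datetime import datetime, timezone

def record_timing(test_name: str, run_test) -> None:
    """Run a functional test, time it, and log the timing for trend analysis."""
    start = time.perf_counter()
    run_test()  # the existing functional test body, passed as a callable
    elapsed_ms = (time.perf_counter() - start) * 1000
    with open("timings.csv", "a", newline="") as f:  # hypothetical log file
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), test_name, f"{elapsed_ms:.1f}"]
        )
```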

Capacity: stair-step the load until the curve is found
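
Stair stepping could look something like the sketch below: hold each concurrency level for a fixed window, measure throughput, and climb until the curve flattens. The target URL, step sizes, and 5% flatness cutoff are assumptions:

```python
# Sketch of a stair-step capacity probe: raise concurrency in steps until
# throughput stops climbing. All numbers here are illustrative guesses.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # hypothetical system under test

def throughput_at(threads: int, seconds: int = 30) -> float:
    """Requests completed per second at a fixed concurrency level."""
    deadline = time.monotonic() + seconds
    def worker() -> int:
        count = 0
        while time.monotonic() < deadline:
            urllib.request.urlopen(URL).read()
            count += 1
        return count
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(worker) for _ in range(threads)]
    return sum(f.result() for f in futures) / seconds

previous = 0.0
for level in (5, 10, 20, 40, 80):
    current = throughput_at(level)
    print(f"{level} threads -> {current:.1f} req/s")
    if current < previous * 1.05:  # curve has flattened: capacity found
        break
    previous = current
```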

Concurrency: no think time, burst loads of small numbers of threads
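
A burst test in this style might release a handful of threads at the same instant with zero think time, looking for concurrency defects (deadlocks, races) rather than measuring speed. This sketch uses a barrier to synchronize the burst; the thread count, request count, and target are assumptions:

```python
# Sketch: a synchronized burst of a few threads with no think time,
# intended to surface concurrency defects rather than measure capacity.
import threading
import urllib.request

THREADS = 8                     # small, deliberate burst (assumed)
URL = "http://localhost:8080/"  # hypothetical target
barrier = threading.Barrier(THREADS)
errors = []

def burst_worker():
    barrier.wait()  # release all threads at the same instant
    try:
        for _ in range(50):  # back-to-back requests, zero think time
            urllib.request.urlopen(URL).read()
    except Exception as exc:
        errors.append(exc)

workers = [threading.Thread(target=burst_worker) for _ in range(THREADS)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(f"{len(errors)} of {THREADS} threads hit an error")
```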

Reliability: soak tests – run tests for days or weeks
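
A soak test in this spirit can be as simple as a steady trickle of requests over days, with timings logged so gradual degradation (leaks, creeping latency) shows up as a trend. The duration, interval, and file name below are assumptions:

```python
# Sketch of a soak test: steady low load for a long period, logging
# response times so slow degradation becomes visible in the trend.
import time
import urllib.request

URL = "http://localhost:8080/"  # hypothetical target
DURATION_S = 3 * 24 * 60 * 60   # run for three days (assumed)
INTERVAL_S = 5                  # one request every few seconds (assumed)

end = time.monotonic() + DURATION_S
with open("soak_log.csv", "a") as log:  # hypothetical log file
    while time.monotonic() < end:
        start = time.perf_counter()
        try:
            urllib.request.urlopen(URL).read()
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.write(f"{time.time():.0f},{elapsed_ms:.1f},ok\n")
        except Exception:
            log.write(f"{time.time():.0f},,error\n")
        log.flush()  # keep the log usable even if the run dies mid-soak
        time.sleep(INTERVAL_S)
```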

Test Design: Eric advocates simplifying your tests.

Read his slides if you are interested in more details!