What’s your biggest blocker to doing performance testing? Are there ways to address the blockers? How do you deal with those blockers?
I’m in the airline domain, where several downstream systems communicate with third parties (government, fraud checks, and so on). Because of that complexity, only limited bandwidth is allocated to the staging & performance environments for testing against downstream servers. As a result, we can’t run performance tests whenever we want.
For context, we do have a dedicated performance environment covering everything from the middleware down to the layer just before the downstream servers; the missing downstream coverage is what we consider our biggest blocker (#1).
As an interim workaround, we run performance tests during off-peak hours (#2).
We also make sure there is no traffic in the staging environment (which shares the downstream systems) during the off-peak window when the performance test runs (#3).
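The “make sure staging is quiet” step (#3) can be automated as a pre-flight gate before kicking off a run. A minimal sketch, assuming you can pull recent requests-per-minute samples from whatever monitoring you have; the threshold here is illustrative, not a recommendation:

```python
def staging_is_quiet(rpm_samples, threshold_rpm=5):
    """Decide whether staging background traffic is low enough to start a PT run.

    rpm_samples: recent requests-per-minute readings for the shared
                 staging/downstream path (from your monitoring tool).
    threshold_rpm: illustrative cutoff; anything at or above it means
                   some other team is still using the environment.
    """
    if not rpm_samples:
        raise ValueError("need at least one traffic sample")
    return max(rpm_samples) < threshold_rpm


# Example: last five 1-minute readings from staging monitoring.
if staging_is_quiet([0, 1, 0, 2, 1]):
    print("Staging quiet: safe to start the PT run.")
else:
    print("Staging busy: abort or reschedule the PT run.")
```

Wiring this into the test launcher (abort if the gate fails) avoids accidentally mixing PT load with someone else’s staging traffic.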
On cloud-based (and perhaps web-based) systems/infrastructure, one constraint is the lack of sizeable test infrastructure: either you replicate production scale (or 2x/Nx of it) for testing, or you test at a fraction of production scale and extrapolate the results.
Either way, bringing up and managing the whole cloud infrastructure at any significant/useful scale can be hard, slow, and costly.
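For the “test at a fraction and extrapolate” option, the simplest version is a first-order linear scale-up, sketched below. This is a rough estimate only: real systems rarely scale linearly (contention, caches, connection limits), so treat the result as an optimistic ceiling, not a prediction.

```python
def extrapolate_throughput(measured_tps, test_fraction):
    """First-order estimate of production-scale throughput from a
    scaled-down test, assuming linear scaling (often optimistic).

    measured_tps: throughput observed in the reduced-scale environment.
    test_fraction: test environment size as a fraction of production,
                   e.g. 0.25 for a quarter-scale environment.
    """
    if not 0 < test_fraction <= 1:
        raise ValueError("test_fraction must be in (0, 1]")
    return measured_tps / test_fraction


# e.g. 500 TPS measured on a quarter-scale environment
estimate = extrapolate_throughput(500, 0.25)
print(f"Estimated production throughput: {estimate:.0f} TPS (linear assumption)")
```

Testing at two or three different fractions and checking whether throughput actually scales linearly between them is a cheap way to sanity-check the assumption before trusting the extrapolation.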
And RE: the test data comment, that too. But more important than having test data is having useful, realistic-enough test data. You can always feed junk or replayed data into the system, but will it be processed realistically, as expected? That’s the hard part: the data has to feel real, and it is often time-based.
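One common trick for the time-based part is to generate synthetic events with realistic inter-arrival times rather than replaying them as a flat burst. A minimal sketch; the field names and the traffic shape are illustrative assumptions, not a real airline schema:

```python
import random
from datetime import datetime, timedelta


def generate_booking_events(n, start, span_hours=24, seed=42):
    """Generate n synthetic, time-stamped booking events spread over a
    time window, so replayed load has realistic gaps instead of arriving
    all at once. Fields and distribution are illustrative only.
    """
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    events = []
    for i in range(n):
        # Triangular distribution biases timestamps toward the middle of
        # the window, loosely mimicking a daytime traffic peak.
        offset_hours = rng.triangular(0, span_hours, span_hours / 2)
        events.append({
            "booking_id": f"BK{i:06d}",
            "timestamp": start + timedelta(hours=offset_hours),
            "passengers": rng.randint(1, 4),
        })
    events.sort(key=lambda e: e["timestamp"])
    return events


events = generate_booking_events(100, datetime(2024, 1, 1))
```

A replay harness can then sleep between events according to the timestamp gaps (or a compressed version of them), which exercises time-sensitive processing far more realistically than a constant-rate feed.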