Hi, this is my first post, and I have a question about automated regression testing.
We’re embarking on a test automation project to build up a suite of automated regression tests, hopefully replacing around 80% of our manual regression testing, and we have two differing opinions about one aspect of the approach.
Approach (A) is to insert randomised test data into all appropriate fields (values within, at, and beyond each field’s character limit, including special characters). I would describe this as a slightly aggressive form of testing - it is purposefully designed to find bugs that have not yet been discovered.
Approach (B) is to use non-randomised (perhaps more lifelike customer) data, keeping the inputs identical between executions so that the suite picks up regressions where the code has changed but the inputs haven’t. This could perhaps be described as a little more ‘passive’.
There are obvious pros and cons to both approaches. Option (A) may find new defects that option (B) would miss, but option (B) is arguably going to highlight regression issues more clearly, since any change in behaviour can only have come from a code change rather than from different input data.
Does anyone have any thoughts on this, or experience with either (or both) of these options?
Any thoughts on which might be the best approach (or indeed on how the two might be combined) would be welcomed.
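One way the two approaches are sometimes combined is seeded pseudo-random data: generate the aggressive boundary/special-character values of approach (A), but from a fixed seed pinned in version control, so every execution sees identical inputs as in approach (B). Here's a minimal Python sketch of the idea; the function name, field limit, and special-character sample are all hypothetical, not anything from a real framework:

```python
import random
import string

def make_field_values(seed, limit):
    """Deterministically generate test values for a text field with the
    given character limit: below, at, and beyond the limit, plus a few
    special characters. The same seed yields the same values every run."""
    rng = random.Random(seed)  # seeded RNG: reproducible between executions
    alphabet = string.ascii_letters + string.digits

    def rand_text(n):
        return "".join(rng.choice(alphabet) for _ in range(n))

    return {
        "below_limit": rand_text(limit - 1),
        "at_limit": rand_text(limit),
        "beyond_limit": rand_text(limit + 1),  # expect rejection/truncation
        "special_chars": "'\";--<b>é€",        # injection-ish + non-ASCII sample
    }

# Hypothetical usage: pin the seed for regression runs (approach B's
# consistency); change it deliberately when you want fresh exploratory
# inputs (approach A's bug-hunting).
values = make_field_values(seed=42, limit=50)
```

When a seeded run does find a defect, the failing inputs can be promoted into the fixed regression data set, so the suite gradually accumulates the interesting cases.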
For a little context, the SUT is a fairly old (10+ years) and complex Windows desktop application (let’s not talk about the fun to be had with identifying objects!)