Randomised Data vs Lifelike Data

Hi, this is my first post, and I have a question about automated regression testing.

We're embarking on a test automation project to build up a suite of regression tests to replace (hopefully ~80% of) manual regression testing, and we have a couple of differing opinions about one aspect of the approach.

Approach (A) is to insert randomised test data into all appropriate fields (perhaps within, at, and beyond each field's character limit, and including special characters). I would describe this as a slightly aggressive form of testing - it is purposefully designed to find bugs that have not yet been discovered.
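If approach (A) wins out, the core of it might be a small data generator along these lines. This is just a sketch - the character set and the 30-character field limit are made-up examples, not our real fields:

```python
import random
import string

# Hypothetical helper for approach (A): build randomised strings that land
# within, at, and beyond a field's character limit, including special characters.
SPECIAL_CHARS = "!@#$%^&*()<>'\";-"

def random_value(limit: int) -> str:
    # Pick a length just under, exactly at, or just over the field limit.
    length = random.choice([limit - 1, limit, limit + 1])
    alphabet = string.ascii_letters + string.digits + SPECIAL_CHARS
    return "".join(random.choice(alphabet) for _ in range(max(length, 0)))

# Example: a field with a 30-character limit.
print(random_value(30))
```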

Approach (B) is to use non-randomised (perhaps more lifelike customer) data, with the intention of keeping everything consistent between executions, aiming to pick up regression issues where the code has changed (but the inputs haven't). This could perhaps be described as a little more 'passive'.

There are obvious pros and cons to both approaches. Option (A) may find new defects where option (B) might not, but option (B) is arguably going to highlight regression issues more clearly.

Does anyone have any thoughts on this, or experience of considering either / both options?

Any thoughts on what might be the best approach (or indeed how both approaches might be combined) would be welcomed.

For a little context, the SUT is a fairly old (10+ years) and complex Windows desktop application (let's not talk about the fun to be had with identifying objects!).

Cheers,
Kevin


What's the goal? How much value is there in checking input validation and/or the downstream effects of bad input?

Personally, I'd start with happy path tests with known inputs/outputs. Once that's done (or the unit and integration tests already give you a fairly high level of confidence), then I'd start doing some of the randomization you mention, pushing towards a kind of e2e property-based testing.
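Roughly, the idea looks like this. It's only a sketch in Python using Hypothesis (a common property-based testing library); create_customer and lookup_customer are hypothetical wrappers around whatever actually drives your UI, not a real API:

```python
from hypothesis import given, settings, strategies as st

# Property-based sketch: Hypothesis generates many randomised inputs, and we
# assert a property that should hold for all of them - here, that a created
# record can be read back unchanged.
@settings(max_examples=50, deadline=None)  # e2e steps are slow; keep runs small
@given(
    name=st.text(min_size=1, max_size=50),
    phone=st.text(alphabet="0123456789", min_size=7, max_size=11),
)
def test_created_record_can_be_read_back(name, phone):
    record_id = create_customer(name=name, phone=phone)  # drive the UI (hypothetical)
    saved = lookup_customer(record_id)                    # read the record back
    assert saved.name == name and saved.phone == phone    # round-trip property
```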


Hey Ernie,

The very broad goal is to replace as much manual regression testing as we possibly can with automated testing, but beyond that, it's all up for discussion.

It's a good question you ask about the value of checking the effects of bad input - I don't know the answer to that, but it's certainly an important consideration, and if it's deemed there's little risk of randomised inputs surfacing defects, then there's perhaps little point in doing it.

I think your suggestion of starting with happy paths with known inputs/outputs is a strong contender.

Do you have any links to info about e2e property-based testing please?

Cheers,
Kevin

Hi Kevin.
You are talking full system or E2E here, I take it. I imagine having a mountain to climb is daunting, but many paths to the top exist, all valid and all ending in a photo opportunity. As Ernie points out, though, what is the real value of testing everything? Do you want a "bang for your buck" mountain-climbing expedition, with measured progress and multiple basecamps of equal value delivering early wins along the way? In other words, an incremental improvement strategy.

Randomised data has its place, but let's face it, that kind of realistic testing is what our customers do for us; let's do the one thing we can do well: happy-path automation. This is the kind of automation that will most quickly unblock your production pipeline whenever devs or any toolchain changes inject defects. Over my 10 years at this I have become a huge fan of automating one "functional test" case at a time, discretely, mainly because it incurs less maintenance load than aggressive negative tests do, and it gradually builds up component coverage.

I'm not a fan of things like input-set validation, because it typically makes tests sensitive to business logic changes, and it is something that should be unit tested, so it stops being my department. I am more keen to automate checks that help the team (my team and other teams) move faster. And if it takes you a week just to get real customer data loading nicely into a test environment, it's probably only going to find one or two defects that you could have found in half that time in a manual test environment anyway.

Sorry if I have merely repeated Ernie, but my Windows desktop app experience definitely suggests that simpler things - like automating checks that the installer works and nominally integrates with all the 3rd-party apps you support - are about as far as you want to go unless you have a large test automation team. I would even go so far as to ask: are you doing any performance testing too? I'm not saying random is bad, just that random is good for finding a totally different class of defects, which can also be found through manual testing with some tooling support added.

Hi Conrad,

I should probably have been clearer: the randomised input is limited to text fields only - not other UI elements (e.g. radio buttons, dropdowns etc.) that could potentially take each test through a series of different twists and turns.

Yes - they're essentially E2E tests (simulating user journeys, creating and verifying new records, etc.). Unit test coverage is, I believe, patchy, so I can't rely on that aspect - we'd like test coverage for even the basics such as field validation. I don't have the luxury of leaving that to others, I'm afraid!

I do see us going down the path you've set out - happy-path automation, one test at a time - building up coverage over time, sufficient to give us the confidence to reduce iteration lengths by reducing (not removing!) the need for lots of manual regression.

I guess the question is whether or not there is any merit in combining happy-path regression tests with randomised text field input… perhaps trying to get the best of both worlds… hopefully finding defects caused by code changes, AND finding defects that have perhaps always been present but are as yet undiscovered.

It could be argued that if randomised text is used, then when a defect is found it might not be obvious whether a code change caused it or whether it was simply a new input that hadn't been entered previously… but the counter-argument is that it's great that a defect has been found at all, and through a combination of logs and/or recent code changes it shouldn't be too hard to find the cause.
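One thing that might help with that traceability is pinning and logging the random seed so a failing run can be replayed exactly. A sketch, assuming Python's random module; TEST_SEED is a hypothetical environment variable, and the real hook would depend on whichever framework we end up using:

```python
import logging
import os
import random

# Pin and log the seed used for randomised input so any failure can be
# reproduced with the exact same inputs on a later run.
seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
logging.basicConfig(level=logging.INFO)
logging.info("Random seed %d (set TEST_SEED=%d to reproduce this run)", seed, seed)
random.seed(seed)
```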

I guess I'm wondering if there's a reason not to include randomised field input in our tests?

Performance testing is on the list… but we want to keep it simple to start with… although I guess you might be suggesting it can offer real value, and I tend to agree.

Thanks Conrad and Ernie for your thoughts on this 🙂


Kevin, yes. I just threw "performance testing" in there as buzzword bingo. I'm like that - a bonus one: "security testing"?

But I'm keen to work out a way to do what Ernie was hinting at, and not get distracted writing code to inject random field contents and then validate them. Because not only must you handle text fields, but you really want to handle tickboxes and radio buttons in those forms too, since they really are part of the data and state. This gets complicated quite quickly, I guess. There is merit in random input testing, but it's not as valuable as a happy-path, table-driven test suite that just happens to have a CSV file of inputs containing "Robert'); DROP TABLE students;--');"
(Disclaimer: other good attack example files do exist - see Good input datasets.) If you are not familiar with table-driven testing for input validation, you have not used a GUI automation tool yet. The good tools all support CSV or tables of input data, but I've never found off-the-shelf GUI tools to work well in all contexts. One thing table-driven testing frameworks let you do is specify a column in the table that defines the expected outcome:

field1,field2,field3,EXPECTED
Harry,Oppenheimer,0551234567,PASS
Harry,Oppenheimer,055-123-4567,FAIL
Harry,"Robert'); DROP TABLE students;--');",0551234567,FAIL
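In code, a runner for a table like that might look something like this. Just a sketch: submit_form is a stand-in for whatever GUI automation actually fills field1-field3, clicks OK, and reports whether the record was accepted.

```python
import csv

# Minimal table-driven runner for a CSV like the one above.
def run_table(path: str) -> None:
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            expected = row.pop("EXPECTED")      # PASS or FAIL, from the table
            saved = submit_form(**row)          # drive the UI with this row (hypothetical)
            actual = "PASS" if saved else "FAIL"
            assert actual == expected, f"{row}: expected {expected}, got {actual}"

run_table("input_validation.csv")
```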

Everyone has started testing an app only to find that there is just one form somewhere that always fails to save a single field, hidden at the bottom of the form, on the first submit. Nobody ever reports it because it saves the second time you click OK, but it's still a defect! And I guess that's the class of bug you hope to catch this way.
I'd really like other club members to chip in at this point with heuristics for form input validation testing.

Hopefully I'm not taking this too far out of context, but this makes me think that you're treating e2e tests as the hammer and everything else as a nail. Using e2e tests to try and make up for deficiencies lower in the test pyramid is a major smell. Building something brittle on top of an unstable foundation is asking for a maintenance nightmare. If your real risk point/concern area is input validation, write tests at the proper level to focus on that. In other words, you're likely better off focusing on unit and integration tests if your goal is to get more confidence in input validation/handling.
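For illustration, "tests at the proper level" for input validation could be as small as something like this. validate_phone is a made-up stand-in for whatever validation logic the app actually has:

```python
import re
import pytest

# Hypothetical unit-level validation check, pushed below the UI entirely.
def validate_phone(value: str) -> bool:
    return re.fullmatch(r"\d{10}", value) is not None

@pytest.mark.parametrize("value,expected", [
    ("0551234567", True),      # plain 10-digit number accepted
    ("055-123-4567", False),   # formatting characters rejected
    ("", False),               # empty input rejected
])
def test_validate_phone(value, expected):
    assert validate_phone(value) is expected
```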

I use e2e tests mainly as smoke tests - they verify third-party integrations, config, etc., and are super happy path. They're not meant to find bugs or even act as regression tests; they may do that, but that's secondary or tertiary to their being high-level tests that contribute to confidence in the product as a whole. (And that's why known inputs/outputs are fine for this.)
