Regression plans for specific areas of a web application

Hi all,

So, I recently missed a bug that slipped into production - we had made a change to the validation on a web page, and it caused an adverse knock-on effect on the validation behaviour. When I reviewed my test plan and regression pack, I realised that I didn't actually have any test cases in place that could have caught this particular bug, which got me thinking:

How about, in addition to my high-level regression test cases, I also come up with 'mini regression' cases for each area? For example, for validation I would have an exhaustive list of things to check, including:

• Special characters
• Long strings
• Empty fields
• Spaces
• Numeric/alphanumeric
• Numeric values (commas, decimal places, leading zeros, negative numbers)
• Trigger validation by leaving fields blank – when completing fields, ensure that validation messages disappear
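A checklist like the one above maps naturally onto data-driven checks, so each item only has to be written once and can be rerun whenever validation changes. Here is a minimal sketch of that idea in Python; `validate_name()` and its rules are purely hypothetical stand-ins for whatever your page's validation actually does:

```python
def validate_name(value: str) -> bool:
    """Hypothetical validator: non-empty, at most 50 chars, no angle brackets."""
    if not value.strip():
        return False            # empty / spaces-only input
    if len(value) > 50:
        return False            # long strings
    if any(ch in value for ch in "<>"):
        return False            # special characters
    return True

# Each checklist item becomes a reusable (input, expected result) case.
VALIDATION_CASES = [
    ("<script>", False),   # special characters
    ("x" * 500, False),    # long string
    ("", False),           # empty
    ("   ", False),        # spaces only
    ("abc123", True),      # alphanumeric
]

def run_checklist():
    """Return the cases where the validator disagrees with the checklist."""
    return [(value, expected) for value, expected in VALIDATION_CASES
            if validate_name(value) != expected]

if __name__ == "__main__":
    print(run_checklist())  # an empty list means every check passed
```

The same case list can be fed to a parameterised test runner (e.g. pytest's `parametrize`) so the mini regression runs as part of your normal suite.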

Before I go down this path, does this seem like a good idea or is there a better way I could do this?

I am currently trying to think of as many key areas for testing websites as possible. At the moment I've got:

• Validation
• Fields
• Text
• Links
• CSS/styling
• Buttons

I'm sure there will be lots more! The idea would be to maintain this regression pack and only pull in the relevant parts I require for each project, e.g. if we have changed the CSS then I would run that regression suite.
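One way to make "pull in the relevant ones" concrete is to tag each regression case with the areas it covers and select by overlap with the areas a change touched. A small sketch, with entirely illustrative case names and area tags:

```python
# Each regression case is tagged with the areas it exercises.
# The names and tags here are illustrative, not a real pack.
REGRESSION_PACK = [
    {"name": "check special chars rejected", "areas": {"validation"}},
    {"name": "check long string rejected",   "areas": {"validation", "fields"}},
    {"name": "check nav links resolve",      "areas": {"links"}},
    {"name": "check button styling",         "areas": {"css", "buttons"}},
]

def select_cases(changed_areas):
    """Return the names of cases whose tags overlap the changed areas."""
    changed = set(changed_areas)
    return [case["name"] for case in REGRESSION_PACK
            if case["areas"] & changed]

# A CSS-only change pulls in just the styling-related cases:
print(select_cases({"css"}))  # → ['check button styling']
```

Most test runners support this directly (e.g. pytest markers with `-m`, or test suites grouped by tag), so the selection logic usually doesn't need to be hand-rolled.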

Thanks,

sad_muso

I'll translate what you're doing into terms I understand. It sounds like you're building risk catalogues so that you can more effectively tackle change risk. This is a sensible thing to do, and is a part of outside-in risk analysis.

Don't lose sight of the purpose of your particular website. Test against risks that are informed by your site and its needs. Maybe it simply has to be performant, or must be secure, or has to be well formatted, or maybe you require maximum uptime. Beginning with the details and establishing risks is inside-out risk analysis.

Beware of premature formalization. Writing everything down can be a costly burden. Being general allows you to make your coverage more fuzzy, so you don't have well-defined holes in it. A "test case" such as "security" covers a lot more than "when you click the log-on button and the username field has a valid username that's in the database with a password in the password field that has an encrypted version in the database against that username, then the user is given a token…".

Checklists are a great way to fill in the gaps - things that are sufficient risks to warrant repeating actions. You're not aiming for perfect, you're aiming for good enough - and this partly means prioritizing risks and properly using resources. A list of risks is a great way to start - I use the Heuristic Test Strategy Model (HTSM) a lot to help me with this during testing.

Rather than give you a non-exhaustive list which probably won't fit your context anyway, I will send you away.

Go look at the TestInsane mindmaps. (TestInsane link to a potentially useful mindmap)

TestInsane is a place where testers can share mindmaps which they think are useful. I have found it a great place to get ideas which I can wrap around my tests to see where I can improve them. Don't forget to look for other, potentially more useful mindmaps… I just linked the first one I found which might be useful.

(I am not involved with TI, I just find it a great exploratory testing tool)