How do you do regression testing?

There are a few posts on here that discuss defining regression in testing and ask whether you or your team do regression testing. I realise that among them we touch on the methods each of us uses for regression testing, but we don't give practical examples that a person could follow. To people in the know this makes sense, I guess, but to someone coming completely new to testing I think practical examples would be useful to support their learning.

So, my question to you: do you have practical examples of regression testing that you can share?

I've not fully experienced regression testing at my current place (I've only been here a month!), but here's what I did at my previous job for a set of templates used for creating e-learning. The templates were designed so that they could be loaded onto the content server that delivered the e-learning built with them, overriding the version embedded in the learning content (thus allowing fixes and improvements to be pushed out automatically). Because of that, they needed regression testing whenever changes were made.

As I'd worked on these from the beginning and knew them pretty much inside out, I'd refined the test scripts down to a good lean path through. In an ideal world these would have been peer-reviewed, but I was working for a failing company and largely working alone. These scripts were then used to create an automated test suite; I used Selenium IDE for this, as I had no previous automation experience and neither the time nor the support to learn a language to drive WebDriver, so it was the best option for me under the circumstances.
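Selenium IDE is record-and-playback, so there's no code as such, but to make this concrete for a newcomer: an IDE script is essentially a table of (command, target, value) rows. Here's a rough Python sketch (every name and ID below is invented for illustration, not from my actual suite) of what one lean happy-path script amounts to as data:

```python
# Hypothetical sketch: a Selenium IDE script represented as a table of
# (command, target, value) rows, like the IDE's own command table.
HAPPY_PATH = [
    ("open",       "/templates/quiz.html", ""),
    ("click",      "id=start-button",      ""),
    ("type",       "id=answer-input",      "an answer"),
    ("click",      "id=submit-button",     ""),
    ("assertText", "id=feedback",          "Correct"),
]

# The handful of commands this sketch understands.
KNOWN_COMMANDS = {"open", "click", "type", "assertText"}

def validate(script):
    """Cheap sanity check that every step uses a known command."""
    return all(command in KNOWN_COMMANDS for command, _target, _value in script)
```

Thinking of the scripts as data like this is also what makes it easy to keep them lean: you can see at a glance which steps are genuinely on the happy path and which are padding.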

The automation suite took care of the happy-path coverage, which let me focus manual testing on the areas of change. This would involve cross-browser and device testing (using a combination of VMs, actual devices and BrowserStack) as well as pushing the boundaries on the changed templates; again, experience informed what I tried out. I also did two rounds of testing: an initial smoke test on a dev environment, followed by a more thorough pass on a change-controlled test environment. Once through this, the prod deployment would take place first thing in the morning and I'd aim to get a quick smoke test done before 9.30 to confirm all was fine - although the change-controlled test environments were supposed to be the same as production, that was rarely a safe assumption!
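To make that "quick smoke test before 9.30" concrete, here's a minimal Python sketch of how such a checklist might be structured - the check bodies here are pure stubs and everything is invented; in reality each one would hit the production content server:

```python
# Hypothetical sketch: a tiny post-deploy smoke checklist runner.
CHECKS = []

def smoke_check(fn):
    """Register a function as part of the morning smoke run."""
    CHECKS.append(fn)
    return fn

@smoke_check
def template_loads():
    # Stub: would really load a template page and check it renders.
    return True

@smoke_check
def server_override_applied():
    # Stub: would really confirm the server copy overrides the embedded one.
    return True

def run_smoke():
    """Return the names of any failed checks; an empty list means all fine."""
    return [fn.__name__ for fn in CHECKS if not fn()]
```

The point of keeping it this small is speed: a smoke test only has to tell you whether it's safe to walk away from the deployment, not exercise everything.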


Good question, I look forward to seeing other responses.

In my last role, we had a process whereby we'd identify the impact of fixed bugs from detailed information the developer provided when fixing the issue. We then had a spreadsheet which documented any retesting (re-running the failed test) and regression (running tests around the impacted areas - this would list test cases or exploratory testing).
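For someone new, here's a rough Python sketch of the logic that spreadsheet captured (all bug and test IDs below are invented): given the bugs fixed in a build, it derives what to retest and which regression tests around the impacted areas to run.

```python
# Hypothetical sketch of the spreadsheet as data: each fixed bug records
# the failed test to re-run and the areas the fix touched.
FIXED_BUGS = {
    "BUG-101": {"retest": "TC-045", "impacted": ["checkout"]},
    "BUG-102": {"retest": "TC-112", "impacted": ["login", "session"]},
}

# Regression tests grouped by area of the product.
AREA_TESTS = {
    "checkout": ["TC-040", "TC-041"],
    "login":    ["TC-110", "TC-111"],
    "session":  ["TC-120"],
}

def plan_for_build(fixed_in_build):
    """Derive the retest list and the regression list for one build."""
    retests = [FIXED_BUGS[bug]["retest"] for bug in fixed_in_build]
    regression = sorted({test
                         for bug in fixed_in_build
                         for area in FIXED_BUGS[bug]["impacted"]
                         for test in AREA_TESTS[area]})
    return retests, regression
```

So `plan_for_build(["BUG-102"])` would give you one retest plus the regression tests for the login and session areas - exactly the scoping the spreadsheet gave us, just without the manual bookkeeping.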

So, in that sense, it was driven by the defects fixed in a second, third or fourth build (i.e. anything after a "full" round of testing). The same applied to any new functionality delivered.

In my current role we don't have the ability to do this, yet. Hence, regression tends to be "let's run all the automated tests in this area", which isn't something that sits comfortably with me. It's the sledgehammer-to-crack-a-walnut approach.

This is in the context of Regression Testing being “let’s test the impact of a change to the software”.