To me, regression testing is something we perform to make sure that nothing has changed. It was originally designed for brittle codebases that were constantly being changed to fix bugs but had no additional functionality added. I’d like to think that on modern software projects we strive for codebases that aren’t brittle and don’t need loads of bug fixes to keep them running smoothly.
Following from my definition above, if you use test automation as regression testing, that’s probably an indication that your automated checks live in a separate codebase, isolated from the application they are testing. There is a good chance that these checks are used as a gatekeeper to highlight changes and stop code from moving into an environment until somebody has confirmed that those changes are desired. This is old-fashioned thinking.
In modern software development, as the application we are testing is modified, our checks need to change to work with the new functionality. For this to happen, the checks should live in the same codebase and run as part of the default build. That way the checks fail on the developer’s machine as soon as they make a change and build the project locally, which means developers need to fix or modify the checks before pushing code to master and triggering a CI run.
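To make this concrete, here is a minimal sketch of the idea, assuming a Python project checked with pytest (the function and names are hypothetical, not from the original post). The check sits right next to the code it describes, so any behaviour change breaks it on the next local build, before the code ever reaches master:

```python
# calculator.py — application code and its checks live in the same codebase.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)


# test_calculator.py — run on every local build, so a behaviour change
# fails here first, on the developer's machine, not in CI.
def test_apply_discount_takes_percentage_off_price():
    assert apply_discount(100.0, 20.0) == 80.0
```

If a developer later changes how discounts work, this check fails locally and must be updated in the same commit, which is exactly the workflow described above.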
This no longer meets my definition of regression testing, since we are no longer running the checks to make sure nothing has changed. Instead, the checks document how the codebase works. As we make changes to the codebase, we change the checks (or, in other words, the documentation).
Our checks are no longer used as a gatekeeper that constantly says “no, you’re not on the list”. They are instead living documentation that describes how the system works and is constantly updated as the system evolves.
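What might living documentation look like in practice? One common approach is to name each check after a fact about the system, so the test suite reads as a specification. A hedged sketch, again in Python with an invented `ShoppingCart` example:

```python
# ShoppingCart is a hypothetical example class, used only to illustrate
# checks whose names state facts about the system's behaviour.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, item: str) -> None:
        self._items.append(item)

    @property
    def item_count(self) -> int:
        return len(self._items)


# Each test name is a sentence about the system; together they form
# documentation that is updated whenever the behaviour changes.
def test_a_new_cart_starts_empty():
    assert ShoppingCart().item_count == 0


def test_adding_an_item_increases_the_count():
    cart = ShoppingCart()
    cart.add("book")
    assert cart.item_count == 1
```

A reader can learn how the cart behaves from the test names alone, which is the sense in which the checks describe the system rather than gatekeep it.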
Test automation == Living documentation!