Is Test Automation = Regression Testing?

(Jeremias) #1

When I think about test automation, there are only two relevant applications:

  • Performance Testing: Checking how the system behaves under load. It makes sense to automate this so the load can be generated and the runs are repeatable.
  • Regression Testing: Because new features can break existing functionality, regression testing rechecks that existing functionality still works as it did at the time of the original manual testing.

Do you know of any other uses of test automation that do not fit either definition?

Looking forward to your response!

(Jesper) #2

A lot of testing these days happens under the “shift left” paradigm, where the tests are code, built up front by test engineers and others. ATDD is especially good for APIs and service layers, and BDD with SpecFlow turns the stories into test automation code (NUnit, xUnit, etc.).
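To make the ATDD/BDD idea concrete, here is a minimal sketch of a Given/When/Then-style check. It is plain Python rather than SpecFlow (which binds Gherkin steps to C# methods), and the `Cart` class is a hypothetical system under test, but the shape of the specification-as-test is the same:

```python
# Minimal Given/When/Then sketch of a BDD-style check, in plain Python.
# "Cart" is a hypothetical system under test, not a real library.

class Cart:
    """Hypothetical shopping-cart API under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_totals_items():
    # Given an empty cart
    cart = Cart()
    # When two items are added
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    # Then the total reflects both items
    assert cart.total() == 12.5

test_cart_totals_items()
```

In a real SpecFlow setup the Given/When/Then comments would be Gherkin steps in a `.feature` file, and the step bindings would be generated into an NUnit or xUnit test class.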

That being said your tool for regression of dynamic system properties sounds interesting. :slight_smile:

(Jeremias) #3

Great answer, I don’t know how I could forget to mention that.

These tests start out as TDD tests. But after the code fulfils the spec (i.e. after they become green), they also become regression tests. This is probably why I forgot to mention them, although I should have.

Thanks for finding it interesting. Would you mind checking it out and giving more detailed feedback?

(Jeremias) #4

In another thread, Michael Bolton gave some interesting ideas on other activities in the broader field of testing that you can automate, which aren’t testing itself:

Programs for generating data; for modeling business rules and creating a comparable-product oracle; for obfuscating real-life data; for converting and/or massaging test data from one format to another; for visualizing coverage; for setup and configuration (and for checking whether the system is in an appropriately set-up state); for sorting and searching logs; for obtaining more extensive coverage of specific functions, either by randomizing or iterating through all the possible values for a given setting.

(Lada) #5

I would add that TDD alone, or as part of the already mentioned ATDD/BDD, contributes to application design, and as such is more interesting to programmers than to test engineers.

(John) #6

I’ve used test automation to generate test data. If your app processes orders, you may want to generate a bunch of orders in different statuses that you then do manual or exploratory testing on.
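A small sketch of that kind of data generator, assuming a hypothetical order model with an `id` and a `status` field:

```python
# Generate a batch of orders in every status, ready to be loaded into a
# test environment for manual or exploratory testing.
import itertools

STATUSES = ["created", "paid", "shipped", "delivered", "cancelled"]

def generate_orders(count_per_status=3):
    """Yield one order dict per combination, cycling through every status."""
    order_id = itertools.count(1)
    for status in STATUSES:
        for _ in range(count_per_status):
            yield {"id": next(order_id), "status": status}

orders = list(generate_orders(2))
# 5 statuses x 2 each = 10 orders covering the whole lifecycle
```

In practice the loop body would call the application's own API to create each order, so the data goes through the same code paths as real orders.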

I’ve also used it to see if a memory leak occurs on a particular operation. Actually, after I first suspect a memory leak, I write an automated test that loops the operation and checks whether the leak shows up on that op.
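One way to sketch that loop-and-measure check in Python is with the standard-library `tracemalloc` module. The "leaky" operation here is simulated (it keeps references in a global list), but the measurement pattern is the point:

```python
# Loop a suspect operation and measure net memory growth with tracemalloc.
import tracemalloc

leak = []  # simulated leak: references kept forever

def suspect_operation(leaky=True):
    data = list(range(1000))
    if leaky:
        leak.append(data)  # reference retained -> memory never freed

def memory_growth(op, iterations=50):
    """Return net bytes still allocated after looping op()."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        op()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

grew = memory_growth(lambda: suspect_operation(leaky=True))
stable = memory_growth(lambda: suspect_operation(leaky=False))
# grew should be much larger than stable for a genuinely leaking op
```

A real test would assert that growth stays under some threshold; picking that threshold usually takes a calibration run first, since Python's allocator adds some noise.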

(Juan) #7

I think there are three testing scenarios that suit automation very well: performance and regression, as you point out, and also API testing (most of the time).

(Abhishek) #8

My golden rule is to automate everything that is repeated.

Apart from performance and regression, below are a few things we have achieved:

We built a new in-house web application replacing an old 3rd-party app… the testing requirement was to verify that all live customers (almost thousands of them) are eligible in the new system before roll-out. That’s been automated.

Another example would be health checks of multiple environments. Different teams perform different actions in different environments (development in the dev env, integration testing in a CIT env, performance in an NFT env, and many more)… we automated this with CI and a visual dashboard for the entire team to verify the status of the environments.

Configuring data using API calls, and resetting test data to a previous state using API calls.

Compatibility testing is practically impossible to do manually across 100 combinations of OS, browser, and screen size. We automated it, integrated with tools like Sauce Labs or Perfecto.
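Enumerating that compatibility matrix is itself a small automation job. A sketch with `itertools.product`, using an illustrative (made-up) matrix; a real suite would fan these configurations out to a grid service such as Sauce Labs rather than loop locally:

```python
# Enumerate every OS/browser/screen combination the suite should cover.
import itertools

# Hypothetical compatibility matrix for illustration only.
OSES = ["Windows 11", "macOS 14", "Ubuntu 22.04"]
BROWSERS = ["Chrome", "Firefox", "Safari", "Edge"]
SCREENS = [(1920, 1080), (1366, 768), (375, 812)]

def combinations():
    """Return every OS/browser/screen triple as a run configuration."""
    return [
        {"os": o, "browser": b, "screen": s}
        for o, b, s in itertools.product(OSES, BROWSERS, SCREENS)
    ]

combos = combinations()
# 3 * 4 * 3 = 36 configurations from just three small lists
```

The count grows multiplicatively, which is exactly why running these by hand becomes impossible; pairwise reduction is a common next step when even the automated matrix gets too large.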

Visual tests have been replaced by Applitools in my team.

Many many more.

(KomalC) #9

I would like to share a blog I read on Reddit, which gives a brief overview of the test automation flow in the connected world.

(Laurent) #10

It really depends on what software you are testing. On my side, regression tests include performance (latency, for our business), integration tests (with other applications), and regular features that should not break.
I would also add an integration test suite that developers should run before releasing to QA :slight_smile:

(Giuseppe) #11

A simple question: is regression testing = testing?
Can you give me/us your definition of regression testing?
What are you evaluating with regression testing?
Is it testing or checking?


(Joe) #12

I agree. We shifted most of the testing left in a recent API development. Between the unit tests and those we created through SpecFlow, the project team had a lot of confidence in the outcome of this suite of tests. I think of them as a regression suite. Having the ability to run them at any time is also a confidence builder!

(Jeremias) #13

Hello Giuseppe,

I am afraid I do not understand what you mean by “Is regression testing = testing”. Can you elaborate on that?

My definition of regression testing is “testing something again that worked the last time”. Since I am talking about test automation, according to Bach/Bolton’s definition I am only talking about “checking”. As far as I understand, it is still called “regression testing”, although I wouldn’t even call it “regression checking” but version control (please see the discussion here: Your feedback needed on a different testing approach).


(Kris) #14

One use is definitely smoke testing: quick pre- and post-release checks of standard functionality to confirm nothing has been broken. These can be run with a build tool every time code is checked in.

(Christina) #15

Once I’ve found that a function works, I’ve used automation to run edge-case data through it to ensure all validation fires successfully. This gets reused for regression in some cases, but it is mainly to speed up tedious checks with numerous data variations or navigation options.
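That data-driven edge-case sweep can be sketched as a table of inputs and expected outcomes. The `validate_age` function here is a hypothetical validator, standing in for whatever field validation the application actually performs:

```python
# Data-driven sweep of edge-case inputs through a hypothetical validator.

def validate_age(value):
    """Return an error message, or None if the input is acceptable."""
    if not isinstance(value, int):
        return "age must be an integer"
    if value < 0 or value > 130:
        return "age out of range"
    return None

# Boundary values on both sides of every limit, plus a type error.
EDGE_CASES = [
    (-1, "age out of range"),
    (0, None),
    (130, None),
    (131, "age out of range"),
    ("42", "age must be an integer"),
]

def run_edge_cases():
    """Return a list of (input, expected, actual) for every mismatch."""
    failures = []
    for value, expected in EDGE_CASES:
        actual = validate_age(value)
        if actual != expected:
            failures.append((value, expected, actual))
    return failures
```

Adding a new variation is then a one-line change to the table, which is what makes this much faster than clicking through the same form dozens of times.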

(Mark) #16

To me regression testing is something that we perform to make sure that nothing has changed. It was originally designed to deal with brittle codebases that were constantly being changed to fix bugs, but had no additional functionality added. I’d like to think that on modern software projects we strive to have codebases that aren’t brittle, and don’t need loads of bug fixes to keep them running smoothly.

As a result of my definition above; if you use test automation as regression testing, that’s probably an indication that your automated checks are in a separate codebase and isolated from the application they are testing. There is a good chance that these checks are used as a gatekeeper to highlight changes and stop code from moving into an environment until somebody has checked that these changes are desired. This is old fashioned thinking.

In modern software development, as the application we are testing is modified, our checks will need to change to work with the new functionality. For this to happen, our checks should live in the same codebase and be run as part of the default build. This means the checks will fail on the developer’s machine as soon as they make a change and build the project locally, so developers will need to fix or modify the checks before pushing code into master and triggering a CI run.

This now no longer meets my definition of regression testing since we are no longer running the checks to make sure nothing has changed. Instead the checks are documenting how the codebase works. As we make changes to the codebase we change the checks (or in other words the documentation).

Our checks are no longer used as a gatekeeper that constantly says “no, you’re not on the list”. They are instead living documentation that describes how the system works and is constantly updated as the system evolves.

Test automation == Living documentation!

(Branka) #17

Mark, I love the notion of test code being the requirements.

What about teams that are not running their tests before they check in due to lengthy test suite execution, which is usually run on a Selenium grid, making it much faster than running locally? In that scenario you can still have your tests living in the same codebase, just running as part of CI/CD after the app is built and deployed to an environment. I would still call those “regression tests”; they are just triggered when new code is checked in.

I think it would be amazing if all the tests could run before the code gets checked in, but in reality that would slow down developers… Perhaps checking in to a branch, building, testing, and merging into the real dev branch only if everything passes would work?