How do you decide which tests are worth running as part of automated regression?

Hello

I’m working on a new project where I think all of the acceptance criteria can be automated early on and supported with exploratory testing, rather than using manual test scripts.
I’m trying to move the writing of automated checks earlier, rather than retrospectively adding them to our regression pack at the end of a feature, but I’m not sure whether all of them should be kept in.

If you’re in a position to automate all or most of the acceptance criteria for a project (and there may be a lot of them), do you leave all of these in to run as part of an existing automated regression pack, or how do you choose which have the most value?

I’m not in a CI/CD world, and unfortunately a lot of these checks will have to run through the UI due to the nature of the application. I’m wary that leaving all of them in may ‘bloat’ the pack and obviously increase the runtime.
Is there a strategy/technique/rule of thumb that you use to define what tests will continue to provide value?
It may be that all of my checks will be valuable, but I’m interested to hear others’ strategies.

Thanks!


I would say it’s highly contextual to the regression testing that you are actually doing: which changes you are investigating, and the impact they have on the system’s components.

Probably the most important feature of your automated check pipeline is the ability to easily select which checks to run, so the tester can best use them in their regression testing activity.
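To make that concrete, here is a minimal, framework-agnostic sketch of what "easily select which checks to run" could look like. The check names and tags are my own illustrative examples, not anything from this thread; in practice you would more likely use your test runner's built-in tagging (e.g. markers or suites).

```python
# Registry of checks, each tagged so a tester can pick a subset per run.
CHECKS = {}

def check(*tags):
    """Decorator: register a check function under the given tags."""
    def register(fn):
        CHECKS[fn.__name__] = (set(tags), fn)
        return fn
    return register

@check("smoke", "ui")
def login_page_loads():
    return True  # placeholder for a real UI check

@check("regression", "api")
def discount_rules_applied():
    return True  # placeholder for a business-rule check

def run_selected(selected_tags):
    """Run only the checks whose tags intersect the selection."""
    results = {}
    for name, (tags, fn) in CHECKS.items():
        if tags & set(selected_tags):
            results[name] = fn()
    return results
```

Calling `run_selected({"smoke"})` executes only the smoke-tagged check, so a targeted regression run doesn't have to pay for the whole pack.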


It totally depends on the context of your project and the maturity of your team.
Do you write unit tests, and do developers perform mutation testing? Do you have a security team? Are there performance tests? Etc.

I wouldn’t focus on acceptance criteria first when automating. I would focus on business rules and what users mostly do.
You could have the best acceptance criteria in the whole wide world, but if that feature is only used by 0.5% of your end users, I would prefer to first automate another flow that end users exercise more often.

The tests on the flow that 80% of your end users take are much more important than those for the 0.5%.
So that’s how I decide which flows to automate first (roughly speaking, without knowing much of your project’s context).

Unit tests > API > UI: if you can, please try this order if you have the luxury of spending some time on it.
If you are writing tests, also think about test data. Try not to create test data through the UI; it will consume too much time.
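As a sketch of creating test data below the UI, here is a minimal example that seeds preconditions directly into a data store. I'm using an in-memory SQLite database as a stand-in for whatever your application actually uses; the table and values are illustrative.

```python
import sqlite3

def seed_accounts(conn, rows):
    """Insert precondition data directly, instead of clicking it in through the UI."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", rows)
    conn.commit()

# Seed two accounts in one call; a UI flow would need a form round-trip per row.
conn = sqlite3.connect(":memory:")
seed_accounts(conn, [("alice", 100.0), ("bob", 0.0)])
```

The same idea applies if your system exposes an API: seed via API calls, then use the UI only for the behaviour actually under test.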

That’s still OK! You can also put your UI tests in a CI/CD pipeline. Just don’t put any flaky tests in there! :wink:


Hi @marissa,

I like Karen N. Johnson’s Regression Heuristic: RCRCRC.

The mnemonic RCRCRC is used to remember
the heuristic to help with regression testing. Each letter
represents a word I use to help me discover testing ideas.
• Recent
• Core
• Risky
• Configuration
• Repaired
• Chronic

Is that on your radar? Perhaps it’ll help in some way. There’s an opportunity to run an RCRCRC exercise and see what you’d like to leave out of your regression approach/tests, and maybe that informs which acceptance criteria you’d like to automate early instead of including in a regression pack. The way I’ve described it sounds a bit back to front, yet I hope it makes sense.
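As a rough illustration of how the exercise could feed a ranking, the RCRCRC factors can be treated as a simple score per candidate check. This is my own sketch, not something from Johnson's article, and the candidate names and flags are invented; a real exercise is a conversation, not a formula.

```python
# The six RCRCRC factors from Johnson's regression heuristic.
RCRCRC = ("recent", "core", "risky", "configuration", "repaired", "chronic")

def rcrcrc_score(flags):
    """Count how many RCRCRC factors apply to a candidate check."""
    return sum(1 for factor in RCRCRC if flags.get(factor))

# Hypothetical candidates: which factors apply came out of a team discussion.
candidates = {
    "checkout_total": {"core": True, "risky": True, "repaired": True},
    "profile_avatar": {"recent": True},
}

# Rank candidates: more applicable factors => stronger regression candidate.
ranked = sorted(candidates, key=lambda name: rcrcrc_score(candidates[name]),
                reverse=True)
```

Here `checkout_total` ranks ahead of `profile_avatar`, which matches the intuition that a core, risky, recently repaired flow earns its place in the pack.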

Best of luck.


@marissa
I think it’s worth reading the thread below, which sheds a lot of light on this topic.


In addition to the Johnson mnemonic, I advocate looking at test selection from a MoSCoW perspective of risk. Then, when build times need to be adjusted or the suite rescoped, the ranking is ready to go.
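One way that "ranking is ready to go" could play out is filling a shrinking runtime budget from the top of the MoSCoW order. This is a hedged sketch of mine, not a method from this thread; the test names, priorities, and runtimes are all illustrative.

```python
# MoSCoW priorities, lower value = keep first.
PRIORITY = {"must": 0, "should": 1, "could": 2, "wont": 3}

def select_for_budget(tests, budget_minutes):
    """Pick highest-priority tests first until the runtime budget is spent."""
    chosen, spent = [], 0.0
    for name, prio, minutes in sorted(tests, key=lambda t: PRIORITY[t[1]]):
        if spent + minutes <= budget_minutes:
            chosen.append(name)
            spent += minutes
    return chosen

# Hypothetical suite: (name, MoSCoW priority, runtime in minutes).
suite = [("export_pdf", "could", 5), ("login", "must", 2), ("search", "should", 3)]
```

With a 6-minute budget this keeps `login` and `search` and drops `export_pdf`; when the budget grows again, the same ranking brings it straight back.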


On Twitter I found a similar question with answers:


We work in two-week sprints. In the first week, we are mainly busy studying the user story, designing manual test cases, and preparing test data in a master reference database. The prepared data helps us execute tests more efficiently during later manual and/or automated regression testing.

For example, there is no big ramp-up time to bring an object into a certain precondition state X before the actual test can be executed. Instead, we load prepared object A (whatever that may be), execute operation “b”, and expect the object to be transferred into new state Y.

The very next morning, we replace the “dirty” database with a clean copy of the master database, so we can start from scratch, executing the same test, manually or automated, on the same object.
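That nightly reset can be as small as a file copy. The sketch below assumes the database lives in a single file (as with SQLite); the paths are illustrative, and a server-based database would use its own backup/restore tooling instead.

```python
import shutil

def reset_test_db(master="master_reference.db", working="working_copy.db"):
    """Overwrite the dirty working database with a clean copy of the master."""
    shutil.copyfile(master, working)
```

Scheduled before the working day starts, this gives every session the same known-good preconditions, whether the tests are run manually or automated.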

In the second week, developers start delivering their implementations of the user stories. It’s time to actually execute our freshly designed test cases, and either return the user story if it is not implemented according to expectation, or adapt our test cases and move the story to done.

At that time, there is no time to also automate heavy UI-based tests, because most of the functionality only really works towards the end of the sprint, and the UI may still change. We may, however, already start automating some tests at the API level.

And how do we decide what to automate? Usually we try to automate as much as possible, but mostly we automate what is boring and expensive to execute manually. Often we also automate something simply because it is easy to automate. And we automate things we know a developer is likely to miss in their unit tests, so it’s better if we catch them on our side. We don’t spend a single minute on things we know are too expensive to automate; those are better left to manual UI testing.
