Test everything during regression testing?

I have often heard that we should try to test everything during regression testing, even if the majority of the tests are manual. Testing everything seems like overkill. So, how do you strike a balance between testing everything and testing only the most important things?

3 Likes

Testing everything is not possible, as there are infinite combinations of tests that can be run. Therefore we test based on risk, whether via automation or manual/exploratory testing, prioritising what we consider to be the high-risk areas and testing there.
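A rough sketch of what that prioritisation can look like in practice, assuming you can put very approximate likelihood, impact and effort numbers on each area (the area names, scores and time budget below are invented for illustration, not taken from any real product):

```python
# Risk-based selection: score each area by likelihood-of-failure x impact,
# then fill the available testing time with the highest-risk areas first.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Area:
    name: str
    likelihood: int      # 1 (unlikely to break) .. 5 (very likely to break)
    impact: int          # 1 (minor annoyance) .. 5 (business critical)
    effort_hours: float  # rough cost to regression-test this area

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

areas = [
    Area("checkout", likelihood=4, impact=5, effort_hours=6),
    Area("login", likelihood=2, impact=5, effort_hours=2),
    Area("report export", likelihood=3, impact=2, effort_hours=4),
    Area("admin settings", likelihood=1, impact=2, effort_hours=3),
]

budget_hours = 10.0
plan = []
for area in sorted(areas, key=lambda a: a.risk, reverse=True):
    if area.effort_hours <= budget_hours:
        plan.append(area)
        budget_hours -= area.effort_hours

for area in plan:
    print(f"{area.name}: risk={area.risk}, effort={area.effort_hours}h")
```

The numbers don't need to be precise; the point is just to make the ranking and the time budget explicit so the cut-off is a deliberate decision rather than "whatever we got to".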

5 Likes

Karen Johnson made the heuristic RCRCRC for regression testing:

  • Recent
  • Core
  • Risky
  • Configuration
  • Repaired
  • Chronic

http://karennicolejohnson.com/wp-content/uploads/2012/11/KNJohnson-2012-heuristics-mnemonics.pdf
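One way to make the mnemonic actionable is to tag your regression tests with the RCRCRC categories that apply to the current release and run the tests that hit the most categories first. A minimal sketch, with invented test names and tags:

```python
# RCRCRC as a filter: tag each regression test with the categories that apply,
# then prioritise the tests that touch the most categories.
# Test names and tags below are invented examples.

RCRCRC = {"recent", "core", "risky", "config", "repaired", "chronic"}

test_catalogue = {
    "test_checkout_flow":   {"core", "recent"},
    "test_payment_retry":   {"risky", "repaired", "chronic"},
    "test_locale_settings": {"config"},
    "test_profile_avatar":  set(),   # matches nothing -> lowest priority
}

def prioritise(catalogue: dict[str, set[str]]) -> list[str]:
    """Order tests by how many RCRCRC categories they touch (most first)."""
    return sorted(catalogue, key=lambda t: len(catalogue[t] & RCRCRC), reverse=True)

for test in prioritise(test_catalogue):
    print(test, sorted(test_catalogue[test] & RCRCRC))
```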

7 Likes

Good mnemonic! I will use it for sure; maybe I will change the ordering, but it covers the stuff to be tested (if you have time for it :slight_smile: )

1 Like

The biggest difference I see between regression testing and testing a new feature is that I have already explored the existing features and have experience with them.

In both cases, the testing always relates to certain changes in the product, so I relate my testing to those changes.
Even during regression I test "only" the important things: those that potentially relate to the changes. That can still be a lot when, for example, a very basic library or framework was changed (so that the application should still look and behave the same, even though it works quite differently on the code side).

"Test everything" is the lazy route that avoids thinking about where the changes could have an impact.
Also, people often think it's easily done when it's not, as your question confirms to me.

I suggest you ask why others see regression testing as necessary: specifically, what they fear might be broken and what might cause it.

TL;DR: at the very basic level, regression testing is no different from other testing; it's always about changes made to the product.

1 Like

By deciding what the important things are and knowing what you have available to you.

The complexities of contextual information make this hard to answer in any one particular way. If I'm designing new software to control trains, my regression testing will differ greatly from someone designing a mobile game. You need to assess the risks in your particular situation. Perhaps you have a legal requirement for something to be tested. Perhaps you rely on a function of the program to make money. Risk assessment is its own book, so I won't go into detail.

You also need to know what resources you have available to do the testing. How much time, money, and how many people you have is a good start, but you might also need to consider available test data, access to test environments, available hardware, testability concerns like automation hooks and program logs, contact with users and clients, tester training, consultation on legal matters, bug report systems, protocols, policies, holiday time, code freezes, reports you must make, and so on.

So once you know what limitations you have, the resources you have available, and what you want to achieve, you can build that into a useful test plan, share it, and act on it.

1 Like

Teams can’t test everything during regression, nor should they.

I start by asking the business/product folks what constraints they have around time for testing, and what their highest risk/value concerns are. Overlaying all that with something like the wonderful Karen Johnson’s heuristic @han_toan_lim mentioned really helps narrow down what we should focus on.

Yes, laziness can be a reason to test everything when testing is automated. But it can also be a symptom of not knowing where the impact is. I don't know who can give us the best idea of where the changes could have an impact, especially when the changes are not simple. Another reason can be that some teams don't have good testing practices at unit or higher levels. They might make changes that break other teams' features and thus force those teams to test everything due to lack of trust. Moreover, a lack of communication between teams can worsen this lack of trust and lead to testing everything. I wonder how a QA can really fix these team and development problems.
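One lightweight starting point for "where could the changes have an impact" is simply mapping the files changed since the last release onto the test areas that own them. A rough sketch, where the release tag, path prefixes and pack names are made-up assumptions and the ownership table would have to be maintained by the teams themselves:

```python
# Rough change-impact sketch: list files changed since the last release tag
# and map them onto regression packs via an ownership table.
# The tag name, path prefixes and pack names are assumptions for illustration.

import subprocess
from collections import defaultdict

AREA_BY_PREFIX = {                       # hypothetical ownership table
    "src/billing/": "billing regression pack",
    "src/auth/": "login & permissions pack",
    "shared/ui/": "cross-team smoke pack",
}

def changed_files(since: str = "v1.2.0") -> list[str]:
    """Return the paths changed between the given tag and HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{since}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def impacted_areas(files: list[str]) -> dict[str, list[str]]:
    impact = defaultdict(list)
    for path in files:
        for prefix, area in AREA_BY_PREFIX.items():
            if path.startswith(prefix):
                impact[area].append(path)
    return impact

if __name__ == "__main__":
    for area, files in impacted_areas(changed_files()).items():
        print(f"{area}: {len(files)} changed file(s)")
```

It won't catch indirect impact (shared data, configuration, runtime behaviour), but it at least turns "we don't know what changed" into a concrete list to argue about.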

2 Likes

I think that QA should involve the product owner/experts in figuring out what is important and at least come up with some core or “must do” tests.

2 Likes

It's hard, maybe even impossible, to fix this alone. Ultimately this is a business risk and a problem for the whole department/company.
You can initiate and maybe also guide the change, but at the very least management has to decide.

You did a great analysis, which is an important step, and I suggest you share it.
If you have issues with management, you could ally with others. Talk 1:1 with team members to see who agrees and would support you.

Indeed; the users, clients, other dev team members, the legal team, personnel managers, support staff and operations could all have ideas that change the concept of what's important. Deciding who the important people are is probably a precursor to figuring out what the important things are, as "important" is an abstract value to a particular person. It's a question that resists a single answer because the value of everything depends on a highly variable, unknown context.

1 Like

It is not possible to test everything. Two approaches I use to address this problem are the Pareto Principle and Risk Analysis.

There is no clever answer for this from me… I personally think during a regression, as much as possible should be tested - how you achieve this is up to you.

Use automation, perform manual tests, mobbing sessions, whatever is at your disposal - and be able to justify what you do not test.

1 Like

The application could simply be old.

I’ve spent almost my entire career testing software with a long history, often software written in languages or frameworks that predated the ability to easily separate business and presentation logic. Classic ASP comes to mind (and is primarily where I test at the moment).

In my experience any software, no matter how cleanly designed it is to start with, eventually turns into a mess of spaghetti code that summons demonic entities from the Outer Dark. Or something. Time constraints and multiple developers, each with their own way of handling problems, all contribute (let's face it, if you give a real programming task to a dozen developers, chances are you'll get a dozen different ways to solve the problem - and all of them will be correct).

That doesn’t leave testers with too many choices when it comes to regression testing.

I prefer to use the 80/20 rule, with two main variants:

  • Test the 20% of the software that gets 80% of the use
  • Test the 20% of the software that gets 80% of the complaints.

The first variant will generally catch problems in the software's essential functionality, while the second variant will tend to catch problems in areas where users are watching for problems.
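If you have usage analytics (or a complaint count per area), picking that 20% can be as mechanical as sorting and cutting off at 80% of the total. A sketch with invented numbers, where the area names and counts are purely illustrative:

```python
# Pareto selection: sort areas by usage (or complaint count) and keep the
# smallest set that accounts for roughly 80% of the total. Numbers are invented.

usage = {
    "invoice entry": 5200,
    "customer search": 3100,
    "reports": 900,
    "admin screens": 400,
    "archive export": 150,
}

def pareto_set(counts: dict[str, int], threshold: float = 0.8) -> list[str]:
    total = sum(counts.values())
    selected, running = [], 0
    for name, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        running += count
        if running / total >= threshold:
            break
    return selected

print(pareto_set(usage))   # -> the areas to regression-test first
```

The same function works for the second variant if you feed it complaint counts instead of usage counts.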

Unless you’ve got a really mature set of automation and a whole lot of developer unit tests (if it’s possible to create them for the language of the application), that’s generally as good as it gets.

As a rule, once that much is stable, it’s possible to start moving to cover everything that can be automated and discover what parts of the software shouldn’t be automated. As long as everyone knows it’s something that will never be finished and will take resources that could be doing something else, you can get to a high level of coverage that way.

4 Likes
