How do you handle regression testing in agile development environments with frequent code changes?
We base everything on what we think the impact of the code change could be and what level of risk it carries. So if it's a high impact and a high risk, then I'll definitely do some regression/exploratory testing in that area. If it's a low risk or impact, then I may leave it to the automated tests.
What Adrian said.
This is exactly where Quality Focused resources come into play: people dedicated to identifying risks and gaps and to measuring the quality of the application/feature/change.
Ideally most, if not all, of the regression is automated. Exploratory tests should migrate into an automated suite very quickly.
In some of the better roles I've had, the process would go something like this:
Developers would be responsible for unit tests for the change they are making. It's required for PR review.
Developers and QA would work together to add integration tests as part of the development process. QA engineers would do things like identify automated tests that should fail as a result of the change, define the scope of the suite of tests to be monitored during development, and so forth.
Because management had no desire to wait for all tests to be automated, often a change would head to production before that was accomplished. So exploratory tests were detailed and stored in a test case management tool.
As the product iterated, QA or dev resources could review both the automated and manual suites that existed, identify cases that didn't yet exist, and build an understanding of the tests relevant to the change. Some of those manual cases would be identified as ones to automate during the work on the change (something a QA engineer could begin immediately while the dev did the change work).
Once the PR was about to be merged into the pipeline for production, there would be an expected pass rate for the automated tests, with the gaps filled by some (hopefully few) manual tests and UAT.
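To make that "expected pass rate" gate concrete, here is a minimal sketch, assuming the automated suite publishes a JUnit-style XML report; the threshold, file name, and report layout are illustrative, not a description of that team's actual pipeline:

```python
# pass_rate_gate.py - hypothetical merge gate: fail the pipeline if the
# automated regression suite's pass rate drops below the agreed threshold.
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 0.95  # assumed "expected pass rate"; agree the real number with the team


def pass_rate(report_path: str) -> float:
    """Return the fraction of passing tests from a JUnit-style XML report."""
    root = ET.parse(report_path).getroot()
    total = failed = 0
    # Handles both a <testsuites> wrapper and a single <testsuite> root;
    # skipped tests are counted as passing for simplicity.
    for suite in root.iter("testsuite"):
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return 1.0 if total == 0 else (total - failed) / total


if __name__ == "__main__":
    rate = pass_rate(sys.argv[1] if len(sys.argv) > 1 else "report.xml")
    print(f"Regression pass rate: {rate:.1%} (threshold {THRESHOLD:.0%})")
    sys.exit(0 if rate >= THRESHOLD else 1)
```

The non-zero exit code is what lets the CI job block the merge; any remaining manual tests and UAT would still be tracked outside a script like this.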
Hi @zolani,
I think Karen N. Johnson's RCRCRC heuristic can apply in this context (and in other contexts too).
- Recent - what testing around new areas of code should I think about?
- Core - what essential functions or features must continue to work?
- Risky - what features or areas of code are inherently more risky?
- Configuration Sensitive - what code is dependent on environment settings?
- Repaired - what code has changed to address defects and potentially created issues?
- Chronic - what features or areas of code break repeatedly and need to be retested?
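As a small illustration (not part of Karen's heuristic itself), if the regression suite lives in something like pytest, the RCRCRC categories could be expressed as markers so a change-scoped run pulls in the risky and repaired areas first; the marker and test names below are made up:

```python
# test_checkout.py - illustrative only: tagging regression tests with RCRCRC markers.
# The markers would need registering in pytest.ini / pyproject.toml, e.g.
#   markers = ["recent", "core", "risky", "config", "repaired", "chronic"]
import pytest


@pytest.mark.core
def test_checkout_completes_for_valid_card():
    ...  # essential flow that must keep working on every release


@pytest.mark.repaired
@pytest.mark.chronic
def test_discount_code_applied_once_only():
    ...  # recently fixed and historically fragile area


# Scope a run to the change at hand, e.g. after touching payment code:
#   pytest -m "risky or repaired or chronic"
# while the commit-triggered pipeline still runs the full "core" set.
```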
I think automation and pipelines are key here.
Regression tests need to be automated to keep up with a rapid pace.
Every commit should trigger the execution of these tests. If a test fails, it's the responsibility of the committer to get it fixed: either on their own, or with the help of a tester.
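As a toy stand-in for that commit-triggered run, assuming a suite of tests marked `regression`, a local pre-push hook could refuse the push when the suite fails (in practice a CI server would do this on every commit); everything here is hypothetical:

```python
#!/usr/bin/env python3
# .git/hooks/pre-push - toy stand-in for a commit-triggered pipeline stage:
# refuse to push if the regression-marked tests fail (a real setup runs this in CI).
import subprocess
import sys

result = subprocess.run(["pytest", "-m", "regression", "-q"])
if result.returncode != 0:
    print("Regression tests failed - fix them (alone or paired with a tester) before pushing.")
    sys.exit(1)
```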
@zolani In an agile environment with frequent code changes, regression testing is managed through automation, integration with CI/CD pipelines, and the use of feature flags for controlled deployments.
Testing prioritizes critical functionality, and collaboration between teams ensures comprehensive coverage and timely feedback, maintaining product quality amid rapid development cycles.
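For the feature-flag part, a minimal sketch of the idea follows; the flag name and functions are hypothetical, and a real setup would usually read flags from a service such as LaunchDarkly or Unleash rather than an in-process dict:

```python
# feature_flags.py - minimal sketch of a flag-guarded rollout; all names are hypothetical.
FLAGS = {"new_checkout_flow": False}  # shipped "dark"; flipped on once regression passes


def is_enabled(flag: str) -> bool:
    # A real deployment would read this from a flag service, not an in-process dict.
    return FLAGS.get(flag, False)


def legacy_checkout(cart: list) -> str:
    return f"legacy checkout of {len(cart)} items"


def new_checkout(cart: list) -> str:
    return f"new checkout of {len(cart)} items"


def checkout(cart: list) -> str:
    # The new path ships to production but only runs when the flag is on, so a
    # missed regression can be mitigated by turning the flag off instead of rolling back.
    return new_checkout(cart) if is_enabled("new_checkout_flow") else legacy_checkout(cart)
```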
Hi, let's, for example, focus on web tests… we identify 3 tests that would be nice to automate for a new feature. Since it is a new feature, you have to write code to automate the tests, which will consume time… Since in an agile work environment we are under pressure to always deliver new software faster, do you think we should automate the tests before the feature goes to PRD or after? Is there anything in my line of thinking you do not agree with?
I think it depends, firstly, on whether testing that new feature is crucial and whether skipping it leaves a high risk in place. As to whether to have the automated test complete before or after the feature reaches production, that might depend on how much writing the test slows down the release process: if it's quicker to manually test it before it hits prod and then write the automation afterwards, that might make sense in some scenarios. In my experience creating the automation is slower than just manually testing the feature, so the automation comes after.
Like others have mentioned here already, regression testing should be automated. As you can't automate everything, it can make sense to include some manual regression testing when testing new features or bigger bug fixes, to ensure the code changes didn't damage existing functionality.
Absolutely, exploratory testing of new features is fast. But there is a risk to "automate later" that I've experienced, where that automation keeps getting pushed off in favor of new testing and becomes what I call "test debt" that is never clawed back, as QA resources are always constrained compared to dev resources, combined with the desire for "more newer stuff right now!" So it's a tightrope to be walked with skill and care.
I have also found a use for a suite of manual tests as part of regression. When we worked in sprints, we would take a day at the end in which everyone on the team was assigned a set of manual tests, and I mean everyone. Not a huge number of tests; more like a small handful of exploratory activities.
This served a few benefits:
- It got some human eyes on the release build.
- It got developers to put down the IDE and come up for air for a moment, interacting with the product as a user.
- This, in turn, reduced the cycle of developers "remembering" business rules and asking me (and other QA or Product) to remind them how a feature they haven't looked at in a while is supposed to work (because documentation, like QA, is always behind).
Now, this works in a sprint-cycle approach. I don't really see how it would work in a kanban CI/CD organization. But I think it's worth noting the benefits and maybe finding a way to achieve some of them in other ways.
This needs to be automated, otherwise it's way too much work to test everything manually. I use Disto, an AI-based no-code tool where you can write your tests in plain language (e.g. "Search for a product, add it to cart, verify the item is in the cart"). I like it because it's really quick to create the tests, and I don't have to change them on every code change because the AI automatically adapts to the new UI.
We ship every two weeks and try to alternate between a minor and a major release. For minor releases we run the automated test suite followed by a smoke test to gain confidence in the changes we've introduced. For our major releases we do a full regression run.
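As a rough sketch of how that minor/major split could be wired up, assuming a pytest-style suite with a `smoke` marker (the names and commands are assumptions, not this team's actual setup):

```python
# run_release_tests.py - hypothetical selector: smoke run for minor releases,
# full regression pack for major releases.
import subprocess
import sys


def run(release_type: str) -> int:
    if release_type == "minor":
        cmd = ["pytest", "-m", "smoke", "--maxfail=5"]  # scoped-down confidence check
    else:
        cmd = ["pytest"]  # major release: run everything
    return subprocess.call(cmd)


if __name__ == "__main__":
    sys.exit(run(sys.argv[1] if len(sys.argv) > 1 else "major"))
```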
For any new feature that is ready to ship we always use a feature flag, so we have a safety net to turn the feature off in case things don't go as planned and a critical bug has been introduced into production that was not detected in our feature or regression testing.
For any feature that is still WIP we use a feature branch, so that it doesn't expose any bugs that might arise due to its unstable state.
We also do a dev/QA rotation on our regression pack, so that we are always testing things with fresh eyes and don't slip into the blindness that can happen when you are testing the same thing over and over again.
I wonder, though, @masha: when "something" else is doing the checks, how often do we all miss something? How does one know that a no-code tool is still testing that non-functional things like exact wording and language are correct when a feature forks a workflow? I'm only asking this because yesterday we had to re-spin a release due to a text title being quite wrong after we added a change to a workflow wizard. AI is notorious for ignoring quality aspects of any input data where it has no training. If a new feature adds an extra screen to the workflow, but that new screen sometimes doesn't show the correct input-field description text (normally to the "left" of the field), how will the AI know that it was not copy-pasted from another screen?
In general, agile is about closing the loop early and often, which to me really means: get early drops in front of stakeholders, automate later. Automation-first preaching, like TDD process preaching, can easily start religious-style wars. Automation is sadly never the main thing, so blocking a release just to automate is a church I might visit sometimes, to learn other things, but will never be a member of. There will always be other ways to speed up releases that involve all players. It's a whole-team effort.
@msh agree, come up for air often.
I've far too often had new features start undergoing automation coverage only to discover, days before release, that the difficulties in automating the feature were pointing to poor feature UX. Automating too early can cost more, because if the automation of a tricky new workflow feature does start passing, then that broken workflow starts to gather inertia. Eyeballs are more valuable.
I think maybe it's another good opportunity to use the "Must Do, Should Do, Could Do, Won't Do" idea: the things we must automate, should automate, etc.