Hi!
Would just like to know how everyone is tracking which functionalities in their systems have changed or might have been impacted by recent dev changes. Are you using any tool or process?
This is so that we can select the most useful test cases when doing regression testing, as well as capture bugs (if there are any) before they get into Prod!
In most of my experience it hasn't been done by any sort of tooling or monitoring process. It has always been part of developer hand-off on a per-story basis, and via PR/branch code review to understand which services/features are impacted and how. This was done in order to “right-size” the testing prior to merge.
@anonymous4 We use a combination of version control systems like Git and test management tools to track changes and select the relevant test cases during regression testing. This helps us ensure thorough validation and catch bugs early, before changes reach production.
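To make that concrete, here's a minimal sketch of the idea in Python, assuming a Git repo. The directory paths and suite names in `TEST_MAP` are entirely made up; in practice that mapping would come from your own project layout or test management tool. It lists the files changed between two refs and maps them to the regression suites worth running:

```python
import subprocess

# Hypothetical mapping from source directories to regression suites.
# In a real project this would come from your repo layout or test tooling.
TEST_MAP = {
    "services/payments/": "regression/payments",
    "services/auth/": "regression/auth",
    "web/checkout/": "regression/checkout",
}

def changed_files(base: str, head: str) -> list[str]:
    """List files changed between two git refs."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def suites_to_run(base: str = "main", head: str = "HEAD") -> set[str]:
    """Map each changed file onto the regression suites that cover it."""
    suites = set()
    for path in changed_files(base, head):
        for prefix, suite in TEST_MAP.items():
            if path.startswith(prefix):
                suites.add(suite)
    return suites

if __name__ == "__main__":
    print("Suggested regression suites:", sorted(suites_to_run()))
```

Even something this crude gives you a defensible starting list for “what should we regression test on this branch?” instead of guessing from memory.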
Wondering if anyone still tends towards “indiscriminate” regression testing (i.e. full RT, without really thinking about the specific impact of a change) vs. only doing regression on the areas expected to be impacted by the change.
(Note this might be more applicable to those who don't have any (or many) automated tests.)
In my previous org, we did both. A story branch had to be tested prior to merge with the next release branch. This would be targeted testing suited to the story, plus relevant regression cases. Then we did a broader regression once we were at sprint “cut-off”: the entire automated set of suites, as well as a team-wide (devs and product as well as QA) manual regression in which everyone had a handful of cases assigned to them, with the expectation that those cases would also be the foundation of more exploratory testing. Defects found during that last pass got triaged and either fixed or deferred to the following sprint.

Not a perfect process by any meaning of the word, but it served a couple of purposes. It gave us the opportunity as a whole team to examine the product. It got people to drop tools and focus on the product as a user for a period of time; developers putting down the IDE, for example. I found that this did catch some last-minute things, but also that I and my QA team were doing less repetitious explaining of how things worked to developers when they picked up a story in an area of the product they hadn't accessed in a while.
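For anyone running a similar split of targeted vs. full regression with pytest, markers are one cheap way to wire it up. This is just a sketch; the marker and test names are invented, and the markers would need registering in your pytest config to avoid warnings:

```python
# test_checkout.py -- illustrative only; marker and test names are made up.
# Register "regression" and "checkout" under [pytest] markers in pytest.ini.
import pytest

@pytest.mark.regression
@pytest.mark.checkout
def test_guest_checkout_total():
    # Trivial stand-in for a real checkout assertion.
    assert 2 * 9.99 == pytest.approx(19.98)
```

Then the story-branch pass is `pytest -m checkout` (just the impacted area) and the sprint cut-off run is `pytest -m regression` (the whole suite).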