Currently, our organization follows the process below.
For regression testing (around 200+ test cases), the team has allotted 5.0 hours of execution time, covering:
A. Executing test cases in TestRail
B. Raising/updating defects
C. Placing reports
D. Verifying new functionality
My team manager has asked me to help reduce the regression testing hours.
What are your suggestions? How can I improve the testing process?
Could you identify your highest risks first, at the product level and then at the release level, and work out a plan from there? You may well find ways to reduce the tests you run while increasing your confidence that your biggest risks are mitigated.
Figure out how much of the “test” time is spent documenting. Try to reduce that time.
Partial automation or scripts can reduce the test time a lot.
Figure out the risk of NOT running each test. If the risk is small, then the test may be omitted.
Figure out which tests are already covered by other test cases (i.e. if you can’t do a checkout without logging in first, then the log-in “happy path” test is redundant).
Figure out which tests are close enough to other tests to combine them.
Figure out which tests are already covered by automation and/or unit testing. These can then be removed. (It is either more than you think or less than you expect)
For regular releases (i.e. weekly, monthly), only test high-risk areas, or those impacted by the changes. With the remaining time, spot check the rest of the functionality.
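The risk-based pruning above can be sketched as a simple scoring pass. This is a minimal illustration, assuming hypothetical test metadata (a risk score and the product areas each case touches — neither exists in TestRail by default, so you would maintain them yourself):

```python
# Hypothetical regression-case metadata: risk 1 (low) to 3 (high),
# plus the product areas each case exercises.
CASES = [
    {"id": "C101", "title": "Checkout happy path", "risk": 3, "areas": {"checkout", "login"}},
    {"id": "C102", "title": "Login happy path",    "risk": 1, "areas": {"login"}},
    {"id": "C103", "title": "Profile edit",        "risk": 2, "areas": {"profile"}},
    {"id": "C104", "title": "Refund flow",         "risk": 3, "areas": {"payments"}},
]

def select_for_release(cases, changed_areas, high_risk=3):
    """Keep a case if it touches a changed area, or if it is high-risk
    regardless of the change set. Everything else is a candidate to skip."""
    return [c for c in cases
            if c["areas"] & changed_areas or c["risk"] >= high_risk]

# A release that only changed checkout: the low-risk login and profile
# cases drop out, the high-risk refund flow stays in.
run = select_for_release(CASES, {"checkout"})
print([c["id"] for c in run])  # → ['C101', 'C104']
```

Even a crude filter like this makes the “risk of NOT running each test” discussion concrete: the skipped list is visible and reviewable, instead of implicit in whatever the team ran out of time for.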
Now to what I would do, which may be totally inappropriate for your situation…
I would test the impact of not doing manual regression testing at all, or maybe just from a very short checklist. I would then use the extra 5 hours to run a few exploratory testing sessions for pre-existing functionality. The scripts and tests which you have already created may help with ideas about what and how to test during your sessions.
Compare the results of the three options. How is the product improved by “regression testing” plus overhead? How is the product improved by doing no testing for pre-existing functionality? How about using testing sessions instead of factory-testing? Which process has more overhead (planning, documentation, etc)? Which finds more issues? Which encourages communication with the team? Which provides more coverage (short term AND long term)? Which is less impacted by the pesticide paradox?
Personally, I really prefer session based testing. For me, it increases time spent testing, reduces time spent documenting and updating tests, and allows me to test differently every time, thus increasing coverage in the long term. The big down-side is that there is a chance that some functionality gets missed. So perhaps a combination of techniques?
I have used a similar approach in the past, when automation was not up to the level we needed for regression checks.
Basically, a team of testers talks to the devs to understand potential impact areas. Run through the happy path across product areas first, followed by timeboxed in-depth exploratory testing sessions in the identified high-risk areas. Also, ask for help from others if you need more hands with testing: get Dev, BA, PO, Support, and Business to help out with the happy-path checks, and have the testers focus on in-depth exploratory testing in the high-risk areas.
Identify the areas touched by the changes in that release, and any other areas that might have been affected. Selective regression is what I would suggest. Also, fly high: high-level tests only. No negative, boundary, etc. testing. Only the happy path, only in the areas identified as explained above, and of those areas, only the very high-risk show-stoppers.

You’ll need management to give you a priority order for the areas too. Since they won’t give you enough time, you need to know the highest-priority areas. Then, from those, test only where changes were made and, if you have time, the rest of the high-priority items. That’s what we do: our lead programmer and the business owners sat down and prioritized each area or module 1–4, with 1 being the highest priority, i.e. show-stoppers, patient-safety issues, or security issues if that area were to break.

In TestRail, what I do for our manual regression team is create a milestone for every release, then create a run for priority 1 test cases, another run for priority 2, etc. Also, I highly agree with the exploratory testing. Think end-to-end testing and run through it; you could record it with a screen-recording tool and do your documentation afterwards, then after the session go mark the test cases in one swoop.
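Creating one run per priority tier can also be scripted against the TestRail API (the `add_run` endpoint and its `include_all`/`case_ids` fields are part of TestRail’s API v2; the instance URL, project id, milestone id, and case ids below are placeholders for illustration):

```python
import base64
import json
from urllib import request

BASE_URL = "https://example.testrail.io"  # placeholder instance URL
PROJECT_ID = 1                            # placeholder project id

def build_priority_run(priority, case_ids, milestone_id=None):
    """Request body for TestRail's add_run endpoint: one run per
    priority tier, containing only that tier's case ids."""
    body = {
        "name": f"Regression - Priority {priority}",
        "include_all": False,  # only the listed cases, not the whole suite
        "case_ids": sorted(case_ids),
    }
    if milestone_id is not None:
        body["milestone_id"] = milestone_id
    return body

def create_run(body, user, api_key):
    """POST /index.php?/api/v2/add_run/{project_id} with basic auth
    (TestRail accepts your email plus an API key). Network call —
    shown for completeness, not executed in this sketch."""
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    req = request.Request(
        f"{BASE_URL}/index.php?/api/v2/add_run/{PROJECT_ID}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Priority 1 cases as agreed with the lead programmer and business owners.
p1 = build_priority_run(1, [104, 101], milestone_id=42)
print(p1["name"], p1["case_ids"])  # → Regression - Priority 1 [101, 104]
```

Splitting the body-building from the network call keeps the priority logic easy to review and reuse across releases; only the milestone id changes each time.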
Introduce a kind of lifecycle management: each regression test or test suite carries a count of how many times it has been run. Once that threshold is reached, the test gets reviewed and taken out if it is no longer important, or if it has run enough times without ever failing.
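A minimal sketch of that lifecycle rule, assuming a simple per-test record of runs and failures (the threshold and test names here are illustrative, not prescriptive):

```python
REVIEW_AFTER = 10  # illustrative threshold: review a test after 10 runs

def record_result(history, test_id, passed):
    """Update a test's run/failure counters after each regression cycle."""
    entry = history.setdefault(test_id, {"runs": 0, "failures": 0})
    entry["runs"] += 1
    if not passed:
        entry["failures"] += 1

def due_for_review(history):
    """Tests that hit the run threshold without ever failing are
    candidates for retirement; tests that have failed stay in the suite."""
    return [tid for tid, e in history.items()
            if e["runs"] >= REVIEW_AFTER and e["failures"] == 0]

# Simulate ten regression cycles: one test always passes, one keeps failing.
history = {}
for _ in range(10):
    record_result(history, "T-login-happy-path", passed=True)
    record_result(history, "T-checkout-refund", passed=False)

print(due_for_review(history))  # → ['T-login-happy-path']
```

The review list is only a trigger for a human decision — a test that never fails might be retired, or it might be guarding exactly the show-stopper risk you can least afford to drop.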