Successful integration of Exploratory Testing within Regression

Any tips or hurdles from those of you that have successfully implemented exploratory testing during your regression testing?

I am creating a regression suite of test cases and am curious whether including some exploratory testing along with the test cases is practical. I don't foresee completely getting away from test cases during regression, but I have found that I uncover the majority of bugs doing exploratory testing during the regression cycle.

Has anyone found a good way to accomplish this?

I am reading your post as a few different questions, so I am uncertain what you are actually asking; I will go ahead and try to answer the ones I see.

If the question is how you handle regression testing in exploratory testing, the main challenge I have faced there is deciding when we have introduced so much residual uncertainty that we need to do a general test of the product. What I mean by residual uncertainty is this: when something is changed, you test the change and all the things that may have been impacted by that change with a high enough probability and a high enough severity (typically called risk), but there are always some parts that go untested. That's the residue. If you do that enough times, it might warrant a general test of existing areas, spending time on them even though none of the changes should have impacted them. At one place we managed that with a method we called Heat Maps. In short, for every change/version of the product we mapped out which areas had been impacted, adding heat; then we looked at which areas we had tested and removed the heat there. Everything that we, due to our test strategy, did not cover remained on the map. After a while the heat in those areas would build up enough to warrant testing them too.
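The Heat Map idea above boils down to a simple tally. Here is a minimal sketch of it; the area names and the threshold are invented for illustration, not taken from any real tool:

```python
# Minimal sketch of the "Heat Map" idea: changes add heat to the
# areas they may have impacted, testing removes it, and anything
# left over long enough is due for a general pass.

heat = {}  # area -> accumulated, untested residual risk


def record_change(impacted_areas):
    """A change/version adds heat to every area it may have impacted."""
    for area in impacted_areas:
        heat[area] = heat.get(area, 0) + 1


def record_testing(tested_areas):
    """Testing an area removes its heat from the map."""
    for area in tested_areas:
        heat.pop(area, None)


def areas_due_for_general_test(threshold=3):
    """Areas whose residue has built up enough to warrant testing."""
    return [area for area, h in heat.items() if h >= threshold]


# Example: three releases touch "billing" but the test strategy
# never covers it, so its heat keeps building.
record_change(["billing", "login"])
record_testing(["login"])
record_change(["billing", "search"])
record_testing(["search"])
record_change(["billing"])
print(areas_due_for_general_test())  # ['billing']
```

The threshold of 3 is arbitrary; in practice you would tune it, and you might weight heat by the risk of each change rather than counting every change as 1.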

If your question is how to feed back which changes have been made, in order to decide what kind of tests you want to perform as a result of those changes: I do not have a specific tool for that. We basically sit down, analyse the changes and the areas in which we have had a lot of issues recently, and from there decide the scope of the testing. Using a spreadsheet, basically.
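That spreadsheet exercise can look something like the sketch below; the areas, the counts, and the weighting are all hypothetical, just to show the shape of the analysis:

```python
# Hypothetical snapshot of the "spreadsheet": per area, how many
# changes went into this release and how many issues were found
# there recently. All numbers are made up for illustration.
areas = [
    # (area,      changes_this_release, recent_issues)
    ("checkout",  5,                    4),
    ("reporting", 1,                    0),
    ("auth",      0,                    3),
]


def priority(changes, issues):
    """Crude scoring: weight recent trouble heavier than churn."""
    return changes + 2 * issues


# Highest-priority areas first -> that's the scope discussion input.
scope = sorted(areas, key=lambda a: priority(a[1], a[2]), reverse=True)
for area, changes, issues in scope:
    print(f"{area}: priority {priority(changes, issues)}")
```

Note that "auth" scores high here despite having no changes this release, purely on recent issue history, which matches the point above about reacting to areas with a lot of recent trouble, not just to fresh changes.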

If your question is this: "Our strategy is to test certain areas every time, always, no matter the changes in the product; we find that to be lacking in return on investment, and we get more value out of testing that reacts to the changes and to the findings made during testing." Then I would suggest that you update your test strategy to be a little more efficient.

If your problem is that "God has decreed that I need to report test cases", i.e. you do not want to challenge that part, then my suggestion is to do exploratory testing and retrofit the test cases. After you have done the testing, update the ones that need to be updated and create the ones that need to be created, in line with what you actually tested. That way you have the least amount of waste.

Finally, in my experience the key in the transition from the waterfall approach, with a lot of testing late (i.e. a very long feedback loop between the change and its impact), towards earlier testing (a.k.a. get it while it's hot) is to shorten the feedback loop. This can typically be done with different kinds of automation, mainly talked about in the context of continuous delivery. That is: make sure the product is always built automatically. Make sure a test environment can be set up fast and effortlessly. Have tools that can load test data into the environment automatically. Automate the deployment of the product to the environment, and have a place where you can track and see which version is available where. Integrate the test reporting so that everyone can see the outcome of testing, like a team dashboard. All with the aim of minimising the time from code committed to testing performed and reported back to the committer. If you get this cycle short enough, let's say 2 hours, then I have found that regression testing as a concept is no more.
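The steps listed above form one pipeline from commit to feedback. Here is a rough sketch of that loop; every step body is a placeholder for your own build/deploy tooling, not a real CI API:

```python
# Sketch of the shortened feedback loop described above:
# commit -> build -> environment -> deploy -> test data -> tests
# -> report visible to the committer. All step bodies are stubs;
# real ones would call your build server, deploy scripts, etc.
import time


def run_pipeline(commit):
    start = time.monotonic()
    build = f"build-of-{commit}"           # build the product automatically
    env = "test-env-1"                     # set up a test environment fast
    deployed = (env, build)                # automated deployment, tracked
    load_test_data = True                  # tooling loads test data
    results = {"passed": 41, "failed": 1}  # placeholder test outcome
    elapsed = time.monotonic() - start
    # Publish somewhere everyone can see it, e.g. a team dashboard.
    return {
        "commit": commit,
        "deployed": deployed,
        "results": results,
        "feedback_loop_seconds": elapsed,
    }


report = run_pipeline("abc123")
print(report["commit"], report["results"])
```

The point of measuring `feedback_loop_seconds` is exactly the claim above: once that number is reliably small (the post suggests around 2 hours), regression stops being a separate phase.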

Good luck!


Any tips or hurdles from those of you that have successfully implemented exploratory testing during your regression testing?
Exploration is an approach. Regression testing is a technique.
You usually do regression testing after something changed in the product - internally or externally.
One way, at least, to start regression testing is with exploration.

I’d explore: what changed? how can I find that out? do I understand the changes? can someone guide me through them a bit? do I understand the context of the changes - why were they made, what did the business want, how did the developers understand it? where were they made? by whom? can I see the changes somehow - code, documents, e-mails, sketches, tools, interfaces? do the changed things have a history (has anyone changed something there often, were bugs easily introduced in some area, complicated code, constant complaints about the feature, etc.)? is the code of the change re-used somewhere, or maybe it isn’t but should be? is the change made in one place but should it be in more similar places? how can I set up the environment(s) to have a first look at the change? etc.

Once you start to understand some of these you might continue to explore. At some point exploration will decrease. You’ll start seeing some risks and performing some experiments.

Some of those experiments you might want to be new. For others you might apply a test idea you’ve already used before.

There’s no one right answer. It depends on context first, then on the tester’s capabilities: system understanding, knowledge of the business and the application, history of the product, people and people relations, technical understanding, exploration and critical/lateral thinking, a bit of skepticism towards what they’re told, etc.
The less ‘aware’ you are of these things, the more you should step up and start doing exploration and deep testing. Otherwise there’s a large chance you’re just confirming what you already know with your cases, while bugs are easily introduced without anyone noticing or ever looking for them.