In our current approach, when we pick tickets for automation, we check what can and cannot be automated. Anything that cannot be automated falls into the category of functional or exploratory testing. Along with that, features that heavily involve user experience are also considered under exploratory testing.
I would love to hear how others in this community maintain a balance between automation and exploratory testing.
You have requirements, and these will be written as test cases. Should a test case need to be executed regularly (for whatever reason), or should manual testing be too tedious (e.g. reactions of user roles to other user roles), it should get automated. Exploratory testing requires someone who knows the software very well and is more workflow-oriented than testing single requirements, imho. You could do that before major releases.
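To illustrate the "too tedious to do manually" case, here is a minimal sketch of what such a role-to-role check might look like once automated. It assumes pytest, and `create_user` / `can_view_profile` are hypothetical helpers standing in for whatever your application actually exposes:

```python
import pytest

# Hypothetical helpers standing in for your application's own API.
from myapp.testing import create_user, can_view_profile

# Every combination of actor role and target role, checked on every run --
# tedious to walk through manually, trivial for a machine to repeat.
ROLE_MATRIX = [
    ("admin", "member", True),    # admins may view member profiles
    ("member", "admin", False),   # members may not view admin profiles
    ("member", "member", True),
    ("guest", "member", False),
]

@pytest.mark.parametrize("actor_role, target_role, expected", ROLE_MATRIX)
def test_role_visibility(actor_role, target_role, expected):
    actor = create_user(role=actor_role)
    target = create_user(role=target_role)
    assert can_view_profile(actor, target) == expected
```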
Overall, this sounds like a good approach to take.
I think I would add a few refinements to it though:
I wouldn't look at just whether something can or can't be automated, but also how much ROI there is in automating it (there's a rough sketch of that calculation below). You may or may not already do that in your evaluation though.
It's useful to periodically re-evaluate your tests and see if anything needs to change. It's also worth going back and checking whether a feature should get some extra attention, especially if it now interacts with something new in the application.
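One rough way to make the ROI call concrete is to weigh the recurring cost of running a check manually against the cost of building and maintaining the script. The formula and numbers below are purely illustrative assumptions, not anything prescribed in this thread:

```python
def automation_roi(manual_minutes_per_run: float,
                   runs_per_year: int,
                   build_hours: float,
                   maintenance_hours_per_year: float,
                   hourly_rate: float = 50.0) -> float:
    """Estimated first-year saving; positive suggests automating is worth it."""
    manual_cost = (manual_minutes_per_run / 60) * runs_per_year * hourly_rate
    automation_cost = (build_hours + maintenance_hours_per_year) * hourly_rate
    return manual_cost - automation_cost

# Example: a 20-minute manual check run weekly, vs. roughly a day to
# automate plus a few hours of upkeep per year.
print(automation_roi(manual_minutes_per_run=20,
                     runs_per_year=52,
                     build_hours=8,
                     maintenance_hours_per_year=4))
```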
I agree that analyzing ROI while automating is very important. We usually categorize based on what can or cannot be automated, but prioritizing ROI over that matters more if we want to avoid technical debt. Thank you all for sharing your thoughts.
The balance changes throughout a product's lifecycle. When weighing it up, I find it's worth considering the known and unknown broad risk coverage aspects, whether you are still in a learning stage or not, and whether the coverage plays more to machine or human strengths.
Your scripted coverage tends to focus more on known risks, and in particular regression risk. For this coverage I'd tend to lean more towards automation where possible, as it often favors machine strengths: repeatability, big data, and covering things that machines are simply better suited to than people on their own.
Note that automation should not be limited to scripted coverage or just regression risk, even though that model is very common; think about what else would play to machine strengths.
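One concrete example of automation beyond scripted regression checks that plays to machine strengths is property-based testing, where the tool generates far more input variation than a person would ever type by hand. This is my own illustration rather than something from the thread; the sketch uses the Hypothesis library against a deliberately trivial function:

```python
from hypothesis import given, strategies as st

def deduplicate(items):
    """Toy function under test: removes duplicates while preserving order."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

# Hypothesis generates hundreds of varied inputs per run -- repeatable,
# high-volume checking that suits a machine far better than a human.
@given(st.lists(st.integers()))
def test_deduplicate_is_idempotent(items):
    once = deduplicate(items)
    assert deduplicate(once) == once
```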
Your exploratory approach considers a very different model: the acceptance that a lot of things are unknown, and that you still have things of value to learn about your product. This is also a good match for software development because of its prototyping nature, where there is often something new, or something being done for the first time, in what you are building.
Exploratory also favours human strengths: deeper investigative testing, learning experiments, broad tool usage, and it will often have much broader risk coverage than basic automation does. It also finds strength in variation.
Where you are in the stack also has an impact, but that's a slightly different balance question in its own right for both automation and exploratory testing. Developers, for example, will do a lot of exploratory testing at code and unit level, even if they do not shout about the fact that it's exploratory testing they are doing.
It's an interesting view that exploratory testing requires someone who knows the software very well.
I see it a different way: exploratory testing is all about accepting that there are things you might not know, or at least not know 100 percent. So if you have zero knowledge of the product and zero requirement documents, that also makes it a good match for exploratory testing.
Think of an explorer on a new island: they do not know the island that well, so they explore. It's the same with product testing.
They do, though, in my view need to be fairly decent testers to avoid it becoming random, ad hoc meandering around a product; good domain knowledge will also help with that.
It really suits broad risk investigation: do we have an indication of what the weather will be on the island? Let's pack clothes for that risk. Oh, there will be mountains to explore, so we'll need climbing gear for those.
On a product, that could mean, for example, that we want to explore security risks, so we might opt to use Burp Suite to investigate that risk.
Knowing it very well, as opposed to knowing very well that there are things you do not know, so you opt for an exploratory approach.
Either way, very interesting different takes on it.
When the project or feature is fairly new, reserve a lot of time for exploratory testing, covering maximum risks. As your application evolves, reassess which parts can be automated and which areas/features carrying a lot of risk still require exploratory testing. Continuously evaluate and update your test strategy. Also, these days I'm leveraging LLMs to assist me with exploratory testing by identifying potential areas of interest or anomalies that might require human investigation, and using LLMs to write me charters, which are the structured part of exploratory testing. Let me know if you would like to discuss this more, happy to help.
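As a rough idea of how charter generation with an LLM can look, here is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and feature notes are my own illustrative choices, not a prescription:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

feature_notes = """
New bulk-invite flow: admins can upload a CSV of email addresses,
invites are queued and sent asynchronously, failures are retried twice.
"""

prompt = (
    "You are helping a tester plan exploratory testing. "
    "From the feature notes below, draft 5 exploratory testing charters "
    "in the form: Explore <area> with <resources> to discover <information>.\n\n"
    + feature_notes
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```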
Exploratory + Scripted testing followed by automation.
Our team's culture calls for this approach, since the requirements are bound to change due to time constraints, so it's not possible to end up with what we started with.
Therefore, for us it's better to automate once the scope has been locked in place after testing and bug fixing.