I enjoy combinatorial testing, and for the most part I tend to cover it naturally in an exploratory way, unless it's going into an automation script, where the more structured techniques can be useful.
I do tend to find from experience that some of the theory behind it still holds water.
It's worth tracking what the actual code fix was for your historic issues: was it a single fix at, say, the unit level, or was it a complex fix touching five or six variables at the same time?
In most cases I've found it's a single line of code, which supports the idea that single-variable testing should be very common; the next most common type of fix is the interaction between two things, hence pairwise testing being common.
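To make that concrete, here is a minimal sketch of the singles/pairwise idea in Python, assuming a hypothetical settings dialog with three made-up parameters. A pairwise suite only has to cover every pair of parameter values, not every full combination, which is why it stays small.

```python
from itertools import combinations, product

# Hypothetical parameters; names and values are illustrative only.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "locale":  ["en", "de", "ja"],
    "theme":   ["light", "dark"],
}
names = list(params)

# Every (parameter, value) pair interaction a pairwise suite must cover.
pairs_to_cover = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

# All full combinations, i.e. what exhaustive coverage would run.
all_combos = [dict(zip(names, vals)) for vals in product(*params.values())]

def pairs_hit(combo):
    # The pairs exercised by one full combination.
    return {((a, combo[a]), (b, combo[b])) for a, b in combinations(names, 2)}

# Greedy cover: keep picking the combination that hits the most
# still-uncovered pairs. Fine for a sketch, not an optimised tool.
suite = []
while pairs_to_cover:
    best = max(all_combos, key=lambda c: len(pairs_hit(c) & pairs_to_cover))
    suite.append(best)
    pairs_to_cover -= pairs_hit(best)

print(f"exhaustive: {len(all_combos)} tests, pairwise: {len(suite)} tests")
# typically prints: exhaustive: 18 tests, pairwise: 9 or 10 tests
```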
I don't have the numbers to hand, but let's say that's pretty high coverage, maybe a ballpark 80 percent of the fixes for issues found. There are papers on this with more rigorous statistics.
On the other side of the argument, the more variables and combinations there are, the more complex things become, so whilst it's rarer for a root cause to be a nine-variable interaction, it is still a risk.
For low- to medium-risk products, singles, pairwise and at least some combinations beyond that are usually sufficient, maybe covering above 95% of combinatorial issues.
So with nine variables and multiple options for each, you can easily hit hundreds of thousands of combinations (four options per variable already gives 4^9 = 262,144), but often picking, say, twenty full nine-variable combinations will catch the nine-variable issues.
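A rough sketch of that sampling idea, assuming a hypothetical space of nine variables with four options each (all names made up):

```python
import random

# Hypothetical nine-variable space, four options per variable.
space = {f"var{i}": [f"opt{j}" for j in range(4)] for i in range(1, 10)}

total = 1
for options in space.values():
    total *= len(options)
print(f"exhaustive coverage would need {total} tests")  # 4**9 = 262,144

# Instead, sample a small number of complete, all-variables-set combinations.
random.seed(0)  # keep the sampled suite reproducible
sampled_suite = [
    {name: random.choice(options) for name, options in space.items()}
    for _ in range(20)
]
for i, combo in enumerate(sampled_suite, 1):
    print(i, combo)
```

Random sampling like this doesn't guarantee which high-order interaction it hits, but it keeps some full-width combinations in the suite at negligible cost.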
It remains a risk decision weighed against efficient test coverage: if it's a medical tool and people could die, then you may go for full variable coverage, and that's going straight into an automated script, playing to tooling's strength of high data coverage.
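If it helps, here is a minimal sketch of what pushing full variable coverage into an automated suite can look like with pytest; validate_dose() and the parameter values are hypothetical stand-ins, not a real API.

```python
from itertools import product

import pytest

UNITS = ["mg", "ml"]
ROUTES = ["oral", "iv"]
AGES = [1, 18, 65, 90]

def validate_dose(unit, route, age):
    # Placeholder for the real system under test.
    return {"unit": unit, "route": route, "age": age}

# Full variable coverage: every combination (2 * 2 * 4 = 16 cases here).
# The cost of generating and running the full product sits in the automated
# suite rather than with a human tester.
@pytest.mark.parametrize("unit,route,age", product(UNITS, ROUTES, AGES))
def test_dose_validation_full_coverage(unit, route, age):
    assert validate_dose(unit=unit, route=route, age=age) is not None
```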
Free-entry variables complicate things, but they usually push those tests back towards single-variable coverage.
As a side note, I also find the "was it a single-line code fix?" question useful for deciding where in the stack that risk should be covered; by the same theory, "most" issues can be caught at unit level, with only the rarer issues caught further up the stack.