There are two major factors to weigh when deciding this. The first is driven by the product: how likely is it that a change breaks compatibility, and how severe is it when that happens? You can get some indication of the likelihood from previous compatibility tests and how often they uncover something new. In our case they do quite frequently, but the findings are almost always tied to two specific types of changes, so we can safely dismiss most other types of changes. The severity part is more of a business / business intelligence question. If you are making a licensed product you can typically specify “compatible with this browser” and so on, and ignore the problem altogether. Or, if only a tiny part of your customer base uses a specific combination, you can make the business decision to save money by living with the risk.
The second factor is the browsers and platforms themselves. We, for instance, rely heavily on iOS, and updates to iOS and/or to the browsers typically impact the product, so we need to run these tests continuously for every beta release. That means we naturally test compatibility very frequently anyway, which basically makes the first factor somewhat irrelevant for us.
As with everything in testing, it is a trade-off between the money invested in testing and the risk of losing money by not doing it. If you invest too little you will lose money, and if you invest too much you will also lose money. That means these kinds of questions will always depend on your specific business.
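As a rough illustration of that trade-off (every figure below is an invented assumption, not data from any real product), you can sketch the expected loss of skipping the tests and compare it with the cost of running them:

```python
# Toy expected-value comparison for one release cycle.
# All numbers are made-up assumptions for illustration only.

test_cost_per_release = 800          # cost of running the compatibility suite
p_compat_bug = 0.15                  # chance a release ships a compatibility-breaking change
affected_customer_share = 0.05       # share of customers on the affected browser/OS combo
cost_if_shipped = 50_000             # estimated loss (support, churn) if the bug reaches them

expected_loss_without_testing = p_compat_bug * affected_customer_share * cost_if_shipped
print(f"Expected loss per release without testing: {expected_loss_without_testing:.0f}")
print(f"Cost of testing per release:               {test_cost_per_release}")

# If the expected loss is well below the testing cost, you are over-investing;
# if it is well above, you are under-investing.
if expected_loss_without_testing > test_cost_per_release:
    print("-> running the suite every release pays for itself")
else:
    print("-> consider testing less often, or only the risky change types")
```

With these particular made-up numbers the expected loss comes out below the testing cost, which is exactly the situation where trimming the suite down to the risky change types makes sense.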
Alternatively, if you cannot do this kind of analysis, you can instead look at it as having a fixed amount of time that you should spend as wisely as possible, and prioritize areas or activities from the angle of most bang for the buck. If you rarely find compatibility issues but very often find data-related problems, you would benefit from spending more time on the latter.
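If you track roughly how many issues each activity finds for the hours you put into it, that ranking falls out almost mechanically. A minimal sketch, with hypothetical activities and invented numbers:

```python
# Rank test activities by issues found per hour invested.
# Activity names and figures are hypothetical examples, not real data.

activities = {
    # name: (issues found last quarter, hours spent last quarter)
    "browser/OS compatibility": (2, 60),
    "data migration checks":    (14, 40),
    "API contract tests":       (6, 25),
}

ranked = sorted(
    activities.items(),
    key=lambda item: item[1][0] / item[1][1],  # issues per hour
    reverse=True,
)

for name, (issues, hours) in ranked:
    print(f"{name:28s} {issues / hours:.2f} issues/hour")

# Spend the next block of time on whatever tops the list; with these numbers
# that is the data-related checks rather than the compatibility suite.
```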