Test coverage for automated browser compatibility tests

Hi. I’m just starting on a project (a bespoke, web-based comprehensive business solution) and am trying to plan out the testing, specifically the OS and browser compatibility testing. This is new to me, and I’ve read a little about splitting requirements into tier 1, tier 2, etc. and prioritising the testing - so 100% test coverage for tier 1 operating systems and browsers, and reduced coverage for tier 2. Is this the norm? Presumably it isn’t feasible to provide 100% coverage across all permutations. I’d be really interested to hear what your experiences are in terms of coverage, and how you identify what does get tested for tier 2. Thank you in advance, everyone.

Hi Dilly, welcome to the Club. The way I’ve gone about making decisions around OS/browser compatibility testing in the past has been to draw up a grid of possible combinations, eliminate the impossible ones, then group the remaining ones by priority. I tended to do this on a spreadsheet and colour the cells for quick and easy reference, but there are other ways.

Apart from making the combinations most likely to be used priority 1 for testing, you may also be able to reduce the number of p1 combinations if you are comfortable with the assumption that a given browser behaves consistently regardless of which OS it’s running on. In that case you’d test each browser as a p1 test on the OS you think it’s most likely to be used on, then put tests for that browser on other OSes as p2.
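The grid approach above can be sketched in a few lines of code. This is a minimal illustration, not a recommendation of any particular tool - the OS/browser names, the "impossible" pairs, and the per-browser primary OS are all made-up placeholders; in practice you'd drive them from your own analytics:

```python
from itertools import product

OSES = ["Windows", "macOS", "Linux"]
BROWSERS = ["Chrome", "Firefox", "Safari", "Edge"]

# Hypothetical impossible combinations, e.g. desktop Safari
# only ships on macOS. Adjust to match reality for your users.
IMPOSSIBLE = {("Windows", "Safari"), ("Linux", "Safari"), ("Linux", "Edge")}

# Assumed "most likely" OS per browser - substitute real usage data.
PRIMARY_OS = {"Chrome": "Windows", "Firefox": "Linux",
              "Safari": "macOS", "Edge": "Windows"}

def build_grid():
    """Map each valid (OS, browser) pair to a test priority."""
    grid = {}
    for os_name, browser in product(OSES, BROWSERS):
        if (os_name, browser) in IMPOSSIBLE:
            continue  # eliminate impossible combinations entirely
        # p1 on the browser's primary OS, p2 everywhere else
        grid[(os_name, browser)] = "p1" if PRIMARY_OS[browser] == os_name else "p2"
    return grid

grid = build_grid()
```

The spreadsheet-with-coloured-cells version carries exactly the same information; the code form just makes it easy to feed the p1/p2 split straight into a CI matrix.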

Regarding your question about test coverage on tier 2, my approach would be to identify the core functionality and just test that - if you’re automating, then you could use tagging or suchlike to pick out a subset of your full test pack to run against the tier 2 combinations.
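To make the tagging idea concrete, here is a small sketch of tag-based subset selection. The test names and tags are hypothetical; the same effect is what frameworks like pytest give you with markers (e.g. `@pytest.mark.core` plus `pytest -m core`):

```python
# Each test case carries a set of tags (all names here are made up).
TEST_PACK = [
    {"name": "test_login",         "tags": {"core", "auth"}},
    {"name": "test_checkout",      "tags": {"core", "payments"}},
    {"name": "test_profile_photo", "tags": {"profile"}},
    {"name": "test_report_export", "tags": {"reports"}},
]

def select(tests, required_tag):
    """Pick the subset of tests carrying the given tag."""
    return [t["name"] for t in tests if required_tag in t["tags"]]

# Tier 1 combinations run the full pack; tier 2 only the "core" subset.
tier2_suite = select(TEST_PACK, "core")
```

The point is that the tier 2 run is just a filtered view of the same pack, so there's no second suite to maintain.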


To ensure that my web apps and native apps are compatible across devices and browsers, I formulate a strategy that prioritizes the devices first and then the browsers on which my product should be tested. One way to do so is to clearly list out the target segment for my product and where my end users are located.

If I am planning to launch my product in the US, then I will prioritize testing my product on iOS and try to cover a handful of Android devices. But if I am launching my product in India, then I will prioritize Android devices over iPhones.

Then, for browsers, I will test my product on the browser versions supported by the devices I prioritized earlier.

In this way, one can ensure that they are testing on the right devices.