How frequently should I be doing browser compatibility or device testing?

When working on a CI/CD project, how often should QA be doing browser compatibility or device testing? Should it be each time there’s a release (as part of regression), per feature, or for each ticket?

Within my team at the moment we tend to focus on the two most-used browsers (Chrome and Firefox) for day-to-day testing (based on Google Analytics data); however, with lots of new features recently being developed, there’s a greater need for cross-browser testing.

Anyone have any tips on how we can approach this?

Also has anyone tried Ghostlab? If so, how was your experience using it and would it be a tool you’d recommend?


Hey @jr08 and welcome to MoT <3

I believe the answer depends on the context of your application and SDLC. It’s different for every project. I’ve had a project where we did a browser compatibility check each release, but that was because they only released twice a year. Is it important to check browser compatibility because you release often and a lot of cross-browser bugs are showing up? Then hell yeah.

You should analyse your release process, features and tickets and weigh the ROI: how long does a full browser compatibility or device test take? I can imagine that if you are releasing weekly or even faster, you don’t always want to do a full sweep.
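As a back-of-the-envelope sketch of that ROI question (all numbers here are illustrative assumptions, not recommendations), you can compare what a full sweep costs per month at different release cadences:

```python
# Rough ROI sketch for deciding how often to run a full cross-browser sweep.
# The sweep duration and release cadences are made-up illustrative numbers.

def sweep_hours_per_month(releases_per_month: float, hours_per_sweep: float) -> float:
    """Monthly cost of running a full compatibility sweep on every release."""
    return releases_per_month * hours_per_sweep

# Weekly releases with a hypothetical 6-hour manual sweep:
weekly_cost = sweep_hours_per_month(4, 6.0)        # 24 hours/month
# Twice-yearly releases with the same sweep, averaged per month:
rare_cost = sweep_hours_per_month(2, 6.0) / 12     # 1 hour/month
```

If the weekly number dwarfs what the bugs it catches would cost you, that’s a sign to sweep only on bigger features or UI-touching tickets rather than every release.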

It could be that if you are building a bigger feature, you want to do a sweep. If you see that a regular ticket includes changes to the UI, then you maybe also want to do a cross-browser/device test.

We currently support Firefox/Edge/Chrome, but we really only test on Chrome, and during the regression tests we tend to pick a different browser once in a while (we release every 2-3 weeks).
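For what it’s worth, that “pick a different browser once in a while” habit can be made deterministic so it never gets forgotten. A minimal sketch (the browser list, sprint numbering, and the every-third-sprint cadence are all assumptions for illustration):

```python
# Rotate a non-default browser into regression every third sprint,
# round-robin, so each supported browser gets exercised eventually.
ROTATION = ["firefox", "edge"]  # hypothetical non-default supported browsers

def regression_browser(sprint_number: int, default: str = "chrome") -> str:
    """Return the browser to run this sprint's regression suite on."""
    if sprint_number % 3 == 0:
        return ROTATION[(sprint_number // 3) % len(ROTATION)]
    return default
```

So sprints 1 and 2 run on Chrome, sprint 3 swaps in Firefox’s partner Edge, sprint 6 swaps in Firefox, and so on; tweak the modulus to match your own release cadence.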


Very, very good question. So, on the desktop context of the question…
Firefox is really tiny these days, but you need instrumentation and metrics to back up such a statement. For example, where I am, Safari is big news, but only because a larger proportion (still less than 10%, but unfortunately very vocal) of our customers use Macs - although in reality a lot of them get forced to use Chrome on the Mac. But yes, it’s not that clear cut. Right now my CI/CD line runs Chrome for 99% of the tests, but it can run Edge and Firefox. My problem is that half of the tests only work on Chrome due to WebDriver inconsistencies and the sheer amount of work needed to maintain portability, which I never foresaw being quite so much. My coverage looks like this:

  • Windows 60%
  • Linux 30%
  • macOS 10%

And by browser across these:

  • Chrome 90%
  • Edge and Firefox 10%
  • Safari (issues)

I run any manual smoke tests using Opera and Vivaldi just to mix it all up. If you have a Chinese market, they have their own browsers eating into the marketplace to look out for.

It’s really going to be, as @kristof points out, dependent on who your customer base is. My experience is that our mobile market balance does not look like the ones other people publish on mobile device use, for example. I’m keen to see what happens when we add metrics into the browser end of the product; you need anonymised metrics for these decisions. But a word of warning: metrics gathering lands you in a load of pain, because the data is often unmanageable and not terribly useful to the business.


I haven’t used Ghostlab, but I have used BrowserStack before. I don’t know what the budget in your company is, but if you can convince your managers to cough up the dough, it will save you time and make cross-browser testing easier. Another option is Sauce Labs, but bear in mind that it’s also a commercial service.

There are open-source alternatives, but those might require more effort to set up and maintain.


There are two major factors when deciding this. The first is driven by the product: how likely is it that a change breaks some compatibility, and how severe is it if it does? For the first part, you can get some indication from previous compatibility tests by seeing how often they uncover something new. In our case this happens quite frequently, but the issues are almost always related to two types of changes, so we can safely dismiss most other types of changes. The second part is a business / business intelligence type of question. If you are making a licensed product, you can typically specify “compatible with this browser” etc. and ignore the problem altogether. Or if only a tiny part of your customer base uses a specific combination, you can take the business decision to save money by living with the risk.

The second factor is browsers. We, for instance, rely heavily on iOS, and updates in iOS and/or in browsers typically impact the product, so we need to be sure to continuously run these types of tests for every beta release etc. That means we naturally test compatibility very frequently, which basically makes the first factor somewhat irrelevant.

As with everything in testing, it is a trade-off between money invested in testing versus the risk of losing money by not doing it. When you invest too little you will lose money, and if you invest too much you will also lose money. Which means that these kinds of questions will always depend on your specific business.

Alternatively, if you cannot do this kind of analysis, you can instead look at it as: you have X amount of time, and you should spend it as wisely as possible. Then prioritise an area or activity from the angle of most bang for the buck. If you rarely find compatibility issues but very often find data-related problems, you would benefit from spending more time on the latter.


As with everything, this is context specific, but a good practice here is putting something about browser compatibility in the acceptance criteria for each story. Even if it means a discussion and a decision not to run any browser compatibility testing, at least it is given consideration.

I would also agree with BrowserStack, as mentioned by @mirza. We now run tests against the top 5 browsers on desktop and mobile in each two-week sprint.


Of course… It depends.

But I like to recommend a core set of key workflows that are tested on every supported browser ideally on each build but at least nightly. These are just your priority regression tests. I don’t recommend running all UI or e2e tests on more than one browser.
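One way to sketch that split in code (the tag name and browser list are hypothetical, and how you tag tests depends on your framework):

```python
# Sketch of the "core workflows on every browser, everything else on one
# browser" policy. Tags and browser names are illustrative assumptions.

CORE_TAG = "smoke"
ALL_BROWSERS = ["chrome", "firefox", "edge", "safari"]

def browsers_for(test_tags: set) -> list:
    """Priority regression (smoke) workflows run on every supported
    browser; all other UI/e2e tests run on the default browser only."""
    if CORE_TAG in test_tags:
        return ALL_BROWSERS
    return ["chrome"]
```

In pytest or similar you’d feed this into a parametrize hook so the tagged workflows fan out across the grid on each build (or the nightly cron), while the bulk of the suite stays single-browser and fast.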

I’d then combine that with RUM data, analytics and user reports.

Depending on your market and deployment strategy, you might then not do any manual browser compatibility testing after the initial feature release. Legacy browser versions aren’t changing, after all, and neither should your stable features. So you only need to manually check new stuff and new browser versions.

P.S. The two providers I’m familiar with are Sauce Labs and BrowserStack.


How well do these platforms handle clipboard testing? Just incidentally, Matt.

They just provide the browsers for Selenium, so they work the same as any other Selenium automation.
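Concretely, “providing the browsers” usually means pointing Selenium’s `webdriver.Remote` at the vendor’s grid URL with W3C-style capabilities. A minimal sketch of the capability payload (vendor-specific option blocks vary per provider, so treat the details as assumptions and check their capability builder docs):

```python
# Sketch of the W3C WebDriver capabilities you'd hand to a cloud Selenium
# grid. Vendor extras (e.g. BrowserStack's "bstack:options" block) are
# provider-specific and omitted here; values below are illustrative only.

def cloud_capabilities(browser: str, browser_version: str, platform: str) -> dict:
    return {
        "browserName": browser,
        "browserVersion": browser_version,
        "platformName": platform,
    }

caps = cloud_capabilities("firefox", "latest", "Windows 11")
# Then, roughly (untested, grid URL and credentials come from the vendor):
#   from selenium import webdriver
#   driver = webdriver.Remote(command_executor=GRID_URL, options=...)
```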

I mean, can you verify that copy-paste really works, as if a human were standing there, and that app permission prompts for these things do what they say on the tin, when the browser is on a computer whose clipboard you don’t control at all, so there’s no clipboard monitoring/hooking available to check that injection works?

So for most browsers, yeah, they are just VMs and behave the same as locally. There is an option for headless testing, and to be honest I’m not certain about copy and paste; I keep forgetting to check. Not sure why you need that as part of an application test, but you could always stick to a normal Windows or Mac VM if you need to. They have that option.


I’m considering a cloud-hosted test environment, but have so little time for experiments - it also requires that I split out and tag the tests that are most useful to run in a device/browser farm versus those that are less useful to run there. Too many excuses.

The clipboard question comes from the use case of giving a user a long code they need, and being sure that when it is NOT given in the form of a URL or a browser-then-launches-the-app integration, like a Zoom link, the codes really do copy-paste. This only affects those specific tests though.