I've read a bit about cross browser testing, but I've never had the chance to put it into practice until now (I'm finally working on a live project where it's something we have to worry about, yay!).
I'm looking into using webdriverio, which, from the documentation, seems to let you do cross browser testing pretty easily (and potentially for free). I say "easily" with a pinch of salt; I'm sure it comes with its own headaches!
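From what I can tell, the appeal is that the webdriverio runner takes a list of capabilities, so one suite can be pointed at several browsers. Here's roughly what I have in mind, untested on my side, and the option names come from the current WebdriverIO docs so they may differ slightly between versions:

```javascript
// wdio.conf.js - minimal sketch; check the docs for the WebdriverIO version you install,
// as option names have shifted a little between major releases.
exports.config = {
  specs: ['./test/specs/**/*.js'],   // assumed spec file layout
  // One entry per browser you agree to support; the runner executes the suite
  // against each of them, in parallel where possible.
  capabilities: [
    { browserName: 'chrome' },
    { browserName: 'firefox' },
  ],
  framework: 'mocha',                // or jasmine/cucumber, whichever the team prefers
  reporters: ['spec'],
};
```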
Obviously the first place to start would be to ask your team: how many browsers and versions are we going to support and why?
After that question, what would you advise?
Jumping ahead to the tools one can use to do this, does anyone have any experience reports? Maybe you've used a few and could share the pros and cons you've found?
I have started down the road of device/browser testing. I would first look at your Google Analytics results to see what you must test, then at stats for the whole of the UK to see what you could also try to test.
The first thing I noticed is that a lot of companies use emulators rather than real devices, so you'll have to decide whether you want to trust emulators, although with your budget it sounds like you won't have much choice.
If it's just browser testing, you can use Selenium Grid.
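To make that concrete, "using Grid" mostly just means pointing your WebDriver client at the hub URL instead of a local driver. A rough sketch using the official selenium-webdriver JavaScript bindings (the hub address and target URL are placeholders; swap in wherever your hub actually runs):

```javascript
// Sketch: running the same quick check against a Selenium Grid hub for each browser.
// Assumes a hub listening on localhost:4444; adjust to your own setup.
const { Builder } = require('selenium-webdriver');

async function smokeTest(browserName) {
  const driver = await new Builder()
    .usingServer('http://localhost:4444/wd/hub')  // Grid hub endpoint
    .forBrowser(browserName)
    .build();
  try {
    await driver.get('https://example.com');      // placeholder URL
    console.log(browserName, await driver.getTitle());
  } finally {
    await driver.quit();
  }
}

(async () => {
  for (const browser of ['chrome', 'firefox']) {
    await smokeTest(browser);
  }
})();
```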
Once you've narrowed it down to a few providers, try them out. They'll usually give you some free minutes to try to get it working with your framework.
I know this because I've tried out two companies and it uncovered a lot of flaws. So for now we have gone back and begun to look at other competitors. Good luck.
We know we'll have mobile users so that's definitely on the radar for me. Emulators are a gamble but potentially cheaper than a device lab on site. Have you found any major issues using them?
I'm only on week two of a new job, so rooting out all the analytics is next on my to-do list. Lots of research at the minute about what our options are. I think if I can provide justification for a budget then I'll get it, but often in my experience you have to use free options as the justification.
For manual testing purposes, websites like browserstack.com can be quite helpful, as they use real devices streamed to your desktop rather than emulated ones (I think they can even work with Selenium tests, but the reliability is still questionable in my experience). For emulated devices you can even use Chrome's DevTools device mode, which renders the viewport as if it were a mobile device, but compared to the real deal it can be misleading in its results.
I have no experience with SauceLabs etc. (cloud services that run your automated tests), but I have always feared that it won't work flawlessly, as Mark pointed out in this thread. With the customers we currently work with, we can't do it properly anyway due to technical (VPN) and legal restrictions.
Therefore we are slowly starting to use Selenium Grid with our WebDriver tests and are building our own framework to initialise cross-browser tests on our own hardware. We are aiming to do it with things like Docker (virtualisation) to get good scalability. But I can assure you, from my point of view it is quite a challenge to get right.
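For anyone wanting a starting point, the rough shape of it (taking the original poster's webdriverio as the example client) looks something like the sketch below. The image names are the official docker-selenium ones, but the exact flags for wiring nodes to the hub depend on the Grid version you pull, and the WebdriverIO option names vary by release:

```javascript
// Sketch: pointing a WebdriverIO run at a Dockerised Selenium Grid.
// Something like the following starts a hub plus a Chrome node from the official
// docker-selenium images (exact flags/env vars depend on the Grid version):
//
//   docker run -d -p 4444:4444 --name selenium-hub selenium/hub
//   docker run -d --link selenium-hub:hub selenium/node-chrome
//
// wdio.conf.js then targets the hub instead of starting a local driver:
exports.config = {
  hostname: 'localhost',   // called 'host' in older WebdriverIO versions
  port: 4444,
  path: '/wd/hub',
  capabilities: [
    { browserName: 'chrome', maxInstances: 5 },  // scale out by adding more node containers
  ],
};
```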
And one more thing: there are no free options. Either the tools are free and you need more work from engineers to make them suit your needs, or you pay the big players tons of money and try to avoid as much infrastructural hassle as possible. Both strategies cost money, so it is more a strategic decision about your mid- and long-term goals. It also depends on what kind of "software under test" your infrastructure mainly has to cover: if it's AngularJS-heavy, then Protractor and Jasmine are on the table; or if it goes beyond SaaS, you may need a solution like TestOffice that can automate pretty much anything.
From the post I assume you are asking about cross-browser test automation.
First of all, as @stefan.spill mentioned, it will be expensive no matter which road you decide to take.
We decided to build our own infrastructure for one main reason: the cloud services were taking at least twice as long as the local Selenium server, and since you pay for testing minutes, it was getting way out of budget.
Regarding browser and version support, the most realistic approach is to stick to specific browser and Selenium versions (since driver versions are bound to specific Selenium versions). Furthermore, not all browser drivers are in good condition.
As for the tools, we tried webdriverio and nightwatchjs; both are similar, but nightwatchjs suited our needs better.
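For comparison, nightwatchjs handles the multi-browser side through named environments in its config. A rough sketch (the exact layout varies by Nightwatch version, and the folder name is an assumption):

```javascript
// nightwatch.conf.js - minimal sketch of per-browser environments.
// Run with e.g. `nightwatch` (default env) or `nightwatch --env firefox`.
module.exports = {
  src_folders: ['tests'],   // assumed test folder
  test_settings: {
    default: {
      desiredCapabilities: { browserName: 'chrome' },
    },
    firefox: {
      desiredCapabilities: { browserName: 'firefox' },
    },
  },
};
```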
After you get the team's feedback, the next step, I guess, if you go for an in-house solution, is to explore some tools and build a small proof of concept.
Looking at BrowserStack, I think something like that would be my first choice. I've tried the free trial and it seemed OK. I had some lag issues with it, but I'm not sure if that was a free account restriction or something else.
Currently we have a small in-house device lab made by pooling our office devices. We know this isn't a long-term solution as it takes a lot of man hours, but it's proving the need for a different solution as we find device/OS-specific bugs.
Cross-browser testing is never easy or cheap; however, in my experience it is necessary, as I always find a lot of bugs when running my cross-browser tests.
I like to use Ranorex. It allows you to create just one script and run it on the most common browsers (Chrome, Firefox, IE, Edge, Chromium) automatically. For testing other, less commonly used browsers or mobile devices, I use BrowserStack Automate Pro, which I have connected to Ranorex via its standard WebDriver endpoint functionality. This allows me to use the same script for mobile testing as well, saving a lot of time and money on maintenance.
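Ranorex sets this up through its own configuration rather than code, but under the hood it is just a standard remote WebDriver session against BrowserStack's hub, roughly like the sketch below with any WebDriver client. The hub URL and capability names should be double-checked against BrowserStack's current docs, and the credentials and target URL are placeholders:

```javascript
// Sketch: a remote session against BrowserStack's WebDriver endpoint.
// YOUR_USER / YOUR_KEY are placeholders for the BrowserStack account credentials.
const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder()
    .usingServer('https://YOUR_USER:YOUR_KEY@hub-cloud.browserstack.com/wd/hub')
    .withCapabilities({
      browserName: 'Chrome',   // capability names per BrowserStack's docs
      os: 'Windows',
      os_version: '10',
    })
    .build();
  try {
    await driver.get('https://example.com');   // placeholder URL
  } finally {
    await driver.quit();
  }
})();
```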
On BrowserStack you can choose a plan with emulators or real devices, depending on your needs.
Both Ranorex and BrowserStack are paid services; however, you save a lot of time (and money) by not having to create something yourself and keep it up to date with new devices and browsers.
We still run our cross browser tests manually; we have a fairly succinct list of supported browsers, and access to mobiles and to VMs running older versions of IE.
I recommend checking which browsers your developers develop against. My guess is that none of them will use IE; ours don't, so we always start testing in IE.
IE and Edge are so far proving to be a bit of a headache. It seems that whatever the other browsers (Chrome, Safari, Firefox) do, they don't do. That's a bit of a general statement, but those two browsers are big sticking points, so yes, they're the ones I go to first when I'm checking. I cringe a bit every time I open them, but it's my job, because obviously people out there are using them.
Cross browser testing is a very important aspect of testing web applications. It is also called "browser compatibility" testing. As the name suggests, it is basically the process of testing a web application in multiple different browsers. We also need to check that the web application is displayed correctly across browsers and that JavaScript, AJAX and authentication all work fine. In short, you do cross browser testing to ensure that the application behaves the same in all browsers.
While performing browser compatibility tests, you may also need to evaluate whether any display inconsistencies are acceptable. Sometimes you may also have to check "mobile browser compatibility", since many software testing companies also deal with mobile testing services.
Why do we perform cross browser testing?
Cross browser testing is a necessary part of the testing process these days, as it ensures that all clients/users have a consistent experience using the product. Also, as we all know, browsers are constantly updating and new versions of each browser are released on a regular basis, so there is a chance that the behaviour of the application changes after an update.
Hope this information is clear; get back to us if you need more information.
One thing I'd suggest is worth doing to inform where you focus your effort is to get some data on which platforms the end users access the app under test on, or, if it hasn't gone live yet, on what they're currently using. If you can get some trend data as well as a current snapshot, that's even better. Don't just rely on general internet usage stats, as they vary quite a bit, and depending on who your end users are and what they use the app for, you may find the usage profile is pretty different - you may even have to start looking at some of the old and mouldering versions of IE, regardless of what's still in MS support!
@heather_reid have you heard of a cross browser tool called Rainforest? It has recently come onto my radar via a manager at work who is being squeezed by salespeople, and I haven't seen it listed here as a tool under resources - so I just wanted to see if anyone had heard of it, used it or had any experience with the tool?
Thanks
Sarah
I am still very junior in software testing, and this topic is what I'm curious about at the moment, so I would like to ask about your experiences with multi-browser testing.
I'm mostly curious about the procedure and the best way of doing it…
I have looked at a few cross-browser testing tools (SauceLabs, BrowserStack, CrossBrowserTesting) but am curious to hear more real-life experience as well.
I saw that those tools have Selenium integration, but in my (small) experience the functionality works the same on all browsers, which makes Selenium tests feel a bit useless…
To me it seems that the biggest difference from browser to browser is aesthetics - I have seen that those tools can take screenshots of a web app across browsers, but how does that work when the app has 300+ screens?
Going through 1500+ screenshots seems to me like a longer and harder job than going through each browser manually?
Reading these replies I have also found out about Ranorex, which seems nice, but it looks a bit too pricey compared to other tools that are more "famous"…
As you can see I am inexperienced in this, so any feedback is very useful!
Thank you very much for your answers!
Rainforest was in use at my current company, prior to my starting, in lieu of having an in-house tester.
I think it has some interesting possibilities and common drawbacks.
Fundamentally it's crowdsourced scripted test case runs. You provide the test scripts and the browser targets, and they provide the testers. Because it's distributed, you can get a lot of coverage fairly quickly across multiple browsers. They also used multiple testers per test to try to limit false positives.
But… it's scripted test cases run by people who (likely) have no knowledge of your product or (probably) your industry, with unverifiable credentials or skills.
We were on a very low-end tier and could have used up our credits incredibly quickly. On top of that, it meant having a full-time person to manage the existing test suite, create new tests, manage environments, and manage test runs. So I could have the expense of that full-time employee, plus the expense of our Rainforest package (more than it costs for a good tester where I am), for something we could run the equivalent of a couple of days a month.
They also have an exploratory testing package they tried to sell me on. That's much more interesting to me, having fresh eyes on a feature or piece of functionality, but I couldn't justify the price. Have Rainforest do a day or two of exploratory testing a month for me, or hire a full-time person and have tooling/training budget left over… Easy call for me.
They worked hard to retain me, but when I told them I didn't have the capacity or desire to continue using them, they did terminate our contract early. That alone means I'd consider them in the future, given the right circumstances.
For my own cross-browser/device testing (which makes up a large part of my role), I have as many real browsers/devices as possible on my machine or in our device lab, I use modern.ie's VMs for testing old versions of IE, and I use BrowserStack for quick checks and for browsers/devices I don't have in front of me.
Generally I find that BrowserStack is pretty accurate (especially their "real devices", which are supposedly not emulators), but actual browsers/devices (or local VMs, if needs be) are always preferable for in-depth testing because there's no lag and they're much more pleasant to use than virtual ones. Also, testing on virtual mobile devices can often mask problems (e.g. interfaces that are difficult to use with a touchscreen compared to a mouse), or create problems that don't exist in the real world (e.g. mobile-friendly interfaces that are designed to work well with touch but not really optimised for mouse).
I prefer BrowserStack to other solutions like SauceLabs because a) the UI is a bit slicker and well-optimised for "manual" testing and b) I don't really do automated UI testing, so I don't need all of the automation bells and whistles that other browser testing services focus on. I know BrowserStack has its own Selenium testing platform, but that costs extra and I don't currently have any need of it.