I'm on the same page, but I prefer to think that I'm being "smart & optimal" rather than a "cynic".
There will always be weird issues with specific configurations (OS + browser + device, etc.), but pre-release I prefer to focus our testing efforts on the configurations most used by our customers (e.g. Windows 10 + Chrome), and post-release make sure monitoring + observability is good enough that we can detect issues in the production environment (e.g. the mobile app crashing on a certain low-cost phone).
What to do with all the time saved by skipping testing a broader set of configurations? A couple of ideas:
Improve your CI/CD process so potential bug fixes can land in your customers' hands faster.
Improve monitoring and address any abnormalities detected in Prod.
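The "focus on the configurations your customers actually use" idea above can be sketched roughly as follows. This is only an illustration: the usage shares and the 90% coverage target are made-up assumptions, not real data.

```python
# Hypothetical sketch: pick the smallest set of configurations that covers
# ~90% of customer traffic, so pre-release testing focuses on what matters.
# The usage numbers below are invented for illustration.

def configs_covering(usage, target=0.90):
    """Return configs (most-used first) until cumulative share >= target."""
    picked, covered = [], 0.0
    for config, share in sorted(usage.items(), key=lambda kv: -kv[1]):
        picked.append(config)
        covered += share
        if covered >= target:
            break
    return picked

usage = {
    ("Windows 10", "Chrome"): 0.55,
    ("macOS", "Safari"): 0.20,
    ("Windows 10", "Edge"): 0.15,
    ("Linux", "Firefox"): 0.05,
    ("Windows 7", "IE 11"): 0.05,
}

print(configs_covering(usage))
```

With these made-up shares, the first three configurations already cover 90%, so the last two would only get post-release monitoring attention.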
@pasku_lh Thanks for the reply
I used the term "cynic" because in the past I feel there's been a lot of emphasis on device coverage, and I've always wondered if it was really required. E.g. if anyone on the team reports a bug against a Samsung S7, I'm immediately questioning the relevance of the S7 and whether it reproduces on other devices, i.e. where the defect actually manifests itself. I like and wholly agree with your thoughts. I also have an additional benefit in my current workplace: it's principally B2B, so we can always recommend specific configurations (or officially "support", i.e. it's been tested on, certain devices/web apps), which certainly alleviates the testing burden a bit.
It depends on what counts as a defect. I've worked on some applications which are fairly simple and have a limited user experience. I have also worked on applications which use an adaptive design: they change how they look and how they use space based on the resolution of the display.
So if a test failed on a Samsung S7, it wasn't the fact that it was a Samsung S7; it was more that the resolution was 2560 × 1440. We might have a few devices which are 2560 × 1440, and they would all be considered equivalent.
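A minimal sketch of that equivalence-class idea, with a made-up device list (only the resolutions matter here): group devices by resolution and test one representative per group.

```python
# Sketch of the "equivalent devices" idea: a failure on a Samsung S7 is
# often really a failure at 2560x1440, so group a (made-up) device list
# by resolution and pick one representative per group.

from collections import defaultdict

devices = [
    ("Samsung S7", (2560, 1440)),
    ("LG G5", (2560, 1440)),
    ("Pixel 2", (1920, 1080)),
    ("Moto G5", (1920, 1080)),
]

def representatives(devices):
    """Map each resolution to the first device seen at that resolution."""
    groups = defaultdict(list)
    for name, resolution in devices:
        groups[resolution].append(name)
    return {res: names[0] for res, names in groups.items()}

print(representatives(devices))
```

Four devices collapse into two resolution classes, so only two need hands-on testing for layout issues.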
I have seen cases where the operating system plays a significant factor. An Android device versus an iPhone can be significantly different, so I'd want a mix of different devices and OS versions to test against. When I first started testing mobile devices there was a HUGE number of variations: version 1 to version 2 of an OS could have a significant impact, and Android vs. iPhone made a huge difference. We'd look at what devices were accessing the company's site and prioritize them as needed. In some cases it was just easier to write a different code base for each OS.
Today things seem to be stabilizing. It was just like when the Internet first started: the difference between browsers was very significant, and at some point it just became unrealistic to support everything. Back then you supported Internet Explorer but noted that 5.0, 5.5, 6.0, etc. were significantly different. Now Edge uses the Chromium rendering engine, so which browser you use is far less significant.
Heck, even back in the 80s, which PC computers you supported was critical. The list of clone computers was significant. When 64-bit computers came out, supporting manufacturers which used AMD processors was more important than Intel, because more people owned computers with AMD processors. Intel and IBM were trying to kill the clone market, so all clone manufacturers had to make enough changes to their systems to not get sued by IBM. The differences were significant enough that I worked at companies which published a list of clones they would support. Now a PC is a PC.
I think smartphones have now stabilized enough that different hardware and OSes are much less of a concern. Screen resolution still seems to be in flux.
Thanks @darrell.grainger, a very in-depth reply. I agree with all of this. For me, the key risk with mobile displays at the moment (for mobile web sites at least) is screen resolution. So rather than testing "devices" (literally), we're testing screen resolutions and orientations (portrait/landscape).
For web apps, I'm not thinking about versions of OS etc., just the browser app being used.
I also take the approach of using Chrome's emulator and then switching to some physical devices (initially one sample Android phone/tablet and one Apple phone/tablet) towards the end of development, on the assumption that we will find only a minimal number of bugs on a physical device (in terms of UI and functionality) that can't be seen on the emulator.
If we feel some further device coverage is required, then I may dip into BrowserStack for a month.
In my experience, issues and bugs differ across devices, OS, browsers, and networks, so you should focus only on the devices most used by your customers. It is also not that time-consuming…
See, this is where I think I'm a bit of a pedant: in my brain this isn't "device" testing but rather OS testing. As an example, would you find a bug specifically on a Samsung S9 vs. a Huawei P30? No (I doubt it): each device is likely to be an equivalent representation of a configuration of screen size and OS. So, to get back to the original point (and perhaps being a pedant), actual device testing probably isn't required, but rather testing a coverage of other variables such as screen size, OS, browser app, etc.
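A rough sketch of that reframing: instead of enumerating devices, enumerate the variables themselves and take their product. The variable values below are illustrative assumptions, not a recommended matrix.

```python
# Sketch of testing variables rather than devices: enumerate the full
# OS x screen-width x browser matrix with itertools.product. Each real
# device then just maps onto one cell of this (much smaller) matrix.

from itertools import product

os_versions = ["Android 10", "Android 11", "iOS 14"]
screen_widths = [360, 412, 768]          # common CSS-pixel widths
browsers = ["Chrome", "Safari", "Samsung Internet"]

matrix = list(product(os_versions, screen_widths, browsers))
print(len(matrix))  # 3 * 3 * 3 = 27 combinations
```

Even the full cross-product is 27 cells, far fewer than the hundreds of handsets on the market, and pairwise-combination tools can shrink it further.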
Bit late to the party on this one, but yes, I find issues present on one device but not on others. This is mostly down to OS, admittedly (e.g. Android 9 vs 10), but also due to other factors; screen resolution has been one at my work recently. Either things not fitting on the screen properly, accessibility settings not producing the same results, or images being cropped differently and so looking blurry. Also found out that face ID doesn't work in apps on most phones other than the Pixel 4, because the methods used by most other phones don't meet Android's security standard.
I tend to check:
Flagship phone vs cheap phone
Big screen vs small screen
Latest OS vs older OS
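That checklist can be sketched as picking the extremes on each axis (price, screen size, OS version) from a device catalogue. The catalogue below is entirely made up for illustration.

```python
# Sketch of the checklist above: from a (made-up) device catalogue, pick
# the min and max on each axis so one small set of phones covers
# "flagship vs cheap", "big vs small screen" and "latest vs older OS".

devices = [
    # name, price (USD), screen diagonal (inches), Android API level
    ("Budget A1", 120, 5.0, 28),
    ("Midrange M3", 350, 6.1, 26),
    ("Flagship F9", 999, 6.8, 34),
    ("Compact C2", 600, 5.4, 33),
]

def boundary_devices(devices):
    """Collect the device names at the extremes of each column."""
    axis_columns = [1, 2, 3]             # price, screen, OS level
    picked = set()
    for idx in axis_columns:
        picked.add(min(devices, key=lambda d: d[idx])[0])
        picked.add(max(devices, key=lambda d: d[idx])[0])
    return picked

print(sorted(boundary_devices(devices)))
```

Note how the extremes often coincide (the cheapest phone is also the smallest-screened one here), so the resulting test set is usually smaller than two devices per axis.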
I've been in mobile technologies since 2008, so I've had quite a few opportunities to find particular bugs.
And yes, in every project that I've worked on I'm always finding bugs that occur on specific devices or one device only.
They are not always critical, and they are sometimes not even worth fixing. But it always depends on the project focus. You've asked about bugs in general, so I'm answering about bugs in general.
The simplest examples of devices that cause a lot of trouble:
Samsung Galaxy S3 mini
Samsung Galaxy S4 mini
Samsung Galaxy Grand
iPhone XR
iPhone SE
iPhone XS Max
Nvidia Shield K1
Infinix Zero 5 Pro
And other devices on the MediaTek MT6757CD Helio P20
Android GO devices
Android devices with API lower than 19
Devices with custom ROMs
Devices with custom manufacturer UIs such as Xperia UI