Mobile testing risk

All,

I must admit I’ve always been somewhat of a mobile testing ‘cynic’ in that I do question a need to test on a huge array of devices.

Wondering if anyone has ever found a bug that only appeared on one specific device (or a very small number of devices), and if so, what was the cause?

Same question for mobile web apps - I generally just go for 1-2 devices across the 4 main classes of device (Android/Apple tablet/phone) to begin with.

I’m on the same page, but I prefer to think that I’m being “smart & optimal” rather than a “cynic” :stuck_out_tongue:

There will always be weird issues with specific configurations (OS + browser + device + etc.), but pre-release I prefer to focus our testing efforts on the configurations most used by our customers (e.g. Windows 10 + Chrome), and post-release make sure monitoring and observability are good enough that we can detect issues in production (e.g. the mobile app crashing on a certain low-cost phone).

What to do with all this time saved by skipping testing a broader set of configurations? Couple of ideas:

  • Improve your CI/CD process so potential bug fixes can land in your customers’ hands faster.
  • Improve monitoring and address any abnormalities detected in Prod.
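To make the “focus on the configurations most used by our customers” idea concrete, here is a minimal sketch of how you might derive a test matrix from usage analytics: pick the most-used configurations until a target share of traffic is covered. The usage figures and configuration names below are invented for illustration; in practice they would come from your own analytics.

```python
# Hypothetical sketch: choose the smallest set of configurations
# (taken in order of usage share) whose combined share reaches a
# target coverage. Usage numbers are made up for illustration.

def configs_to_cover(usage, target=0.95):
    """Return (chosen_configs, covered_share) for the given target."""
    chosen, covered = [], 0.0
    # Walk configurations from most-used to least-used.
    for config, share in sorted(usage.items(), key=lambda kv: -kv[1]):
        chosen.append(config)
        covered += share
        if covered >= target:
            break
    return chosen, covered

usage = {
    "Windows 10 + Chrome": 0.55,
    "Android phone + Chrome": 0.20,
    "iPhone + Safari": 0.15,
    "iPad + Safari": 0.05,
    "Android tablet + Chrome": 0.03,
    "Low-cost Android phone": 0.02,
}

chosen, covered = configs_to_cover(usage, target=0.90)
print(chosen)   # the three most-used configurations
print(covered)  # their combined usage share
```

With a 90% target, the long tail of rarely-used configurations drops out of the pre-release matrix and is left to post-release monitoring instead.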

Hi! As for me, I haven’t. Bugs may differ, but I’ve never seen one that appears on only one device.

@pasku_lh Thanks for the reply :+1::+1:
I used the term “cynic” because in the past I feel there’s been a lot of emphasis on device coverage, and I’ve always wondered if this was really required. E.g. if anyone on the team reports a bug against a Samsung S7, I’m immediately questioning the relevance of the S7 and whether it occurs on other devices, i.e. where the defect actually manifests itself. I like and wholly agree with your thoughts. I also have an additional benefit in my current workplace: it’s principally B2B, so we can always recommend specific configurations (or officially ‘support’, i.e. test on, certain devices/web apps), which certainly alleviates the burden on testing a bit.

It depends on what the defect is. I’ve worked on some applications which are fairly simple and have a limited user experience. I have also worked on applications which use an adaptive design: they change how they look and how they use space based on the resolution of the display.

So if a test failed on a Samsung S7, it wasn’t the fact that it was a Samsung S7; it was more that the resolution was 2560 x 1440. We might have a few devices which are 2560 x 1440, and they would all be considered equivalent.

I have seen cases where the operating system plays a significant role. An Android device versus an iPhone can be significantly different, so I’d want a mix of different devices and OS versions to test against. When I first started testing mobile devices there was a HUGE number of variations. Version 1 to version 2 of an OS could have a significant impact. Android vs. iPhone made a huge difference. We’d look at what devices were accessing the company and prioritize them as needed. In some cases it was just easier to write a different code base for each OS.

Today things seem to be stabilizing. It was just like when the Internet first started: the differences between browsers were very significant. At some point it just became unrealistic to support everything. Back then, you supported Internet Explorer but noted that 5.0, 5.5, 6.0, etc. were significantly different. Now Edge uses the Chromium rendering engine, so which browser you use is far less significant.

Heck, even back in the 80s, which PC computers you supported was critical. The list of clone computers was significant. When 64-bit computers came out, supporting manufacturers which used AMD processors was more important than Intel, because more people owned computers with AMD processors. Intel and IBM were trying to kill the clone market, so all clone manufacturers had to make enough changes to their systems to not get sued by IBM. The differences were significant enough that I worked at companies which published a list of clones they would support. Now a PC is a PC.

I think, now, smart phones have stabilized enough that different hardware and OS are much less of a concern. Screen resolution still seems to be in flux.


Thanks @darrell.grainger - very in-depth reply. I agree with all of this. For me, the key risk of mobile displays at this moment (for mobile websites at least) is screen resolution. So rather than testing “devices” (literally), we’re testing screen resolutions and orientations (portrait/landscape).
For web apps, I’m not thinking about versions of OS etc, just the browser app being used.

I also take the approach of using Chrome’s device emulator and then switching to some physical devices (initially one sample Android phone/tablet and one Apple phone/tablet) towards the end of development. The assumption is that we will find a minimal number of bugs on a physical device (in terms of UI and functionality) that can’t be seen in the emulator.
If we feel some further device coverage is required then I may dip into Browserstack for a month.

Hello,

In my experience, issues and bugs differ depending on the device, OS, browser, and network, so you should focus on the devices most used by your customers. It is also not that time-consuming…

But here are some basic test scenarios for reference too: https://www.testrigtechnologies.com/25-test-scenarios-for-mobile-app-testing/

See, this is where I think I’m a bit of a pedant - in my brain this isn’t “device” testing but rather OS testing. As an example, would you find a bug specifically on a Samsung S9 vs. a Huawei P30? No (I doubt it) - each device is likely to be an equivalent representation of a configuration of screen size and OS. So to get back to the original point, and perhaps being a pedant, actual device testing probably isn’t required, but rather testing a coverage of other variables such as screen size, OS, browser app, etc.
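The “devices are just equivalence classes” idea can be sketched as follows: collapse a device list by the variables that actually matter (resolution, OS, browser) and keep one representative per class. The device data below is illustrative only (resolutions are approximate factory values), not a recommended matrix.

```python
# Hypothetical sketch: group devices into equivalence classes by
# (resolution, os, browser) and keep one representative per class.
# The device list is invented for illustration.

def representatives(devices):
    """Map each (resolution, os, browser) class to one representative name."""
    classes = {}
    for device in devices:
        key = (device["resolution"], device["os"], device["browser"])
        classes.setdefault(key, device["name"])  # keep the first device seen
    return classes

devices = [
    {"name": "Samsung S9",     "resolution": "2960x1440", "os": "Android", "browser": "Chrome"},
    {"name": "Samsung Note 9", "resolution": "2960x1440", "os": "Android", "browser": "Chrome"},
    {"name": "Huawei P30",     "resolution": "2340x1080", "os": "Android", "browser": "Chrome"},
    {"name": "iPhone X",       "resolution": "2436x1125", "os": "iOS",     "browser": "Safari"},
]

classes = representatives(devices)
print(len(classes))  # fewer classes than devices: S9 and Note 9 collapse
for key, name in classes.items():
    print(key, "->", name)
```

Four devices reduce to three classes here, because the S9 and Note 9 share a resolution, OS, and browser, which is exactly the argument for testing variables rather than handsets.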