What's happening in your mobile testing world right now?

I’d like to get a general sense check of what’s happening with mobile testing in your workplace.

  • What mobile testing related objectives do you and your colleagues have?
  • What challenges do you currently face with implementing a mobile testing strategy?
  • What tooling are you using and how is it helping/hindering?
  • What latest innovations, models and techniques are supporting your mobile testing efforts?
  • What are some things you’d like to try yet haven’t been able to? How come?
  • What learning material are you using to support you?

Thanks for sharing.

I appreciate these are lots of high-level questions; we can always go deeper into topics as they arise on this thread by creating new topic posts here on The Club.

  • Testing against the top 6 most-visited browser/device combinations from our site
  • Not specifically working on a mobile test strategy at the moment
  • BrowserStack, but it’s slow and not every browser/device combination is available. We do this manually, which also takes time
  • We use a browser extension to quickly scan different viewports to see if we can spot issues early. We also have some automated tests in Playwright for that part.
  • Test automation with BrowserStack. It costs time and money to set up, and we don’t yet have a good feeling about the reliability of that step and what it will bring us.
  • Just keeping an eye out on the web/courses/meetups
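For the viewport scanning mentioned above, one common Playwright approach is to declare several device presets as projects, so the same checks run across viewports. This is a minimal sketch, not the poster’s actual configuration; the device names come from Playwright’s built-in registry and the chosen set is illustrative:

```ts
// playwright.config.ts — sketch of running the same tests across viewports.
// The particular devices listed here are an example selection.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'desktop-chrome', use: { ...devices['Desktop Chrome'] } },
    { name: 'iphone-13', use: { ...devices['iPhone 13'] } },
    { name: 'pixel-5', use: { ...devices['Pixel 5'] } },
  ],
});
```

Each project then runs the full test suite at that device’s viewport, which automates the kind of multi-viewport scan a browser extension does manually.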

In my previous role I worked extensively on mobile and was even the iOS automation champion in the company. I can share a few insights, but won’t be able to say what’s going on right now. I work with BE and Web only at the moment (not even the mobile view; it doesn’t work for the current web application, but that’s another discussion).

What mobile testing related objectives do you and your colleagues have?
We had a goal as mobile/QE to release mobile apps (both iOS and Android) more often. When I started working with mobile we released every 4-6 weeks; by the time I left we were doing weekly releases. Yes, every week users got a new app version. To be honest, some users even complained it was too often.

What challenges do you currently face with implementing a mobile testing strategy?
At that time too much was outsourced to QA, and developers were just building cool features. QA did testing, bug verification, regression, and managed the whole release process. I approached the iOS lead with a simple diagram showing where the bottleneck was and what we should do. Since then we started a long three-year journey to reshape the process, engage everyone in the team in all mobile testing activities, and upskill everyone on writing tests - unit, integration, visual, UI. Constant quality advocacy was part of the strategy as well.

What tooling are you using and how is it helping/hindering?
My rule of thumb is to use the same tools the engineering team uses, or tools native to the mobile framework. It’s a challenge to engage developers using Appium or any other external tool. We used iOS testing tools like XCTest and XCUITest. For Android it was Espresso and Robot Framework. All tests lived in the same code base - no external mobile testing repo.

What are some things you’d like to try yet haven’t been able to? How come?
I am not working with mobile now. However, I am curious about Maestro, but I think I can only run it locally; the cloud version is paid even for personal use. I am also curious about the Swift Testing framework that is an alternative to XCTest. To try it, I have kicked off building an iOS app with the help of AI. I am not an iOS dev, but it’s interesting to learn how to do it and how to deploy an app to the App Store. Super excited about it.

What learning material are you using to support you?
Vendor docs, AI assistance, googling, some YouTube videos. I think I saw something about Maestro on MoT when I searched. I need to check it out.


Mobile Testing Objectives

From my experience working with mobile devices (physical devices, emulators, and BrowserStack):

  • Validate UI/UX on a wide range of screen sizes, OS versions, and device manufacturers
  • Previously maintained an in-office physical device lab (manual inventory, wiki documentation); transitioned to BrowserStack + Chrome DevTools to simplify device coverage
  • Projects I worked on had both mobile and desktop versions of the site, requiring thorough testing across both platforms. Later this became a single website build, so we had to ensure consistency across all versions
  • Requirements and estimates did not include the combinations to be tested, but the expectation was that everything must work in every combination. I had to explain the reasoning behind my estimates
  • Focus on functional and API testing for both iOS and Android mobile web and apps
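The estimation point above - "must work in all combinations" - multiplies quickly, which a small sketch makes concrete. The device, OS, and browser values here are invented for illustration, not from any real project:

```typescript
// Sketch: why "must work in all combinations" inflates test estimates.
// All values below are illustrative placeholders.
const devices = ["iPhone 15", "Pixel 8", "Galaxy S23"];
const osVersions = ["iOS 17", "iOS 18", "Android 14", "Android 15"];
const browsers = ["Safari", "Chrome", "Firefox"];

// Cartesian product of the three dimensions. Some pairs are invalid in
// reality (e.g. Safari on Android), but the upper bound is the point.
const combinations = devices.flatMap((d) =>
  osVersions.flatMap((os) => browsers.map((b) => [d, os, b]))
);

console.log(combinations.length); // 3 * 4 * 3 = 36 combinations
```

Even three values per dimension pushes the matrix into dozens of runs, which is why estimates need to name the combinations explicitly rather than assume "all of them".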

Current Challenges in Mobile Testing

  • Differences in OS versions and screen-resolution behaviours create unpredictable bugs.
  • Reproducing issues that only occur in a specific country or region is often difficult.
  • Fixes done in one region sometimes need retesting in another, causing delays.
  • Mobile UI tests are slow and sensitive to platform-specific behaviours.
  • Frequent changes in development builds can break automation.
  • Device/version-specific bugs may not consistently appear on BrowserStack.
  • Local devices are harder to maintain; relying only on BrowserStack sometimes causes bandwidth and latency issues. Simulating poor connectivity, switching Wi-Fi or unstable network conditions remains challenging.

**Tools Used**

**BrowserStack**

Helps:

  • Provides access to a wide range of devices without maintaining a physical lab.
  • Easy cross-device and cross-browser testing.
  • Built-in screenshots, network logs, video recordings.

Hinders:

  • Occasional slowness and device availability issues.
  • Some device-specific bugs do not reproduce consistently.

**DevTools (Chrome, Safari Web Inspector)**

Helps:

  • Real-time debugging for mobile web and hybrid apps.
  • Inspect network calls, console logs, throttling to simulate slow or unstable network conditions.
  • Quick device emulation for responsive layout verification.

Hinders:

  • Emulated behaviour does not always match real hardware.
  • Need to switch tooling between iOS (Safari Inspector) and Android (Chrome DevTools).
  • Performance metrics and interactions differ slightly from real devices.

Like to Explore Next

Appium & AI Driven Mobile Testing

  • Interested in using Appium for iOS/Android automation
  • Want to explore AI-assisted automation
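As a starting point for that exploration: Appium 2.x sessions are driven by capabilities, with driver-specific keys carrying the `appium:` vendor prefix. A sketch of what iOS and Android sessions could look like - device names, versions, and app paths are placeholders:

```ts
// Sketch: Appium 2.x capabilities for iOS and Android sessions.
// Device names and app paths below are placeholders, not a real setup.
const iosCapabilities = {
  platformName: 'iOS',
  'appium:automationName': 'XCUITest',   // iOS driver
  'appium:deviceName': 'iPhone 15',
  'appium:app': '/path/to/MyApp.app',    // placeholder path
};

const androidCapabilities = {
  platformName: 'Android',
  'appium:automationName': 'UiAutomator2', // Android driver
  'appium:deviceName': 'Pixel 8',
  'appium:app': '/path/to/my-app.apk',     // placeholder path
};
```

A client such as WebdriverIO then passes one of these objects when opening the session, so the same test code can target either platform by swapping capabilities.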

Learning Materials

  • YouTube channels (Automation Step by Step, Naveen AutomationLabs)
  • Udemy courses
  • Test Automation University
  • BrowserStack documentation & webinar tutorials

I see mobile testing continuing to drive testers toward building apps locally as normal testing practice. Lots of developer tools are being leveraged by testers on this front.

When it comes to UI automation, mobile apps still don’t seem to get access to as many advances as web apps do - agent use, for example. On the flip side, mobile apps tend to put ease of use and intuitive design at the forefront, and I often find a 20-minute test session can be much faster and find more issues than a lot of the automated UI coverage for many mobile apps.

Oh, and 16 KB page size support is now mandatory on the Play Store, but no devices with that default setting are available - including BrowserStack devices.


100% echo some of the other stuff here. Folks want regular releases. Mobile web testing seems harder than it should be. Automation maintenance in mobile is a constant, given the rapid pace of change.

Since Maestro is my specialist subject:

I mentioned in TWiQ that Maestro tests a device rather than an app. That’s handy for some of the esoteric bugs, since you can actually drive through device settings to create the variation in setup that you need. You can also use it to test mobile web - it’s just “stuff on a screen” as far as Maestro’s concerned.

Maintenance of the tests is just as real a thing in Maestro as it would be anywhere else. Perhaps more so, since Maestro is entirely divorced from the implementation. It’s just taps, types, and swipes, with a few assertions thrown in. It’s all written in YAML, so there’s no coding needed and it’s largely human-readable. And it’s fairly easy to set Cursor against too, if you like that kind of thing.
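To make the "taps, types and swipes in YAML" point concrete, a minimal flow might look like this - the app id and on-screen text are invented for illustration:

```yaml
# Sketch of a Maestro flow: taps, typing, a swipe, and an assertion.
# The appId and text selectors below are placeholders.
appId: com.example.myapp
---
- launchApp
- tapOn: "Log in"
- tapOn: "Email"
- inputText: "test@example.com"
- swipe:
    direction: UP
- assertVisible: "Welcome"
```

Because selectors are just visible text or accessibility labels, nothing here is tied to the app’s internal implementation - which is exactly why flows need maintaining when the UI copy changes.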

@nat You’re right about the cloud stuff all being for money. There’s a 7-day trial you can play with. Most folks don’t. Maestro itself is open source. The desktop app is free too. Loads of folks are running that in CI with no need to hand over any money at all. You only really need Maestro Cloud once you get to needing parallelism to bring down your feedback time. If you want to get serious about the CI stuff, take a look on GitHub, there’s loads of examples of how to do it.

I was a Maestro user and fanboy first, then an open source contributor, then a Maestro community member, long before I ever earned money from it. I’m not selling anything (beyond the joys of open source software).
