TestBash Mobile 2022 - Mobile Test Management Done Right with Daniel Knott

In this talk, @dnlknott explains what is needed to establish a reliable, lightweight and lean mobile test management process, and which parts of the planning and execution steps matter most.

We’ll use this Club thread to share resources mentioned during the session and answer any questions we don’t get to during the live session.


Resources Given During the Talk

Questions Answered Live:

  1. What do you think about physical devices vs emulators? (Philip Wong)
  2. How much time/effort does it cost to manage the beta-testing group of customers? It’s not enough to just “have them”, what do you expect from them? (@maaike.brinkhof)
  3. How to get people more interested/excited about all these scenarios for mobile testing? (Anonymous)
  4. Mobile Energy Consumption - Is this something we should get more into? (@bart_knaack)
  5. What is the best way to collect data on the target (potential) customers? (@deborahreid)
  6. What are your thoughts on build variants? Do you have any scenarios where they can work well or do you think they create more overhead? (@jaswanth)
  7. How to get people interested/involved in more than just functional mobile testing like security/accessibility etc? (Anonymous)

Questions Not Answered Live:

  1. How do we tackle regression testing for federated release of mobile releases? (Anonymous)

Not sure what exactly you mean by federated.
However, when I was working with 20 teams on a central app, we had a planned code freeze every two weeks. Once the code was frozen, all the automated checks were executed overnight. In the morning, every team had to check the results; in case of red tests, they had to investigate the root cause. Once all tests had passed, each team did 1-2 days of final regression testing to be confident enough to release to our customers.
This was also the reason why we added several safety nets, e.g. internal releases, beta releases and then the staged roll-out to the customers. At each stage we checked our monitoring and logging to see if anything was going wrong.
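As a rough illustration of what such a staged roll-out step can look like when scripted (this is a hypothetical sketch, not the setup from the talk — the package name and the 10% fraction are placeholders, assuming fastlane and Google Play):

```shell
# Hypothetical staged roll-out step using fastlane's Google Play uploader.
# Package name and rollout fraction are placeholders. The command is built
# as a string so it can be printed and reviewed before running it for real.
CMD="fastlane supply --track production --rollout 0.1 --package_name com.example.myapp"

echo "$CMD"   # review the command; execute it with: eval "$CMD"
```

Raising the `--rollout` fraction in later runs widens the audience step by step, which is what lets the monitoring and logging act as a safety net between stages.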



Probably the steepest problem we face in mobile testing: how can we ensure users don’t uninstall? This used to be easy to do on desktop using all kinds of dirty tricks, if you like to call it that.

  1. What kind of bar can test engineers set up to prevent, or simply to detect, this perceived quality fail?
  2. You mentioned “interrupt testing”: pressing a button and having a call come in. Do we mean phone calls to the device? Are there any easier ways of doing this, like a WhatsApp call instead, with a similar effect, or does it have to be a phone call? Is there an OS behaviour difference we want to be looking out for?

@dnlknott just giving you a nudge here in case you missed this notification. Will you be able to answer @conrad.connected (if you haven’t already separately)?


Sorry for replying soooo late :(.

Regarding your first question, @conrad.connected: don’t play dirty tricks on your customers :), they will notice and will never come back. Maybe you can ask your users for regular feedback within the app, in the form of a survey. This might help you get some insights.
However, if users delete the app, you have no chance of finding out WHY they deleted it.

Second question.

On the Android emulators and iOS simulators you can trigger an incoming phone call. However, an interrupt might already be a press of the volume button, or putting the device into standby mode. Depending on your app and use case, you can think of potential interrupts and add them to your testing checklist.
I don’t know of any OS difference here. Just watch out for weird things happening with your app. Maybe the app crashes due to the interrupt, or, when the app comes back from the background, is it in the same state as it was before? Has the data changed?
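On the Android side, those interrupts can be scripted against an emulator with adb, which makes it easy to keep them on the regression checklist. A minimal sketch, assuming a running Android emulator and adb on the PATH — the device serial, phone number and package name are placeholders:

```shell
#!/bin/sh
# Sketch: common interrupt scenarios driven via adb against an Android
# emulator. Serial, phone number and package name are placeholders.
# DRY_RUN=1 (the default here) only prints each command instead of
# executing it; set DRY_RUN=0 with an emulator attached to run for real.
DRY_RUN="${DRY_RUN:-1}"
run() { [ "$DRY_RUN" = "1" ] && echo "$*" || "$@"; }

SERIAL="emulator-5554"      # placeholder; list devices with `adb devices`
APP="com.example.myapp"     # placeholder package name of the app under test

# 1. Incoming phone call (emulator console commands, forwarded by adb)
run adb -s "$SERIAL" emu gsm call 5551234567
run adb -s "$SERIAL" emu gsm cancel 5551234567

# 2. Volume button press
run adb -s "$SERIAL" shell input keyevent KEYCODE_VOLUME_UP

# 3. Standby and wake the device
run adb -s "$SERIAL" shell input keyevent KEYCODE_SLEEP
run adb -s "$SERIAL" shell input keyevent KEYCODE_WAKEUP

# 4. Send the app to the background, then relaunch it
run adb -s "$SERIAL" shell input keyevent KEYCODE_HOME
run adb -s "$SERIAL" shell monkey -p "$APP" 1
```

After each interrupt, check exactly the questions above: did the app crash, is it in the same state when it returns from the background, has the data changed?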

I hope this was helpful, if not let me know.



Definitely, thanks.

  1. I’d never considered going back to an in-app survey as an idea. We have free and paying users, so even knowing whether the user is a free user when you pop a survey up will help us get better feedback from one group or the other. Good idea. We will need a proper survey web service, I guess.

  2. So the simulator looks more useful after all when it comes to incoming calls as an interruption. There is so much that is easy to forget to test for on mobile.

