TestBash Mobile 2022 - Mobile Test Management Done Right with Daniel Knott

In this talk, @dnlknott explains what is needed to establish a reliable, lightweight and lean mobile test management process, and which parts of the planning and execution steps matter most.

We’ll use this Club thread to share resources mentioned during the session and answer any questions we don’t get to during the live session.


Resources Given During the Talk

Questions Answered Live:

  1. What do you think about physical devices vs emulators? (Philip Wong)
  2. How much time/effort does it cost to manage the beta-testing group of customers? It’s not enough to just “have them”, what do you expect from them? (@maaike.brinkhof)
  3. How to get people more interested/excited about all these scenarios for mobile testing? (Anonymous)
  4. Mobile Energy Consumption - Is this something we should get more into? (@bart_knaack)
  5. What is the best way to collect data on the target (potential) customers? (@deborahreid)
  6. What are your thoughts on build variants? Do you have any scenarios where they can work well or do you think they create more overhead? (@jaswanth)
  7. How to get people interested/involved in more than just functional mobile testing like security/accessibility etc? (Anonymous)

Questions Not Answered Live:

  1. How do we tackle regression testing for federated release of mobile releases? (Anonymous)

I'm not sure what exactly you mean by "federated".
However, when I was working with 20 teams on a central app, we had a planned code freeze every 2 weeks. Once the code was frozen, all the automated checks were executed overnight. In the morning, all teams had to check the results; if any tests were red, they had to investigate the root cause. Once all tests had passed, we had 1-2 days of final regression testing per team to be confident enough to release to our customers.
This was also the reason why we added several safety nets, e.g. internal releases, beta releases and then a staged roll-out to customers. At each stage we checked our monitoring and logging to see if something was going wrong.
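The staged roll-out described above boils down to a simple gate: after each stage, check the monitoring data before widening the audience. Here is a minimal sketch in Python; the stage fractions, the crash-rate threshold and the function name are illustrative assumptions, not the actual setup from the talk:

```python
from typing import Optional

# Hypothetical staged roll-out gate. Stage sizes and the crash-rate
# threshold are assumptions for illustration only.
STAGES = [0.01, 0.05, 0.20, 0.50, 1.00]  # fraction of customers per stage
CRASH_RATE_THRESHOLD = 0.005             # halt if > 0.5% of sessions crash


def next_rollout_stage(current_stage: float, crash_rate: float) -> Optional[float]:
    """Return the next roll-out fraction, or None to halt the release.

    crash_rate would come from the monitoring/logging of the current stage.
    """
    if crash_rate > CRASH_RATE_THRESHOLD:
        return None  # something is going wrong: stop and investigate
    later = [s for s in STAGES if s > current_stage]
    return later[0] if later else current_stage  # already at 100%


# The 1% stage looks healthy, so widen to 5%
print(next_rollout_stage(0.01, 0.001))  # -> 0.05
# Crash rate too high at 5%: halt the roll-out
print(next_rollout_stage(0.05, 0.02))   # -> None
```

The same gate applies to the earlier safety nets too (internal and beta releases): each one is a chance to stop before the problem reaches real customers.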



Probably the biggest problem we face in mobile testing is how to ensure users don’t uninstall the app. This used to be easy on desktop, using all kinds of dirty tricks, if you like to call them that.

  1. What kind of bar can test engineers set to prevent, or at least detect, this perceived quality failure?
  2. You mentioned “interrupt testing”, e.g. pressing a button and receiving a call. Do we mean phone calls to the device? Are there easy ways of doing this, like a WhatsApp call instead, that have a similar effect, or does it have to be a phone call? Is there an OS behaviour difference we should look out for?