I’ve built a tool to automate mobile app testing. It uses a browser-based low-code test creator and cloud-hosted devices.
What is important to you as a tester when choosing a tool to help with mobile app testing?
Tool website: https://moropo.com/
Ungated sandbox of the test creator: Live Sandbox · Moropo Test Creator · Test more of your app in 10x less time
Please be brutally honest with me, I can take it
I would not consider this tool because it does not fit the way I want to work.
A low-code tool is a restriction I don’t need, and in the long run it costs more than maintaining my own code. Yes, there is up-front work in building my own infrastructure, but I want to work from my IDE, and I want the developers I’m collaborating with to work from their IDE instead of learning a new environment that will only do monkey-clicks on a mobile app.
The tool does too much: it maintains test scenarios, runs them, and takes care of reporting and notifications. At least one of those things will probably not be to my liking. It’s a jack-of-all-trades, master-of-none situation.
What I would appreciate is a tool that integrates easily with my programming language and solves one problem really well. Easy, reliable actions across mobile devices? Awesome. A great reporting tool? Cool. A test runner that will make me ditch TestNG/pytest/NUnit? Great. One ring to rule them all? I’ll pass.
Currently we support a YAML declarative format, e.g.:
- text: "press me"
Am I correct in assuming you’d prefer a Python format?
When you say “monkey clicks”, are there abilities or functionality that you think we’re not covering? Are you able to provide some examples?
I appreciate your input
I would suggest the same thing - use a Python format.
Interesting, why is Python popular?
As far as I’m aware, no one really does mobile app development in Python.
If you do something like this in YAML, you’ll be at risk of either badly reinventing a Turing-complete programming language or not providing the features your users need (e.g. parameterization, loops).
Ask any dev how they feel about debugging a 1,000-line GitHub Actions workflow.
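To illustrate the parameterization point, here is a minimal sketch in plain Python (the `tap` helper is invented for illustration, standing in for whatever action API a tool might expose):

```python
# Hypothetical device action; stands in for a real automation SDK call.
def tap(label: str) -> str:
    return f"tapped {label}"

# In a real language, parameterization is just a loop; in YAML each
# variant would need its own hand-maintained step list.
for label in ["press me", "submit", "cancel"]:
    assert tap(label) == f"tapped {label}"
```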
No, this assumption is completely incorrect.
I don’t want a standalone tool. I want something that resides inside my existing code and that I can easily mix with my other tools. A product I’ll consider will have SDKs in the supported languages, not a custom-made syntax of keyword magic.
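As a sketch of what that could look like, here is a hypothetical Python SDK surface (every name below is invented for illustration; it does not describe any real product’s API):

```python
# Entirely hypothetical SDK surface, for illustration only.
class Device:
    def __init__(self, udid: str):
        self.udid = udid
        self.log: list[str] = []

    def tap(self, label: str) -> None:
        self.log.append(f"tap:{label}")

    def assert_visible(self, label: str) -> bool:
        # A real SDK would query the device; here we just record the check.
        self.log.append(f"assert:{label}")
        return True

# Because it is ordinary Python, it composes with any other library
# or test runner instead of living in its own walled environment.
device = Device(udid="emulator-5554")
device.tap("press me")
assert device.assert_visible("pressed!")
assert device.log == ["tap:press me", "assert:pressed!"]
```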
When I said monkey clicks, I meant that your tool is not aware of my business logic and provides a generic action suite over mobile devices. If I want to submit a form and then check that it was processed properly in the backend, a device-clicking tool is not what I expect to do that with.
Interesting. Thanks for your detailed answers, Amit.
Our tool is black box, so we don’t have the ability to inspect the live mobile app. We’d need to provide SDKs for all the various mobile app frameworks to do that, and it would quickly break on edge cases such as WebViews. I think Detox is a failed experiment on that front.
We can handle assertions on the backend so long as the backend exposes some kind of API for us to gain access.
What is the expected behaviour in current-generation automation tooling when it comes to asserting that a form has been submitted? Would you check the DB? Or do a GET on the API layer? Or would you simply have access to the server-side runtime as part of the frontend e2e testing library and perform a “wasCalled” or similar on the backend function?
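One common pattern for this kind of assertion is to poll the backend’s read API until the submitted record appears. A minimal sketch, with the fetch stubbed out since there is no real endpoint here (in practice it would be an HTTP GET):

```python
import time

def fetch_submissions(_state={"polls": 0}):
    # Stub for a GET on the API layer: the record "appears" on the third
    # poll, mimicking an eventually-consistent backend.
    _state["polls"] += 1
    return [{"form_id": "42"}] if _state["polls"] >= 3 else []

def wait_for_submission(form_id: str, timeout: float = 2.0,
                        interval: float = 0.05) -> bool:
    # Poll until the record shows up or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if any(r["form_id"] == form_id for r in fetch_submissions()):
            return True
        time.sleep(interval)
    return False

assert wait_for_submission("42") is True
```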
So, as an example - I’m working on an antivirus software (with a mobile client as well).
A simple flow in our system level automation is as follows:
Arrange: install the antivirus.
Act: drop a malicious file on the protected system.
Assert: the file is blocked on the device.
Assert: the file is in the quarantine folder (for clients that support this feature).
Assert: an alert is shown on the client's UI.
Assert: check the proxy to see that an event was sent to the server.
Assert: the server DB has the respective event.
Assert: the event is accessible through the server's UI/API.
I know it seems excessive, but this way, when something fails, it’s easier to pinpoint the failure point, especially when working on less modern/mature applications, which are less suited to a divide-and-conquer approach.
And I’m guessing I’m not the weirdest case out there.
Note that I’m mixing operations on native UI, some APIs, potentially a browser on a different device, and some ORM library to check the DB. The name of the tool I expect to use for those cases is “programming language”.
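As a sketch of why plain code handles this flow well, here is the antivirus scenario with every layer stubbed out so the example runs on its own (a real suite would drive the UI through an automation SDK, hit the API with an HTTP client, and query the real DB; all names below are invented):

```python
# Every class here is a stand-in; the shape of the test is the point,
# not the stubs.
class FakeClient:
    """Stands in for the antivirus client under test."""
    def __init__(self):
        self.quarantine = []
        self.alerts = []
        self.server_events = []

    def drop_file(self, name: str, malicious: bool) -> bool:
        """Returns True if the file was allowed through."""
        if malicious:
            self.quarantine.append(name)               # quarantine folder
            self.alerts.append(f"blocked {name}")      # client UI alert
            self.server_events.append({"file": name})  # event sent upstream
        return not malicious

# Arrange: "install" the antivirus.
client = FakeClient()

# Act: drop a malicious file on the protected system.
allowed = client.drop_file("evil.exe", malicious=True)

# Assert each layer separately, so a failure pinpoints itself.
assert allowed is False                               # blocked on the device
assert "evil.exe" in client.quarantine                # in the quarantine folder
assert client.alerts == ["blocked evil.exe"]          # alert on the client UI
assert {"file": "evil.exe"} in client.server_events   # event reached the server
```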
Thanks, Amit - super helpful and gave me some exciting feature ideas
Funny thing - today I had to go out for a while during working hours and I knew I would have to join an important testing meeting from outside.
And I knew that I would probably need to make a few Postman calls so the team would be able to check the logs for debugging.
Right before going out I tried searching for a Postman version for iOS; sadly, I don’t think they have one.
All in all, I thought it was a funny coincidence - me thinking about a mobile testing app (postman mobile) and stumbling upon this thread just now.
I would probably pay a reasonable sum for such a mobile tool, but since my workspace is mainly in Postman, I would prefer Postman over other apps right now.
A quick look from the perspective of 15 years of expertise in mobile quality assurance.
You are entering a highly competitive field (low-code/no-code/scriptless mobile app automation) with huge companies and smaller companies with literally millions of dollars in funding.
It will be really hard to compete.
So let’s start by looking at what’s on your website:
- Requested access but didn’t get it. Do you rely only on patient customers? It looks like you need to manually check each request.
- No clear pricing
- Minimum $590? That’s high:
  - Maestro is free
  - Appium is free
  - Repeato is €70
  - Sofy is $549
  - Testim is $450
- No support for Cloud device labs
- No support for .ipa files
- No support for real iOS devices
- No community support
- All feature requests are made by you and one other guy
- Roadmap is empty
It doesn’t matter what’s important to me. The requirements of my client’s projects matter.
Thank you for your detailed and considered feedback @pwicherski - this is incredibly helpful.
As founders, we tend to get too close to our product/marketing to see what is missing/awry.
There is plenty we can work on here.
My pleasure @riglar, wish you guys the best