Security Testing - Mobile (Jailbreak/Rooted Devices)

Hey Fellow-MoTs,

Thought I’d turn to you all for perspective and find out whether any of you have done security testing on mobile apps. I haven’t, and this is new to our team. A bit of Googling suggests a tool called Cydia, but that looks way too easy to be true. The wider objective is to test some work we’ve done to prevent our application from being accessible on a jailbroken iPhone or rooted Android device. I thought I’d find a more concrete understanding of how to go about this, but that’s been a challenge. So my question is: who’s done this before, and can you shed some light on the best way to test it, particularly for iOS? I was able to set up a rooted Android emulator, but iOS has been the challenging part.
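For context, the kind of check we’re trying to exercise is something like the sketch below. It’s plain Java rather than anything platform-specific, the paths are just common rooted-Android indicator locations I’ve seen mentioned, and real detection libraries check far more signals (build tags, installed packages, writable system partitions), so treat it as illustrative only:

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

public class RootIndicators {
    // Filesystem locations where a `su` binary or root-manager app
    // commonly appears on rooted Android devices. Illustrative subset only.
    private static final List<String> SU_PATHS = Arrays.asList(
            "/system/bin/su",
            "/system/xbin/su",
            "/sbin/su",
            "/system/app/Superuser.apk");

    // Returns true if any known root indicator path exists on this device.
    public static boolean looksRooted() {
        return SU_PATHS.stream().anyMatch(p -> new File(p).exists());
    }

    public static void main(String[] args) {
        System.out.println(looksRooted() ? "root indicators found" : "no root indicators");
    }
}
```

On a rooted emulator you’d expect this to trip; the test is then that the app refuses to run (or degrades) when it does.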


It’s not really a test engineering question you commonly get; you’d be better off hanging out with the security pros on the boards that cover this topic area far better. You’d also want some criteria, like a minimum OS version, I think. Unless I grab our security engineer, who I believe has used that tool and uses quite a few others, I wouldn’t know how to verify that an app would abort. It’s not a thing within our remit, since if a user decides to install our application, or yours, on a broken device, they’re breaking the T’s & C’s for using your service or apps on a fruit-based device anyway.

So legally you have recourse if it’s used on a broken fruit-based OS, I imagine, but I’m assuming what you really want is to prevent reverse engineering of your app code? On non-fruit devices I’ve always assumed that anyone can read our code, so I guess your question is more tightly focused. Is that your real test case?


Is it even possible to prevent this?

A good security option is to make sure as little sensitive (user) data as possible is hanging around in your app. If there’s a particular piece of information that shouldn’t be seen by tech-savvy users, then your best course of action is to obfuscate it or remove it from persistent storage.

I mean, it’s compiled code running on a device you don’t control. People have been finding ways to break that since forever. I’d look at another avenue to remove or reduce the risk of people finding out whatever it is you don’t want them to find out.
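To make that concrete, the usual pattern is to encrypt anything sensitive before it hits persistent storage. A minimal sketch using the standard `javax.crypto` API follows; the class and method names are mine, and on a real device the key belongs in the platform keystore (Android Keystore / iOS Keychain), not in process memory like here:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class AtRestDemo {
    // Encrypt a sensitive value with AES-GCM before writing it to storage.
    // The 12-byte IV must be unique per encryption and stored alongside
    // the ciphertext (it is not secret).
    public static byte[] encrypt(SecretKey key, byte[] iv, String plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
    }

    public static String decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        byte[] stored = encrypt(key, iv, "account-token-1234");
        System.out.println(decrypt(key, iv, stored));
    }
}
```

The testable claim then shifts from "nobody can read the device" to "anything readable at rest is ciphertext", which is much easier to verify.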


Hi both,

Thanks for the replies, appreciated. We did flag to our customer some of what was highlighted here and made the adjustments you recommended around access to the sensitive data. Always resourceful MoT peeps.


It’s very normal not to know, Jesse. It’s an area developers normally need to do some reading up on, and not an easy area to test. Testing that any data you as a tester can find “at rest” is encrypted is hard enough, because you need the devs to help you a lot anyway.

That’s increasingly been my job as a more experienced tester: explaining to stakeholders where gaps in knowledge lie and dispelling assumptions. It completely depends on which sector your mobile app serves. I work on a remote-access product, so security (penetration) testing for us is pretty much on a par with what it would be for a banking app, and we pay an external vendor to run static and some runtime checks, and also a code scan once a year.