Information Security News
A mobile advertising company that tracked the locations of hundreds of millions of consumers without consent has agreed to pay $950,000 (£640,000) in civil penalties and implement a privacy program to settle charges that it violated federal law.
The US Federal Trade Commission alleged in a complaint filed Wednesday that Singapore-based InMobi undermined phone users' ability to make informed decisions about the collection of their location information. While InMobi claimed that its software collected geographical whereabouts only when end users provided opt-in consent, the software in fact used nearby Wi-Fi signals to infer locations when permission wasn't given, FTC officials alleged. InMobi then archived the location information and used it to push targeted advertisements to individual phone users.
Specifically, the FTC alleged, InMobi collected nearby basic service set identifier (BSSID) addresses, which act as unique serial numbers for wireless access points. The company, whose software thousands of Android and iOS app makers use to deliver ads to end users, then fed each BSSID into a "geocoder" database to infer the phone user's latitude and longitude, even when the end user hadn't granted permission for location to be tracked through the phone's dedicated location feature.
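The mechanism the FTC describes can be sketched in a few lines. To be clear, this is a toy illustration of the technique, not InMobi's actual code; the geocoder table, BSSIDs, and coordinates below are invented for the example.

```python
# Hypothetical "geocoder" database mapping a Wi-Fi access point's BSSID
# (its MAC-address-like serial number) to a known latitude/longitude.
# All entries here are made up for illustration.
GEOCODER_DB = {
    "00:1a:2b:3c:4d:5e": (37.7749, -122.4194),
    "66:77:88:99:aa:bb": (51.5074, -0.1278),
}

def infer_location(visible_bssids):
    """Infer a device's location from nearby Wi-Fi BSSIDs, without ever
    touching the OS location permission: look up each visible access
    point in the geocoder database and average the known coordinates."""
    hits = [GEOCODER_DB[b] for b in visible_bssids if b in GEOCODER_DB]
    if not hits:
        return None
    lat = sum(h[0] for h in hits) / len(hits)
    lon = sum(h[1] for h in hits) / len(hits)
    return (lat, lon)

# A phone that merely scans nearby Wi-Fi can thus be located:
print(infer_location(["00:1a:2b:3c:4d:5e"]))  # (37.7749, -122.4194)
```

The point of the sketch is that the location permission dialog never enters the picture: the only input is a Wi-Fi scan result.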
In the last couple of years, I've been increasingly working on penetration testing of mobile applications. I must admit: this is fun. Not only is it a combination of reverse engineering (static analysis) and active packet/request mangling, but mobile applications bring with them a whole arsenal of new attack vectors. (I plan to cover these in a series of diaries: I held a presentation about this last week at SANSFIRE, and I also attended the SEC575: Mobile Device Security and Ethical Hacking course with the fantastic Chris Crowley, one of the best SANS instructors for sure.)
With Android and iOS being the two main mobile platforms today, it's logical that most mobile penetration tests are concerned with them as well. Here and there I see Windows Mobile, but since even Microsoft is giving up on that platform, it appears we can safely decide to cover Android and iOS only.
Android being more open, I prefer to do penetration testing on Android applications. Typically, when an organization creates applications for several mobile platforms, they use the same server infrastructure (i.e. the same web services). This is logical: it would not make sense to maintain multiple server infrastructures that perform essentially the same activities for every platform.
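That shared-backend observation is what makes this approach work: a server-side finding reproduces regardless of which client you tested from. As a hypothetical sketch (the endpoint and headers are invented for illustration), both clients typically hit the same API and differ only cosmetically:

```python
from urllib.request import Request

# Hypothetical shared endpoint used by both mobile clients.
API = "https://api.example.com/v1/login"

android_req = Request(API, headers={"User-Agent": "ExampleApp/1.0 (Android)"})
ios_req = Request(API, headers={"User-Agent": "ExampleApp/1.0 (iOS)"})

# Same host, same path, same method: a server-side flaw found while
# testing the Android client applies to the iOS client too.
print(android_req.full_url == ios_req.full_url)  # True
```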
My process is to do the test on the Android application and then verify the findings on the iOS app.
In most of my engagements, I ask for a build without ProGuard. I guess my success ratio is around 50%: quite often I get a response back saying: "This is how our production application looks, and you, as an attacker, should be able to circumvent those protections; otherwise the application is secure."
After such an answer, we go through the long cycle of explaining why obscurity is not security: ProGuard adds nothing but delays an attacker's activities a bit. And since penetration testers are almost always limited in time (which is not a problem for a real attacker once the application is published), it is not in the company's interest to waste the penetration tester's time on deobfuscation. Sure, it can be noted as an additional (albeit weak) control.
While the screenshot above looks very difficult to analyze, even jadx, my favorite decompilation tool (available at https://github.com/skylot/jadx), can deobfuscate it a bit, making it easier to work on such an app: just select Tools -> Deobfuscation.
If you perform penetration testing, how do you deal with obfuscation? Let us know!