Scott Hanselman

How do we know if mobile apps are secure?

February 17, 2014 | Posted in Musings

You know how we're always telling our non-technical, non-gender-specific spouses and parents to be safe and careful online? You know how we teach non-technical friends about the little lock in the browser, and about making sure their bank's little lock turns green?

Well, we know that HTTPS and SSL don't imply trust; they imply (some) privacy. But we have some cues, at least, and while a good, trustworthy security UI still isn't there after many years, web browsers at least TRY to expose the information needed for technical security decisions. Plus, bad guys can't spell.


But what about mobile apps?

I download a new 99 cent app and perhaps it wants a name and password. What standard UI is there to assure me that the transmission is secure? Do I just assume?

What about my big, reliable, secure bank? Their banking app is secure, right? If they use SSL, that's cool, right? Well, are they sure who they're talking to?

IOActive Labs researcher Ariel Sanchez tested 40 mobile banking apps from the "top 60 most influential banks in the world."

40% of the audited apps did not validate the authenticity of SSL certificates presented. This makes them susceptible to man-in-the-middle (MiTM) attacks.

Many of the apps (90%) contained several non-SSL links throughout the application. This allows an attacker to intercept the traffic and inject arbitrary JavaScript/HTML code in an attempt to create a fake login prompt or similar scam.
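To make the first finding concrete: in Java/Android code, "not validating the authenticity of SSL certificates" usually means someone has installed a trust-everything TrustManager. Here's a sketch of that anti-pattern (illustrative only, not any real bank's code):

```java
import javax.net.ssl.*;
import java.security.cert.X509Certificate;

// ANTI-PATTERN: a TrustManager that accepts every certificate. Installing
// this as the default removes the server identity check entirely, which is
// exactly what makes an app susceptible to man-in-the-middle attacks.
public class TrustAllExample {
    public static void main(String[] args) throws Exception {
        TrustManager[] trustEverything = new TrustManager[] {
            new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }
        };
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustEverything, new java.security.SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
        // Often paired with an equally broken hostname verifier:
        HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);
    }
}
```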

If I use an app to log into another service, what assurance is there that they aren't storing my password in cleartext? 

"It is easy to make mistakes such as storing user data (passwords/usernames) incorrectly on the device; in the vast majority of cases credentials get stored either unencrypted or encoded using methods such as base64 (or others) that are rather trivial to reverse," says Andy Swift, mobile security researcher from penetration testing firm Hut3.
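To be concrete about how trivial "trivial to reverse" is, here's a minimal Java sketch (the stored string is made up):

```java
import java.util.Base64;

// "Encoding" is not encryption: anyone who finds a base64-encoded credential
// on the device can recover it with a single standard-library call.
public class Base64IsNotEncryption {
    public static void main(String[] args) {
        String storedOnDevice = "aHVudGVyMg=="; // hypothetical "protected" credential
        String recovered = new String(Base64.getDecoder().decode(storedOnDevice));
        System.out.println(recovered); // prints: hunter2
    }
}
```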

I mean, if Starbucks developers can't get it right (they stored your password in the clear, on your device) then how can some random Jane or Joe Developer? What about cleartext transmission?

"This mistake extends to sending data too, if developers rely on the device too much it becomes quite easy to forget altogether about the transmission of the data. Such data can be easily extracted and may include authentication tokens, raw authentication data or personal data. At the end of the day if not investigated, the end user has no idea what data the application is accessing and sending to a server somewhere." - Andy Swift

I think that it's time for operating systems and SDKs to start imposing much more stringent best practices. Perhaps we really do need to move to an HTTPS Everywhere Internet as the Electronic Frontier Foundation suggests.

Transmission security doesn't stop bad actors and malware from finding their way into app stores, however. Researchers have been able to develop, submit, and get malicious apps approved in the iOS App Store. I'm sure other stores have the same problems.

The NSA has a 37-page guide on how to secure your (iOS 5) mobile device if you're an NSA employee, and it mostly consists of two things: check "secure or SSL" for everything, and disable everything else.

What do you think? Should app stores put locks or certification badges on "secure apps" or apps that have passed a special review? Should a mobile OS impose a sandbox and reject outgoing non-SSL traffic for a certain class of apps? Is it too hard to code up SSL validation checks?
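For what it's worth, basic certificate pinning on top of the platform's default validation is only a handful of lines. A hedged Java sketch (the host and pin below are placeholders, not real values):

```java
import javax.net.ssl.HttpsURLConnection;
import java.net.URL;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.Base64;

// A minimal certificate-pinning sketch: after the TLS handshake, compare the
// SHA-256 fingerprint of the server's certificate against a value baked into
// the app, and refuse to proceed on a mismatch.
public class PinningSketch {
    static final String EXPECTED_PIN = "REPLACE_WITH_YOUR_CERT_FINGERPRINT";

    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn =
            (HttpsURLConnection) new URL("https://bank.example.com/").openConnection();
        conn.connect();
        Certificate[] chain = conn.getServerCertificates();
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(chain[0].getEncoded());
        String actualPin = Base64.getEncoder().encodeToString(digest);
        if (!EXPECTED_PIN.equals(actualPin)) {
            conn.disconnect();
            throw new SecurityException("Certificate pin mismatch; refusing to talk.");
        }
        // Safe to read the response here.
    }
}
```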

Whose problem is this? I'm pretty sure it's not my Dad's.


Sponsor: Big thanks to Red Gate for sponsoring the blog feed this week! Easy release management: Deploy your SQL Server databases in a single, repeatable process with Red Gate’s Deployment Manager. There’s a free Starter edition, so get started now!

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

February 17, 2014 5:23
One mitigating factor here is that phrase, "This allows an attacker to intercept the traffic...". If you connect to Wi-Fi at a coffee shop you're no more secure than anyone else on the Internet, but most mobile apps are connecting via an LTE/3G connection, which is much more difficult for the average script kiddie to tamper with.
February 17, 2014 5:48
Well, I thought a security audit was done when mobile apps are certified. If not, why does it take a week or two to get an app certified?
February 17, 2014 7:11
@Tyrone: I suppose you are talking about publishing an app on the App Store. Even if that were true, there is still the Play Store, where there isn't any validation at all. The SDK documentation does inform developers about best security practices to ensure that users' sensitive data remains private and unreadable by other applications on the device. Not sure about data transmission over the network, though.

My rule of thumb is to never use the same password across different mobile apps and websites. For more sensitive apps, like my bank account, I have two-factor auth set up, which is the only reason I'd agree to use it in the first place.

I agree though, this is a super important problem that needs to be solved.
February 17, 2014 16:32
The most reasonable solution would probably be to split network access permissions in two: secure network communication through an API that handles the connection, requires TLS, and checks certificates; and unrestricted network communication, which lets you do pretty much whatever you want with sockets (a rough sketch of the split follows below).

That still doesn't stop developers from storing passwords in cleartext on the device, but that's a slightly different problem. Desktop browser apps might also be storing passwords in plaintext in, e.g., local storage. There is no security in the green lock icon there either.
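Purely as illustration, one hypothetical shape such a split could take (none of these types exist in any real SDK):

```java
// Hypothetical sketch of a split network permission model.
public final class SplitNetworkPermissions {
    /** Granted by the ordinary network permission: TLS always on,
        certificates always checked by the OS, no way to opt out. */
    public interface SecureChannel {
        byte[] get(String httpsUrl) throws java.io.IOException;
    }

    /** Granted only by a separate, scarier raw-sockets permission. */
    public interface RawNetwork {
        java.net.Socket open(String host, int port) throws java.io.IOException;
    }
}
```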
February 17, 2014 16:57
@Paul Betts - I know a LOT of people who connect to the Internet from their smartphones and tablets only when a free (or paid) WiFi connection is available, which is very common in many countries (people have WiFi at home, in the office, in most public squares, in taxis, restaurants, and coffee shops...)
February 17, 2014 17:11
How are mobile apps meant to store passwords?

If the app encrypts the password, it will need to keep the key somewhere on the device.
February 17, 2014 17:22
Apps for these consumer-directed, locked-down devices should provide a manifest of the hosts they are allowed to access, and that manifest should be displayed in the app's store listing.

If a manifest says the app will only access *.microsoft.com and I trust Microsoft, then I will install the application.

As a user, I don't like that the app I'm installing is going to be sending whatever information to any host that I don't know of. Locking it down would be a huge benefit to both consumers and serious producers of apps.
February 17, 2014 17:53
It seems that there is a lack of strong guidance on how best to implement a secure mobile application. Most apps just want to let the user leverage data from a social network. With OAuth seemingly in tatters, and the rewards appearing to heavily outweigh the risks, the natural thing to do is get the product to market and deal with security later. DotCom-bubble attitude all over again?
Anyone have any pointers on how to do this correctly?
February 17, 2014 19:27
It seems to me that mobile OSes need their own version of "the padlock" to let users know whether SSL is currently being used. The challenge is that the full-screen nature of mobile apps means an app can easily spoof such a thing. Maybe a hardware LED?

Or, more simply, just give people a setting to disallow all non-SSL network traffic from apps (and possibly even make it "on" by default).
February 17, 2014 21:48
@Bill P. Godfrey: Mobile apps only need to store *some form* of the password or not even that, just retain access to it. For instance, an app could generate a private key and encrypt the password against one of the app's company's public keys and then just store that encrypted text or dispatch it elsewhere "where only it knows"... of course it'd also have to store its own keys (and knowledge) somewhere too, so we're really just moving the problem around.

(Continuing @Everybody now...)

However, maybe that's what's needed. If apps could store their secure data in something that's tamper proof.

WILD SPECULATIVE EXAMPLE: Like maybe apps could use a tiny, separate bit of RAM that's kept alive all the time whether the phone is on or not (and which would thus zero out when voltage is no longer applied), and then, somehow, only grant the right to access portions of that RAM to particular apps... like maybe by calculating a cryptographic signature of the first or last 10K instructions of the app's code and using that to authenticate the app to that piece of RAM, or something...

I suspect for most applications, building this kind of thing into the hardware (or utilizing similar hardware if it already exists!) would be overkill. Security is only as good as its weakest link, and focusing too much on technical aspects is sure to keep us from seeing more obvious attack points.

The problem with making requirements for apps too stringent is that those requirements may just create an incentive to build a monoculture/framework for securing data, where everybody is doing the same thing. When everybody does the same thing to secure data, attackers have an incentive to attack that framework to find weaknesses, because once a weakness is found, it can be used to attack everybody. My made-up example above is a prime example of something that could fall to this kind of attack.

Certainly computers can be kept relatively secure if they "do the same thing," but that doesn't mean my statements are without merit; in every Linux, Mac, Android, or Windows machine of a common kind you have the same potential weaknesses. Over the years, the security designs in general use have changed as different kinds of attacks became readily available and certain evolving designs proved more useful.

Heterogeneity is your friend.

Also, app developers may certainly see the flaws of a particular design but, due to corporate pressures, aren't in any position to do anything about that design, even if it violates recommended guidelines... or for that matter, laws.

It's not that everyone involved would intentionally violate the law; it's generally that those with the power to require changes in the design might have a rather *lenient* interpretation of the law, or would otherwise convince themselves they aren't in violation of it regardless of what is actually written.

Honestly, given the way many laws are written, I wouldn't blame them.

We don't need or want everybody to do the same thing. We just need everybody to avoid doing stupid things. (This also sums up my whole philosophy about life.)
February 18, 2014 0:12
The problem with SSL is that it doesn't support embedded devices very well. I'm thinking of the "internet of things" here, or, as Dr. Pizza put it, the "internet of (insecure) things". Do you want anybody to be able to mess around with your WiFi router, thermostat, or sprinkler system? These devices need to be able to authenticate users in a secure way. Unfortunately, SSL isn't a good solution for them due to A) expiring certificates (how do you update certs on these devices?) and B) the hostname/IP address not being known at the time the certificate is issued.
February 18, 2014 1:27
iOS has a keychain class that allows you to store strings securely. The app should also be using hashing, not encryption, for any passwords that need to be stored locally. Bcrypt is cross-platform and perfectly capable for this job. I'm pretty sure it's more a case of lazy coding than a lack of APIs or libraries to offer truly secure apps.
I agree that any other online communication should be over SSL etc. Thanks
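For anyone who wants to try the bcrypt suggestion, here's a minimal sketch using the jBCrypt library (org.mindrot:jbcrypt). Note the caveat raised in a later comment: a hash only lets the app *verify* a password locally; it isn't a credential you can replay to a server.

```java
import org.mindrot.jbcrypt.BCrypt;

// bcrypt hashes are salted and deliberately slow, which makes offline
// guessing far more expensive than reversing base64 or a plain hash.
public class BcryptSketch {
    public static void main(String[] args) {
        String hash = BCrypt.hashpw("correct horse battery staple", BCrypt.gensalt());
        System.out.println(BCrypt.checkpw("correct horse battery staple", hash)); // true
        System.out.println(BCrypt.checkpw("wrong guess", hash));                  // false
    }
}
```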
February 18, 2014 18:48
I'd like to see SSL all the time, and I'd like to see more done to help me, as a user, understand the permissions apps need. Why does app X need to see my contacts or have full access to the network? Will it work if I don't allow it? (Of course, it's very hard to NOT allow it, the way the install works.) App stores should forbid (which isn't the same as enforcing) the storage of cleartext passwords or password hints, and I'd like to see the OS try to flag any credentials sent as cleartext: hash it on the client and send only the hash.
February 18, 2014 20:12
Well, I think that, obviously, as one IT pro among many, it's our problem. It's not your Dad's (unless he also works at MS :))
February 18, 2014 20:15
@Christos Matskas

How would the app use the password to sign into the server the next time you open it, once it's hashed?
February 19, 2014 1:37
Apple provides the Keychain service for apps to store passwords. The contents of the keychain are encrypted, are excluded from unencrypted backups, and cannot be easily lifted from the device, because the keychain's contents are unavailable after a restart until the user unlocks the device.
February 19, 2014 17:06
<Grammar Police>
You know how we're always telling out non-technical.....
....our non-technical.....
</Grammar Police>

I think this is ultimately an app store responsibility. When we shifted from "you can install anything you like" to "you can only install stuff from our app store," customers intrinsically came to expect that only "well behaved" apps would be allowed in the store. And why wouldn't they?

Asking a user if an app can have permission to x, y, and z is pointless. 99% of users don't even read the box; it's just one more button to click before they get their app installed.
February 21, 2014 9:52
@Bill, in my opinion, an app should never store user credentials at all. I would instead save a token locally and use that token to communicate with the server. That way no credentials can be obtained from the device, and the server is in complete control of the token.
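A minimal sketch of that exchange (the endpoint, parameter names, and plain-text response are invented for illustration):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// The credentials cross the wire once (over TLS) and are then discarded;
// only the revocable token is persisted on the device.
public class TokenExchangeSketch {
    static String exchangeForToken(String user, String password) throws IOException {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://api.example.com/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        byte[] body = ("user=" + user + "&password=" + password)
                .getBytes(StandardCharsets.UTF_8); // real code would URL-encode these
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            // Persist only this token (ideally in the platform keystore); the
            // server can expire or revoke it without a password change.
            return s.nextLine();
        }
    }
}
```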
February 27, 2014 17:03
Mobile apps are relatively new, so they will take a while to reach a stable point (in the sense of things like security).

So until then, I think it would be a lot easier to just not use apps where sensitive information is involved.

For sure, I really would not use bank apps or any that involve money... especially my pennies!

Also, I agree with Tudor: it seems that people in the US (or in US conurbations) tend to think their technological/network reality is widespread. In several places/countries, people use a mobile device over WiFi in the first place, to avoid paying for expensive data plans, or simply because there is no other way to get connected.
March 06, 2014 0:28
It seems to me that there really is no consensus and the safest thing to do for consumers is to only trust the browsers.

It would really be good marketing practice for companies that deal in sensitive information to show users that they take security seriously, and to place that front and center.

Does anyone know if PCI compliance looks at mobile applications? They audit websites and back-end systems, but what about mobile apps?

Thanks,
Pedro.

