All posts by guya

Sharing sessionStorage between tabs for secure multi-tab authentication

tl;dr;
I’ve created a mechanism that leverages the secure nature of the browser’s sessionStorage (or memoryStorage) for authentication, while still allowing the user to open multiple tabs without having to re-login every time.

A refresher on the relevant browser storage mechanisms

  1. localStorage – ~5MB, saved forever or until the user manually deletes it.
  2. sessionStorage – ~5MB, saved for the life of the current tab.
  3. cookie – ~4KB, can be saved indefinitely.
  4. session cookie – ~4KB, deleted when the user closes the browser (not always actually deleted).

 

Safe session-token caching

When dealing with critical platforms it is expected that the session is ended when the user closes the tab.
In order to support that, one should never use cookies to store any sensitive data like authentication tokens. Even session cookies will not suffice, since they continue to live after closing the tab, and sometimes even after completely closing the browser.
(We should consider avoiding cookies anyway, since they have other problems that need to be dealt with, e.g. CSRF.)

This leaves us with saving the token in the memory or in the sessionStorage. The benefit of the sessionStorage is that it’ll persist across different pages and browser refreshes. Hence the user may navigate to different pages and/or refresh the page and still remain logged-in.

Good. We save the token in sessionStorage and send it as a header with every request to the server in order to authenticate the user. When the user closes the tab – it’s gone.
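As a sketch of that request flow (the ‘authToken’ key name and the Bearer scheme are assumptions, not part of the demo):

```javascript
// Build the Authorization header from a Storage-like object. The key name
// 'authToken' and the Bearer scheme are assumptions for this sketch.
function authHeader(storage) {
  var token = storage.getItem('authToken');
  return token ? { Authorization: 'Bearer ' + token } : {};
}

// In the browser, every API call would then look something like:
//   fetch('/api/me', { headers: authHeader(sessionStorage) });
```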

But what about multiple tabs?

It is pretty common, even in single-page applications, for the user to want multiple tabs. The aforementioned security enhancement of saving the token in sessionStorage creates some bad UX in the form of asking the user to re-login in every tab he opens. Right – sessionStorage is not shared across tabs.

Share sessionStorage between tabs using localStorage events

The way I solved it is by using localStorage events.
When a user opens a new tab, we first ask any other open tab whether it already has the sessionStorage for us. If another tab is open, it sends us its sessionStorage through a localStorage event, and we duplicate it into our own sessionStorage.
The sessionStorage data never rests in localStorage, not even for 1 millisecond, as it is deleted in the same call. The data is shared through the event payload and not through localStorage itself.
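The handshake can be sketched like this. The storage objects and the event target are parameters so the same logic can run outside a browser; in a real page you’d wire it up as wireSessionSharing(window.localStorage, window.sessionStorage, window) on load. The key names (‘getSessionStorage’, ‘sessionStorage’) are assumptions, not necessarily the demo’s:

```javascript
function wireSessionSharing(local, session, target) {
  // Snapshot a Storage object into a plain object we can JSON-encode.
  function dump(storage) {
    var out = {};
    for (var i = 0; i < storage.length; i++) {
      var key = storage.key(i);
      out[key] = storage.getItem(key);
    }
    return out;
  }

  target.addEventListener('storage', function (event) {
    if (event.key === 'getSessionStorage' && session.length > 0) {
      // A new tab is asking: broadcast our sessionStorage through a
      // localStorage event, then delete the key in the same call so the
      // data never rests in localStorage.
      local.setItem('sessionStorage', JSON.stringify(dump(session)));
      local.removeItem('sessionStorage');
    } else if (event.key === 'sessionStorage' && event.newValue) {
      // An answer arrived: copy it into this tab's own sessionStorage.
      var incoming = JSON.parse(event.newValue);
      Object.keys(incoming).forEach(function (key) {
        session.setItem(key, incoming[key]);
      });
    }
  });

  // A freshly opened tab has an empty sessionStorage: ask the other tabs.
  if (session.length === 0) {
    local.setItem('getSessionStorage', String(Date.now()));
    local.removeItem('getSessionStorage');
  }
}
```

Note that the payload travels in the storage event itself; the localStorage key is written and removed in the same tick.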

Demo is here

Click “Set the sessionStorage”, then open multiple tabs to see that the sessionStorage is shared.

Almost perfect

We now have what is probably the most secure way to cache session tokens in the browser, without compromising the multiple-tab user experience. This way, when the user closes the tab he knows for sure that the session is gone. Or is it?!

Both Chrome and Firefox will revive the sessionStorage when the user selects “Reopen closed tab” and “Undo close tab” respectively.
Damn it!

Safari does it right and doesn’t restore the sessionStorage (tested only with these 3 browsers).

For the user, the only way to be completely sure that the sessionStorage is really gone is to reopen the website directly, without using the “reopen closed tab” feature.
That is, until Chrome and Firefox resolve this bug. (My hunch tells me they will call it a “feature”.)

Even with this bug, using sessionStorage is still safer than a session cookie or any other alternative. If we want to make it perfect, we’ll need to implement the same mechanism using memory instead of sessionStorage. (onbeforeunload and the like can work too, but won’t be as reliable and will also clear on refresh. window.name is almost good, but it’s too old and has no cross-domain protection.)

Sharing memoryStorage between tabs for secure multi-tab authentication

So… this is the only truly safe way to keep an authentication token in a browser session while still allowing the user to open multiple tabs without having to re-login.

Close the tab and the session is gone – for real this time.

The downside is that with only one tab open, a browser refresh will force the user to re-login. Security comes with a price; obviously this is not recommended for every type of system.

Demo is here

Set the memoryStorage and open multiple tabs to see it shared between them. Close all related tabs and the token is gone forever (memoryStorage is just a JavaScript object).
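Since memoryStorage is nothing more than a plain JavaScript object, a minimal sketch with a Web-Storage-like surface (method names mirror the real Storage API) looks like this:

```javascript
// memoryStorage needs no browser storage at all: it's a plain object, so
// closing every tab of the app really does destroy the data.
function createMemoryStorage() {
  var data = {};
  return {
    getItem: function (key) { return key in data ? data[key] : null; },
    setItem: function (key, value) { data[key] = String(value); },
    removeItem: function (key) { delete data[key]; },
    clear: function () { data = {}; },
  };
}
```

Sharing it between tabs then uses the same localStorage-event handshake described above, with the in-memory object answering instead of sessionStorage – the token never touches the disk.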


P.S.
Needless to say, session management and expiration should be handled on the server side as well.

Leeching an FTP with Python

TL;DR;

This script will leech all the files from a folder on an FTP server. It’s especially appropriate for dealing with an enormous number of files – hundreds of thousands or even millions.

My FTP issues

I set up an IP security camera to save an image to my shared-hosting FTP whenever it recognised movement. The camera was cheap, old and not that accurate, so it saved lots of photos – more than half a million.

I needed to delete most of these photos, but not all of them. Some of the photos captured interesting moments and I wanted to keep those. I couldn’t just delete the folder; I had to download and check every photo. Checking the photos wasn’t as difficult as it might sound – most of them were either almost identical or completely empty (zero bytes). Downloading them was the problem.

The 10000 limit
If you’re using shared hosting it’s likely that your FTP server is limited. By default most shared FTP servers are probably running Pure-FTPd with a default limit of 10,000 files. This means you can only list 10,000 files, and you won’t even know how many files are in there.
You can probably ask your hosting provider to increase the “ftp recursion limit”, but I’m not sure they’ll be willing to raise it high enough.
Anyway, it’s cumbersome to deal with so many files, and most FTP clients will freeze even when trying to list fewer than 10k. I tried a few on different OSs; eventually FileZilla for Windows seemed to be the best. But still, dealing with so many files was an extremely tedious process. And it’s even worse since I couldn’t even tell how many files were in that folder and how many times I would have to repeat this tedious process:

  1. Connect to the server and list the folder – 10 minutes to list 10,000 files
  2. Even FileZilla failed to list the files from time to time – try again
  3. Move all the listed files to my local folder – ~1–2 hours for very small files
  4. If it fails for some reason – repeat all steps
  5. If it succeeds – repeat all steps

Overall, doing it manually would have meant repeating that annoying process hundreds of times (including failures).

Python to the rescue

Obviously Python is an amazing scripting language (and beyond). A Python script to leech an FTP is easy and straightforward to create. I was able to PoC it with just a few lines of code after looking at the ftplib docs.
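The core loop can be sketched like this (this is not the published script; function and parameter names are mine, and the connected ftplib.FTP instance is passed in by the caller). Deleting each file once it’s safely on disk both cleans the server and frees a slot in the capped listing:

```python
import os

def leech_folder(ftp, remote_dir, local_dir):
    """Download every file listed in one remote folder, deleting each one
    from the server once it is safely on disk. `ftp` is a connected,
    logged-in ftplib.FTP instance (or anything with the same surface)."""
    os.makedirs(local_dir, exist_ok=True)
    ftp.cwd(remote_dir)
    names = ftp.nlst()                    # capped by the server's listing limit
    for name in names:
        target = os.path.join(local_dir, name)
        with open(target, 'wb') as fh:
            ftp.retrbinary('RETR ' + name, fh.write)
        ftp.delete(name)                  # frees a slot for the next listing
    return names
```

Running it in a loop until nlst() comes back empty works around the 10,000-file cap: each pass downloads and deletes one listing’s worth of files.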

Eventually I added a few extra features like logging and retries. But not too much, it was supposed to be quick and save me time – and it definitely did :)

To make it even more robust, one could add things like full tree traversal (leech the whole FTP, not just one folder), multithreading to download over multiple connections simultaneously, and more. All were beyond the scope of this script, and all would be relatively easy to add in Python.

The Supervisor

While the leacher.py script is supposed to run continuously until it leeches the whole folder, it might still break in some cases. And on a Mac, the machine going to sleep stops the script.

That’s why I added this supervisor.py script which runs the leacher.py again if it breaks and also prevents the system from sleeping on the Mac.
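What such a supervisor boils down to can be sketched like this (function and parameter names are mine; whether the original keeps the Mac awake by launching the child under `caffeinate -i` is an assumption – it is one common way to do it):

```python
import subprocess
import sys
import time

def supervise(argv, max_restarts=None, pause=5):
    """Run `python argv...` and restart it whenever it exits with an error.
    Returns the last exit code (0 on a clean finish)."""
    restarts = 0
    while True:
        result = subprocess.run([sys.executable] + list(argv))
        if result.returncode == 0:
            return 0                      # the leecher finished cleanly
        restarts += 1
        if max_restarts is not None and restarts > max_restarts:
            return result.returncode      # give up after too many crashes
        time.sleep(pause)                 # brief backoff before restarting
```

Invoked as `python supervisor.py leacher.py`, this maps to `supervise(sys.argv[1:])`.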

How to

Download both leacher.py and supervisor.py from here. Edit the params at the bottom of leacher.py with your FTP info.

Run like this: python supervisor.py leacher.py

Watch it leech. Check leacher.log for more verbose info.

Summary

Overall the leacher.py script processed about half a million files and saved me a lot of time and annoyance.

leacher_log

Python is awesome!

How to know when Chrome console is open

tl;dr;

Although it’s not supposed to be supported – it’s possible to know whether the Chrome console is open or not.
Check it out.

Reddit discussion.


Ever wondered if it’s possible to tell whether the browser’s Developer Tools are open or not? Officially it’s not possible in Chrome. And that’s a good thing; it’s no website’s business to know when I inspect its code.

The thing is… it is possible to tell.

Console commands run slower when the console is open. That’s it. You simply run console.log and console.clear a few times, and if it’s slower – then the console is open.

With this hack you can find out pretty reliably when the console is open and when it is closed.
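The measurement can be sketched like this (the iteration count is an arbitrary choice; 1.6 is the benchmark ratio the demo uses):

```javascript
// Time a burst of console calls; with DevTools open each call does real
// rendering work, so the same burst takes noticeably longer.
function benchConsole(iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    console.log(i);
    console.clear();
  }
  return Date.now() - start;
}

// Calibrate once (presumably with the console closed), then keep polling:
// a run much slower than the baseline suggests the console is now open.
function consoleLooksOpen(baselineMs, iterations, ratio) {
  return benchConsole(iterations) > baselineMs * ratio;
}
```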

Frankly, I don’t see how this can be mitigated without harming the performance of the browser. When the console is open, it has to be slower. Hence this hack will most likely continue to work. And I assume it’ll work similarly in other browsers and with other consoles.

Although others will claim otherwise, the only way I see this hack being used legitimately is by geeks to impress their peers.
Other than that the only valid reasons that I can think of are malicious, but I’m not gonna tell you how 😉


Making it perfectly accurate

In the demo I’ve used a benchmark ratio of 1.6 to determine whether the console is open or not. In a perfect implementation the ratio would be dynamic and change between environments.

Currently, if a user opens the page with the console already open, it’ll be detected only after the user closes it at least once.

It should be possible to create another independent benchmark to tell whether the console was open when the page first loaded.

Demo is here

To Listen Without Consent – Abusing the HTML5 Speech

tl;dr;
I found a bug in Google Chrome that allows an attacker to listen in on the user’s speech without any consent and without any indication. Even blocking all access to the microphone under chrome://settings/content will not remedy this flaw.

Try the live demo… (Designed for Mac, though it will work similarly on any other OS)

Watch the video…


The Sisyphus of computer science

Speech recognition is like the Sisyphus of computer science. We’ve come a long way but still haven’t reached the top of that hill. With all that crunching power and sophisticated algorithms, computers still can’t recognise some basic words and sentences, the kind the average human digests without breaking a sweat. This is still one of the areas where humans easily win over computers. Savor these wins, as they will not last much longer ;)

One must appreciate Google for pushing this area forward and introducing speech recognition into the Chrome browser. The current level of speech support in Chrome allows us to create applications and websites that are completely controlled by speech. It opens vast possibilities – from generally improved accessibility to email dictation and even games.

The current speech API is pretty decent. It works by sending the audio to Google’s servers and getting back the recognised text. The fact that it sends the audio to Google has some benefits, but from an applicative point of view it will always suffer from latency and will not work offline. I believe the current speech support was introduced in Chrome 25. From Chrome 33 one can also use the Speech Synthesis API – amazing!

But…
Before this fine API we currently have, Google experimented with an earlier version. It works much the same; the main difference is that the older API doesn’t work continuously and needs to be restarted after every sentence. Still, it’s powerful enough, and it has some flaws that enable it to be abused. I believe this API was introduced in Chrome 11, and I have good reason to believe it has been flawed since then.


More Technical Details

Basically, this attack abuses Chrome’s old speech API – the x-webkit-speech feature.
What enables this attack are these 3 issues:

  1. The speech element visibility can be altered to any size and opacity, and still stay fully functional.
  2. The speech element can be made to take over all clicks on the page while staying completely invisible. (No need to mess with z-indexes)
  3. The indication box (showing that you’re being recorded) can be obfuscated and/or rendered outside of the screen.

The POC is designed to work on Chrome for Mac, but the same attack can be made to work on any Chrome on any OS.

This POC is using the full-screen mode to make it easier to hide the “indication box” outside of the screen.
It is not mandatory to use the HTML5 full-screen; it’s just easier for this demo.

As you can see in the demo and video, there is absolutely no indication that anything is going on. There are no other windows or tabs, and no hidden popup or pop-under of any kind.
The user will never know this website is eavesdropping.

In Chrome, all one needs in order to access the user’s speech is this line of HTML5 code:
<input x-webkit-speech />

That’s all; there are no fancy confirmation screens. When the user clicks on that little grey microphone he is being recorded. The user will see the “indication box” telling him to “Speak now”, but that can be pushed out of the screen and/or obfuscated.

That is enough to listen to the user’s speech without any consent and without giving him any indication. The other bugs just make it easier, but they are not mandatory.
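A sketch of how such an element could hide in plain sight – stretched over the page and fully transparent, yet still clickable (the CSS values are illustrative, not the PoC’s exact ones):

```html
<!-- Illustrative only: an invisible speech input covering the whole page.
     Any click anywhere hits the (transparent) microphone. -->
<input x-webkit-speech
       style="position: fixed; top: 0; left: 0;
              width: 100%; height: 100%;
              opacity: 0; border: none;" />
```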

(For the tree in the demo I used a slightly altered version of the beautiful canvas tree by Kenneth Jorgensen.)

— The bug was reported to Google.

grey_mic

Found a CSRF Flaw in a Big E-Commerce Website

tl;dr

I stumbled upon some CSRF flaws in a very popular e-commerce website. CSRF flaws are generally overlooked, and the only way for you as a user to minimize the risk is to log out from a website after you finish using it. This limits the window of vulnerability to the time you spend on the website. I have disclosed my findings to the e-commerce website and will post the details here after they finish fixing it.


This is how these CSRF flaws generally work

When you log in to a website you get back a cookie that identifies who you are and the fact that you are authenticated.
Now, for better user experience, and so you won’t need to re-login, most websites tell your browser to keep that cookie for a very, very long time (up to 10 years is considered acceptable).
The problem is that if a website suffers from any CSRF flaws, and many still do, then from then on, every time you visit any unrelated internet content it may be attacking you. Think of all the slightly phishy content you’ve stumbled upon over the past years; some of it could have been attacking you.
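As an illustration (the URL and field names here are invented), the classic shape of such an attack is a page you merely visit that auto-submits a hidden form to the victim site – and the browser helpfully attaches your long-lived cookie:

```html
<!-- Hosted anywhere; the victim only has to load the page. -->
<form action="https://shop.example/account/change-email" method="POST" id="f">
  <input type="hidden" name="email" value="attacker@evil.example" />
</form>
<script>document.getElementById('f').submit();</script>
```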

A famous CSRF attack against a bank used a legitimate ad and abused a flaw in the bank’s website to transfer users’ money. Gmail suffered from a CSRF flaw in its early days, leaking all of its users’ contacts.

CSRF flaws are used to steal sensitive data from users and to perform actions on the user’s behalf. The flaws I found enable both – an attacker can steal a user’s personal data and also mess with his assets.


How I stumbled upon it

I was surfing on an open public WiFi – generally a bad thing to do, but I needed to. This public WiFi had a phishy name, “eyes2”, and there were a few other “eyes” circling around – “eyes1”, “eyes3”, etc. Call me paranoid, but it seems to me these access points were put there to eavesdrop. Might be just for fun, might be more. Anyhow, I generally don’t care as long as I keep all of my traffic in SSL; I don’t mind them getting my metadata. So I went to this huge e-commerce website just to check something and was amazed that it’s not all SSLed. Wow… I wondered… what kind of data had I just leaked to the MITM on the “eyes2” access point?! Apparently, if someone was eavesdropping on my connection, he now knew exactly who I am – and more.

The fact that a website dealing with even slightly sensitive data doesn’t use SSL for all of its traffic is a flaw on its own. But SSL is not related to these specific flaws; in fact, using SSL doesn’t help prevent CSRF. It’s only because I wanted to know exactly what kind of data this website leaked by using plain HTTP instead of HTTPS (SSL) that I found out it’s also vulnerable to CSRF attacks.


Where are all the details?

I reported my findings to the e-commerce website. It took me longer to find the appropriate way to contact them than to find the flaws and PoC them. I did eventually manage to report it, they were very responsive about it, and it seemed like they had already started to fix it. I will post all the details after they finish fixing it.

As a website owner it’s important to implement CSRF prevention from the get-go; most web frameworks have their own solutions already. It’s very easy to overlook. It’s very easy to use something like JSONP, for example, and forget how vulnerable it can be.


How to protect yourself

CSRF generally relies on cookies; what you can do to protect yourself is to log out or delete your cookies after you finish using a website. That won’t be bulletproof, since you’ll still be vulnerable to attacks while you’re logged in. The only way to be completely safe is to use only one window with only one tab while you’re logged in.

Obviously all of this is a complete hassle, and website owners should be the ones responsible for their CSRF flaws. The user can’t be expected to do it.

If you’re using Firefox you can use something like NoScript, which also involves some level of annoyance.

Abusing The HTML5 Data-URI

[Update: Some of these examples were mitigated in Chrome 38 and 39]

After seeing in the previous post how Data-URIs can be used as a mechanism to easily carry malicious code, I’ll elaborate here on the issues they present.

Some of these issues merely exist because of the way Data-URIs are designed and implemented, and some might be considered security bugs in the browsers.

Using Data-URI to manipulate the address bar

The simplest thing an attacker can do is add spaces after the “data:” in the URI, which can make it look like some kind of Chrome internal page.

It will also change the link status at the bottom of the browser. The link will show “data:”, hiding all the base64 code that is there. It’s a way to manipulate the status bar without needing JavaScript; hence the link will be manipulated even in an environment that doesn’t allow JavaScript.
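The padded link itself can be sketched like this (a sketch of the construction, with Buffer standing in for the browser’s btoa()):

```javascript
// Pad the URI with spaces right after "data:" so the scheme is all the user
// sees in the status bar and address bar; the payload sits far to the
// right, out of view.
function paddedDataUri(mime, content, padSpaces) {
  var base64 = Buffer.from(content).toString('base64'); // btoa(content) in a browser
  return 'data:' + ' '.repeat(padSpaces) + mime + ';base64,' + base64;
}

// e.g. paddedDataUri('text/html', '<h1>Settings</h1>', 2000)
```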

Combine that attack with the previous example of the phishing SVG and you can get catastrophic results. Lots of users might believe this is an internal Chrome page.

Live example (Open in Chrome)

data_address_bar

While in the above example the user will see only “data:..” in the address bar, if he feels uncomfortable he can still examine the address bar and might find the hidden base64 code.

The next example shows how to prevent the user from examining the address bar at all; it will always show “data:”. The 3 dots indicating there might be more to it will be removed as well.

By making the original content larger than ~28KB, the address bar will always show “data:” and nothing else.

data_status_bar

Live example of only data in the address-bar (Open in Chrome)

data_big_address_bar

While Chrome is the most vulnerable to the space attack, other browsers fall for it too.

Firefox trims the spaces when you click on the link, but an attacker can still manipulate the status bar (at the bottom of the browser), as we’ll see in the next vector.

Safari doesn’t trim the spaces but instead converts them to %20 or any other Unicode-escaped representation of a space. It works with any combination of Unicode spaces.

data_safari_address_bar

Mobile Chrome for iOS: it was a bit surprising to see that the Chrome version for iOS falls for the “over 28KB of base64” attack.

data_chrome_iphone

It’s interesting to see that even Chrome for iOS, which is a very different environment, shares the shell code with all the other Chrome browsers.


Tricking the user to download malicious content using the DATA URI

We’ve already seen how one can abuse the browser’s address bar and status bar by simply adding spaces after the “data:” in the link.

But what if we add some other character, other than a space? In that case the user gets a file download (depending on settings the user may be prompted; by default there’s no prompt and the file just downloads).

How convenient for an evil one. Combining that with the previous stuff…

The user will see data:http://google.com/graphics/doodle.svg in the status bar; clicking the link will automatically download the attacker’s malicious SVG file hiding in the base64 code inside the link.

data_doodle_chrome

Click to download doodle.svg (not really a doodle)

As noted before, Firefox trims the spaces after the “data:”, but one can use %20 instead. Also, Firefox does a good job and puts an ellipsis in the middle of the link in the status bar, so the user will see the suspicious gibberish base64 at the end. But that can easily be overcome by simply adding spaces at the end of the link as well.

data_doodle_firefox

Click to download doodle.svg (FireFox version)

The interesting thing is that this kind of attack doesn’t limit us to SVG; any kind of file can be downloaded this way, binary files included.

How about an EXE? (That EXE does nothing but echo some text to the terminal.)
(Will add it later)

Remember COM executables? Only a few bytes, and 32-bit Windows will still run them – and many of those systems are still out there.

COM executable

ZIP is a great bad vector IMHO

ZIP with baddies inside (not real baddies, just text files)

chrome_zip_download

On Windows an attacker will also need to trick the user into changing the file’s extension, or into opening it with a certain app via “Open With…”. On the Mac the extension doesn’t matter much.

The file will be opened or executed according to its real type.

The new Mac OSes have this great feature called Gatekeeper that makes running applications way more secure in general.

The default setting is “from the App Store AND identified developers”. How difficult is it for a motivated attacker to become an “Identified Developer”?

If the user has disabled Gatekeeper, the app will just run when clicked.

It will probably also work on old Macs, but anyway, I think the most dangerous attack would use just a zip file with all kinds of baddies inside.

The user flow might be:

  1. Click on a link -> the file is immediately downloaded (there is no wait, as the file is already embedded in the page)
  2. Click on the downloaded file -> the file is automatically extracted (if the zip is small enough the user won’t notice much going on, as the extraction will be fast)
  3. Malicious apps are spread all over the user’s Downloads folder.
  4. “Hopefully” one day the user will notice these apps and their inviting names and run one of them – thinking he downloaded it himself.

These are most of the browsers I’ve tested on; other browsers may behave differently. Generally it works the same on Ubuntu as well.

IE (Internet Explorer) does not suffer from most Data-URI flaws – it manages that by not supporting most of their features. I don’t think that getting away with it by not supporting it is generally a good thing.

Final notes about these vectors (repeating some from the previous post):

  1. Remember that no JavaScript is needed for any of this; all that is needed is a link. It’ll work just the same with JavaScript disabled.
  2. No server is involved either.
  3. All an attacker needs is a bunch of strings, and the user’s browser will do all the rest.
  4. AV won’t scan these links.
  5. Not easily blockable – no domain to block.
  6. More easily shared and distributed.
  7. The attack is also cached in the browser history and doesn’t need an Internet connection to be present at the moment of the attack.
  8. Will propagate across devices. For example, if you’re signed in to Chrome the attack will propagate to all of your devices. (You’ll still have to run it on each device though; just type data in the address bar.)
  9. Can be easily embedded in an innocent-looking *.URL file. Who doesn’t click on these? They always felt safe.

 

— Reported the bug to Google.

SVG For Fun and Phishing

What an awesome format SVG is, so powerful and so well supported by browsers. And yet it is barely used; it’s not getting the love it deserves. Well, browsers love SVG, perhaps too much…

SVG files are like little bundles of joy, encapsulating graphics, animations and logic. One can write a full app or game all encapsulated in one SVG file. That SVG file can be compressed into a binary file with the extension SVGZ, and browsers will still accept it.

It’s way less powerful than Flash, but the concept is similar – vector graphics and logic in one binary file. And like Flash SWF files, these files tend to go viral and be re-distributed. By that I mean: once you release your attack into the wild, it can end up hosted on many other servers. A good example would be tiger.svg.

Remember the lovely Flash dog?  It can work just the same and even worse.

SVGs also run from local files. By default on Windows, SVG files are opened in IE, which will run the script with local privileges when the file is double-clicked.

Anyhow, SVGs have some flaws built into them; many are known, some are new. I would argue that even without the new flaws, an SVG file is somewhat dangerous by design. I wanted to see how easy it would be to abuse SVG for phishing. I picked an easy target – Chrome’s “Sign In” page. It was pretty easy to create an almost fully functional version of it.

Check it out – only 5KB of SVG “image”

Compressed as SVGZ, only 2KB (Chrome will run it just fine)

Note: Google has already changed the appearance of this page, but the demo is almost identical to the previous version.

This page is generally here (Google has already changed the way it looks)

Letting SVG files execute JavaScript is actually the root of the problem. I’m not sure it serves any real purpose in today’s web.
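For illustration, here is about the smallest possible file of this kind – to the user it’s a picture, but the script runs the moment a browser opens it:

```xml
<!-- A minimal scripted SVG (illustrative): renders a shape AND runs code. -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="tomato" />
  <script>alert('this script ran from inside an "image"');</script>
</svg>
```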

A simple attack might go like this:

  • The attacker sends the victim an email with a malicious SVG file: “Check out this cool image / animation”.
  • The victim downloads the SVG and clicks / double-clicks on it.
  • The SVG opens in the browser.
  • The attack takes place.

The JavaScript that runs will have local privileges and can easily attack the user (limited to the browser sandbox, of course). It can, for example, execute multiple cross-domain CSRF attacks (cookies are sent normally with every request) and/or load multiple other attack vectors. It can be abused for spam, and that isn’t even illegal as long as the attacker doesn’t do anything too malicious.

You may be thinking “so what?!” – you can script the user from an HTML page just as well.

There are a few differences, as most users look at an SVG file as just another image.

  1. When you double-click on an image to view it, it can’t execute anything – an SVG can.
  2. SVG files get redistributed – there are numerous clip-art sites that will host the evil SVG for the attacker.
  3. The malicious code embedded in SVG files survives editing in graphic editors like Adobe Illustrator and Inkscape.

I would say that the main problem here is not what SVG files are capable of doing; it’s more about the way they can slip malicious code through the user’s normal defenses.

Wait there is more…


Even more fun with SVG and HTML5 Data-URI

Another great feature of HTML5 is the Data-URI. SVG works great with Data-URIs; malicious SVG works amazingly great with Data-URIs.

SVG encoded as Base64 will run directly from a link.
Some might call this a feature, but it can be exploited for phishing attacks.
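Turning any SVG source into such a self-contained link is a one-liner (a sketch; Buffer stands in for the browser’s btoa()):

```javascript
// Wrap SVG source in a data: URI; the result runs directly from the link,
// scripts included, with no hosting involved.
function svgToDataUri(svgSource) {
  return 'data:image/svg+xml;base64,' +
         Buffer.from(svgSource).toString('base64'); // btoa(svgSource) in a browser
}
```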

POC of an SVG phishing attack embedded in an HTML5 Data-URI

Some of the attacker benefits are:

  1. This is just a link; no need to host anything.
  2. AV won’t scan these links.
  3. Not easily blockable – no domain to block.
  4. More easily shared and distributed.
  5. The attack is also cached in the browser history and doesn’t need an Internet connection to be present at the moment of the attack.
  6. Will propagate across devices. For example, if you’re signed in to Chrome the attack will propagate to all of your devices. (You’ll still have to run it on each device though; just type data in the address bar.)
  7. Can be easily embedded in an innocent-looking *.URL file. Who doesn’t click on these? They always felt safe.

Everything said here is valid for all Data-URI-supported formats; notably also text/html:
Here is an example

Actually, Data-URIs have their own set of problems, which are not necessarily related to SVG. They work perfectly with SVG, but the issues are more general. I’ll elaborate in another post.

More about the demo (Chrome Sign In)

I don’t want to give too many ideas to the bad guys, but the possibilities here are endless; I can already think of far nastier vectors than this demo.

This specific demo includes an image of the Google logo. I managed to create the Google logo as an SVG in about 7KB, but the logo in the demo is small and not too noticeable anyway; it felt like a waste of KBs.

I found that the font Google uses for its logo is Catull, an old-style serif typeface similar to… you guessed it… Georgia. That was good enough. Georgia is preinstalled on all OSs.

I know it might look horrible to font and aesthetics lovers, but the average victim will easily fall for it.

One of the most important features of this attack is the SVG text input. The user needs to enter his credentials somewhere.

Text inputs are not natively available in SVG, though there have been some attempts to create them.

I didn’t create fully functional text inputs; I didn’t think it appropriate to do so at this point, for various reasons. For one, I didn’t want to make this attack too easy to replicate in the wild. I’m sure that nearly perfect SVG text inputs could be created relatively easily – one just needs enough motivation.

What about mobile?

Smartphones are just fine with SVG, more on that later as well.

Some tips to keep you safe

  1. Be alert when clicking on links that lead to SVG files or Data-URIs.
  2. Don’t double-click on SVG files to preview them in your browser.
  3. Don’t preview unknown or unchecked SVG files in your browser.
  4. Don’t open or edit an SVG in Adobe Illustrator or Inkscape without knowing where it came from and making sure it has no malicious script.

The Pains and Remedies of Android HTML5

Prologue: I wrote most of this post some months ago and somehow didn’t publish it. Looking at it now, it’s a good reminder of some of the pains I’d already forgotten. The Android version statistics have changed a bit by now, but still today, and even with Google’s new way of measuring, the most problematic Android versions – 2.2.x to 4.0.x – are still running on more than 50% of the Androids out there. Hence everything here still applies. (Note that most of the bugs are in 4.0.x and not in 4.1.x and above.)
I’ve updated all the stats in the article to reflect the latest published numbers.

These issues refer to HTML5 content running inside the native Android browser as well as inside the native WebView (i.e. PhoneGap and the like).

———————————–

The promise of HTML5 is great; being able to use the same code base on all clients, and even on the server, is really compelling. While iOS delivered on that promise a long time ago – you can relatively easily create compelling HTML5 apps that run on iOS – Android’s HTML5 capabilities still lag far behind.

On paper, Android 4.0.x (20.6%) was enhanced with many long-awaited HTML5 features, similar to iOS 5. For example, Android 4.0.x gained the important overflow: scroll, but the 4.0 implementation is flawed. It has many other great features which are, sadly, mostly buggy. In fact this version is a buggy regression of the Android browser and WebView HTML5 capabilities.

It gets much better in Android 4.1, but this version still holds only 36.5% of Androids (48.6% including 4.2 & 4.3). Even today the most common version is 2.3.x, which holds 44%, and that version cannot be avoided. Generally, if you try to push the HTML5 envelope on Android, it’ll probably push you back.

Even with Google’s new and optimistic way of measuring Android version distribution, it’s still clear that 2.2.x, 2.3.x and 4.0.x are massively out there and need to be supported.

Having said all that, it doesn’t mean you can’t create decent HTML5 apps that run properly on Android. But you’ll have to consider its lacking capabilities from the get-go. Design the UI as simply as possible, without too much fancy CSS, too many images, or animations.

Here is a list of some of the issues I had to work through while adopting HTML5 on Android; I will keep this list updated.

Canvas:
Pain: Android 4.1 – 4.3 renders a duplicated HTML5 canvas.
Remedy: None of the canvas’s parent HTML elements should have overflow: hidden or overflow: scroll.

Pain: On all Androids, and especially 4.x, canvas drawing performance is extremely reduced by canvas effects like shadowColor.
Remedy: Try pre-rendering, or add the effects only when needed and/or once per drawing cycle. For example, in a live drawing app – add the effects only when the user stops drawing.
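A minimal sketch of the pre-rendering idea (the function names and values here are made up for illustration): pay for the shadow once on an offscreen canvas, then just blit the result in the drawing loop.

```javascript
// Pre-render an expensive shadowed dot once, offscreen.
function makeShadowedDot(radius) {
  var off = document.createElement('canvas');
  off.width = off.height = radius * 4;
  var ctx = off.getContext('2d');
  ctx.shadowColor = 'rgba(0,0,0,0.5)'; // the costly effect, applied once
  ctx.shadowBlur = radius;
  ctx.fillStyle = '#333';
  ctx.beginPath();
  ctx.arc(radius * 2, radius * 2, radius, 0, Math.PI * 2);
  ctx.fill();
  return off;
}

// In the drawing loop: no shadow state is touched, just a cheap blit.
function drawDot(mainCtx, sprite, x, y) {
  mainCtx.drawImage(sprite, x - sprite.width / 2, y - sprite.height / 2);
}
```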

Network:
Pain: On Android 2.x.x, HTTP PUT requests with no body are sent without a Content-Length header, which is rejected by some servers/proxies, e.g. NGINX.
Remedy: Configure NGINX to accept it, or send a dummy payload, e.g. $.ajax({ type: 'PUT', url: url, data: { dummy: '1' } });

Pain: On some versions of Android 2.x.x, PUT requests are cached.
Remedy: Cache-bust it – cache-bust all requests to the server, even PUTs.
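A tiny cache-busting helper along those lines (the `_` parameter name is arbitrary):

```javascript
// Append a throwaway timestamp parameter so no intermediary can
// serve a cached response, PUT requests included.
function cacheBust(url) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + '_=' + new Date().getTime();
}

// Usage with jQuery, also sending a dummy body so Content-Length is set:
// $.ajax({ type: 'PUT', url: cacheBust('/api/item/1'), data: { dummy: '1' } });
```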

Content:
Pain: Box-scroll was introduced in Android 4.0.x, but it has numerous issues on that version.
Remedy: Don’t use box-scroll for anything under 4.1, or use iScroll or similar. The best, most performant solution is to use position: fixed for headers and footers and to simulate box-scroll.

Pain: The CSS :active pseudo-class does not work on 2.x and works badly on 4.0.x.
Remedy: It is only reliable from Android 4.1 and above; try rolling your own implementation using touch events.
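A rough sketch of such an implementation, toggling a made-up "pressed" class on touch events (you'd define that class in your CSS):

```javascript
// Rough :active replacement: add a "pressed" class on touchstart
// and clear it on touchend/touchcancel.
function addPressedState(el) {
  el.addEventListener('touchstart', function () {
    el.className += ' pressed';
  }, false);
  var clear = function () {
    el.className = el.className.replace(/\bpressed\b/g, '').replace(/\s+/g, ' ');
  };
  el.addEventListener('touchend', clear, false);
  el.addEventListener('touchcancel', clear, false);
}
```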

Pain: Fixed content (position: fixed) has issues on 2.x.x.
Remedy: It works fine only when the viewport is not resizable; use this in the HTML head:
<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no" />

Pain: Scrollbars show over fixed content.
Remedy: When using a native shell, scrollbars can be removed using:
webView.setVerticalScrollBarEnabled(false);
webView.setHorizontalScrollBarEnabled(false);

Pain: Jumpy text inputs.
Remedy (native shell): <activity android:windowSoftInputMode="adjustNothing" />
Remedy 2: Don’t use * { -webkit-backface-visibility: hidden }, or try to override it with * { -webkit-backface-visibility: visible !important; }

Pain: Styling text inputs that have focus.
Remedy: http://stackoverflow.com/a/9464837/275333, http://java-cerise.blogspot.co.nz/2011/10/dodgy-double-input-fields-on-android.html

Pain: On Android 4.0.x, any tap implementation will not be responsive enough; it will miss a lot of taps. (Works fine on all other versions.)
Remedy: Easiest is to revert to clicks on this buggy 4.0.x version.

Pain: On Android 4.0.x a long press selects text; on all other OS versions this is resolved with the CSS * { -webkit-touch-callout: none; }
Remedy (native shell): Use this Java snippet http://stackoverflow.com/a/11872686/275333

Pain: Duplicated input fields on Android 4.0.x. This happens because Android overlays another native input for fast typing response, and it doesn’t play well with scrollable content. (Very ugly hack, Google, if I may.)
There are tons of hacks for this out there; most of them don’t work, or at least don’t work well across devices.
Remedy (native shell): If you run in a WebView – don’t put text inputs inside a scrollable iframe or inside content with overflow: scroll. Putting android:windowSoftInputMode="adjustPan" in the activity will auto-scroll to the text input (similar to iOS) – but it only works on Android 4.0.x, not on Android 4.1 and above (yeah, really).
adjustResize works on all Androids I’ve tested, but it is less pretty and leads to jumpy inputs on older 2.x Androids. adjustResize needs to be on the activity tag in order for it to work. I do not recommend it either.
So to summarize the fiasco: adjustPan, which gives the best UX (similar to the iPhone), only works on Android 4.0.x; adjustResize, which is still nice in terms of UX, can be made to work with all versions of Android, but can cause issues (jumpy text inputs) on old 2.x.
Remedy 2: Put this style on the text input: -webkit-user-modify: read-write-plaintext-only; Not great, since it makes typing slower – up to impossible to enter text on some devices – and the Swype keyboard won’t work either.
Remedy 3: Shift the input element off the screen, and use the change event to render the text into another element. (This is cumbersome; try to avoid it.)
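Remedy 3 could look roughly like this (a sketch; the element wiring and the -9999px trick are illustrative, and old 2.x may need keyup instead of the input event):

```javascript
// Keep the real input off-screen and mirror its value into a
// visible element, sidestepping the duplicated native overlay.
function mirrorInput(input, display) {
  input.style.position = 'absolute';
  input.style.left = '-9999px'; // shift the native input off-screen
  var sync = function () { display.textContent = input.value; };
  input.addEventListener('change', sync, false);
  input.addEventListener('keyup', sync, false); // old 2.x may lack the input event
}
```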

Misc:
Pain: HTML5 pushState has been supported since Android 2.2, but somehow it was forgotten on Android 4.0 – 4.0.2 and some 4.0.3 devices. Told you these 4.0.x versions are cr*p…
Remedy: Make sure your HTML5 app works well on devices without pushState support. Try a 4.0 emulator.
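So feature-detect before relying on it, falling back to plain navigation (or hash routing) when it's missing:

```javascript
// Use pushState where available, otherwise do a full page load.
function navigate(path) {
  if (window.history && typeof window.history.pushState === 'function') {
    window.history.pushState(null, '', path);
  } else {
    window.location.href = path; // full page load fallback
  }
}
```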

Pain: Incorrect dimensions – sometimes innerWidth & innerHeight still read 0 even after the DOM is ready.
Remedy: Wait a bit (~100 milliseconds) after the DOM is ready before asking for the window size.
Remedy 2: Use screen.width & screen.height (you’ll have to calculate the toolbar height yourself).
Remedy 3: Get the width/height from the server (using something like WURFL).
Remedy 4 (native shell): Get the size from the native shell.
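Remedy 1 can be wrapped in a small retry loop (the 100ms delay and the retry count here are guesses to tune per device):

```javascript
// Poll until innerWidth/innerHeight report something sane,
// instead of trusting them the moment the DOM is ready.
function whenSized(callback, tries) {
  tries = tries || 10;
  if (window.innerWidth > 0 && window.innerHeight > 0) {
    callback(window.innerWidth, window.innerHeight);
  } else if (tries > 0) {
    setTimeout(function () { whenSized(callback, tries - 1); }, 100);
  }
}
```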

Pain: WebSockets are not supported at all.
Remedy (native shell): Use the WebSockets PhoneGap plugin. Don’t bother with sockets unless you really need them.

Pain: Web Workers don’t work at all.
Remedy: Who cares..?!
Remedy 2 (native shell): Multi threaded, yeah baby.

Pain: Android 2.x misses a lot of scroll attempts because it gets stuck in the touchmove event (the error is: “Miss a drag as we are waiting for WebCore’s response for touch down.”)
Remedy :( No real remedy. I’m pretty sure there is no solution for this, and using something like iScroll won’t solve it either.

Pain: DOM manipulation is extremely slow.
Remedy: documentFragments might help, but don’t count on it.
You’re left with tricks; for example, it’s far smoother to change visibility than to add/remove DOM elements.
It’s better to pre-render and just show() or hide() as needed, especially when animations are involved.
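Both tricks sketched together (a DocumentFragment for batched inserts, visibility toggling for cheap show/hide; the function names are made up):

```javascript
// Batch inserts through a DocumentFragment: one reflow instead of
// one per item.
function appendItems(list, texts) {
  var frag = document.createDocumentFragment();
  for (var i = 0; i < texts.length; i++) {
    var li = document.createElement('li');
    li.appendChild(document.createTextNode(texts[i]));
    frag.appendChild(li);
  }
  list.appendChild(frag); // single DOM touch
}

// Toggle visibility instead of re-creating nodes: far smoother on
// old Androids.
function setShown(el, shown) {
  el.style.visibility = shown ? 'visible' : 'hidden';
}
```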

Some related links:
PhoneGap vs. Native: Some Thoughts on Going Native
Discussion on Hacker News
These are a year old but still very relevant (sadly).
Regarding point 1: don’t remove images from the DOM – instead replace the src with a very small image (a lesson learned by the LinkedIn mobile team). Point 2: you can handle that. Point 3: there are good ways to do caching. Point 4: these days there are reasonable debugging tools.

HTML5 for extending the device battery life (PDF)

Some other pains & remedies

 

Epilogue: Every time I come across a cool HTML5 example and wonder how well it runs on mobile, I try it on iOS and mostly like what I see – only to be disappointed by the way the Android native browser runs it. And I’m not talking solely about the old 2.x.x Androids, which mostly run these in an unacceptable way; even the newer Androids with new versions of the OS don’t play as smoothly as Mobile Safari or even UIWebView. The only solution for HTML5 on Android at the moment is to keep it simple – very simple.

When targeting an HTML5 app at mobile browsers, one cannot assume that her users will use anything other than the native browser (as opposed to the more capable Chrome for Android, for example). But if you’re running your HTML5 inside a native shell (i.e. PhoneGap), there are a few projects that attempt to solve the native WebView problem by letting us bundle a better WebView:
https://github.com/thedracle/cordova-android-chromeview
https://github.com/davisford/android-chromium-view
https://wiki.mozilla.org/Mobile/Projects/GeckoWebView
More on these will follow…

 

Protecting Your Smart Phone, the Basics

iPhone

  1. Don’t jailbreak – a non-jailbroken iPhone is a pretty secure device.
  2. Use a PIN code: Settings -> General -> Passcode (and not something like 1234).
  3. Make sure data is really encrypted – it’s the default since the iPhone 4 (which has hardware encryption). If you have an older device, go to Settings -> General -> Passcode and look for “Data Protection is Enabled” at the bottom.
  4. Don’t install any profiles you’re not absolutely sure about. I saw that some ad companies have started to use these profiles to get around App Store restrictions. If you see something like this, don’t approve it unless you’re absolutely sure. Here’s some more info about the danger of malicious profiles.
  5. Consider using an alphanumeric passcode by setting “Simple Passcode” to “Off”.
  6. Consider not using “Find My iPhone”. This is a trade-off: “Find My iPhone” is a really great tool for finding your lost phone, but there is a single point of failure, which is your Apple ID. Accessing it gives attackers your exact position and an easy way to wipe all of your phone’s data.

Android

  1. Don’t root your phone
  2. Use a screen lock
  3. Encrypt your data – works better from Android 4.0 and above, and might affect performance (it does not encrypt the external SD card).
  4. Use a security app like Lookout or Avast – it’s free!
  5. Don’t install an app unless you have a decent amount of confidence in it, and check the permissions it requires. Remember to uninstall it if it’s useless.

We all know that Android is open and its apps need no approval, which makes it more vulnerable by nature. This openness has another aspect of vulnerability: external SD cards vary in quality, and because of that the Android OS doesn’t encrypt them – it can’t promise good enough performance on cheap external memory. Which makes sense in a way; you’re somewhat compromising security by being open.

Windows Phone 8
I’ve never had a Windows Phone 8 device, only a 7.5, but it’s obvious that Microsoft is betting big on its most loyal enterprise customers, who need enterprise security. From reading online, it seems to have built-in encryption, but not for the SD card (same as Android).

Common sense still applies.

  1. Use a screen lock.
  2. Encryption is built in for you; just don’t save anything important on the external SD card.