Webcam spying with Chrome


Browsers don’t handle webcam permissions well enough. Users should be extremely wary about what’s going on in their browser. From a list of 30 bugs submitted to Google regarding that issue, most have been fixed, but some are still alive.
The most obvious bug, still live and kicking in all of the browsers, is PopJacking – clickjacking using popups. This flaw can be abused to trick users into allowing malicious access to their webcam, for example.

Video of the 5 POCs is here

Full text

More than a year ago (6.6.2014) I submitted a list of ~30 security bugs in the way Chrome handles webcam access. These bugs also covered the way Chrome handled almost every other kind of special permission, from webcam/mic access to location.

Some of these concerned bugs and bad implementation of popups, and how those can be abused in relation to webcam access.

Yesterday Google made my bug report public, so I figured it’s about time I shared my findings (all of these links and info were private until now):

This is the original post I privately sent to Google; it has the info

A video with 5 different POCs

The POC and source code

The bug thread on Google

While Google fixed most of these bugs, some are still unfixed. And even those that were fixed are not fixed well enough and are still vulnerable to PopJacking. Meaning, an attacker can still trick a user into allowing webcam access – pretty easily.
PopJacking is merely clickjacking using a popup – probably the most overlooked flaw in browsers since clickjacking.

Another side note here is about Google’s behaviour regarding this bug:
At first they seemed thrilled about it, but then it took them almost a year to fix most of it, only to eventually declare it “WontFix”.
One of the bugs I submitted was opened as a separate private bug, but anyone can easily figure out which one it is from the conversation in the currently open bug thread.

From the way Google dealt with this bug and some other security bugs I and others have submitted, it’s clear that Google greatly prefers to dismiss security bugs as “WontFix” or “not a bug”. Anything other than RCE or XSS has a hard time fitting in.
I’m pretty sure that something like clickjacking would have been dismissed immediately, only for the mistake to be realised afterwards.
More on that, with some examples, in a later post.

So are we safe now?

– No.
It’s still too damn easy to trick a user into allowing something like webcam access, and that’s true for other browsers, not just Chrome. Be extremely wary of where you click and what’s going on in your browser at all times. The indication that a website is accessing your camera is not clear enough – you gotta be wary. (Firefox’s indication is much better, btw.)

Popups are evil

Besides the specific security bugs in popups and the way they can be exploited for PopJacking,
I would argue that there is not even one legitimate use of browser popups in terms of user experience.

Browser vendors should just kill popups altogether, forever.


The never-ending browser sessions


The concept of session memory is not valid anymore in today’s browsers. Even sessionStorage is not cleared after closing the tab. It’s easily revived by clicking “Reopen closed tab”. That might seem like a bug – but not if you look at the spec, which is rather permissive, maybe too much so.

So what’s the problem really?

Imagine you log in to your bank’s website from a trusted third-party computer.
When you’re done, you simply click the X button to close the site, assuming your session will end. This used to be true for many years, since it was common for critical websites like banks to store the authentication token in a session cookie.
And session cookies, as the name implies, are gone when the session is gone. The problem is that with tabbed browsing, and browsers running in the background, that session might end long after you clicked the X.
This means that most of the time, anyone accessing that computer after you will be able to continue where you left off – logged in as you.
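For reference, the whole difference between the two cookie kinds is a single expiry attribute on the Set-Cookie header – a minimal sketch (the cookie name and value here are made up):

```javascript
// A cookie with no Expires/Max-Age attribute is a "session cookie" – the
// browser keeps it until the browsing session ends, which, as noted above,
// can be long after the tab was closed.
const sessionCookie = 'auth=token123; Secure; HttpOnly';

// A persistent cookie survives browser restarts until it expires
// (here: one year, in seconds).
const persistentCookie = 'auth=token123; Max-Age=31536000; Secure; HttpOnly';

// Classify a Set-Cookie header by checking for an expiry attribute.
const isSessionCookie = header =>
  !/(^|;)\s*(Expires|Max-Age)\s*=/i.test(header);
```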

sessionStorage to the rescue? – not really

So if session-cookies are not good enough, what about that shiny sessionStorage?
It’s isolated per tab and cleared when that tab is closed.
It must be good – you click the X and it’s gone.
Well, almost…
In Chrome and Firefox the sessionStorage is easily revived with a right click and “Reopen closed tab” or “Undo close tab” respectively.

This strange and unexpected behavior of the sessionStorage still complies with the spec, which is somewhat over-permissive:
“The lifetime of a browsing context can be unrelated to the lifetime of the actual user agent process itself, as the user agent may support resuming sessions after a restart.”

We can argue whether this is a bug or not, but it’s definitely a bad feature and should be mitigated. We should have a real session storage that we can trust to be cleared when we click the “X”, without unreliable tricks like onbeforeunload and the like.

Here’s a demo, close the tab and reopen it with “Reopen closed tab”  – the sessionStorage will be revived.

While Chrome and Firefox behave badly and revive the sessionStorage, Safari and IE11 don’t revive it and are the safer browsers in that regard.

Bottom line

As a user, always, always log out manually; never rely on just closing the tab or the browser.

As a developer, the only way to create real sessions that are gone when the user closes the tab is to keep anything critical in memory and only in memory. I’ve written more about it, with examples, here.


Sharing sessionStorage between tabs for secure multi-tab authentication

I’ve created a mechanism that leverages the secure nature of the browser’s sessionStorage or memoryStorage for authentication while still allowing the user to open multiple tabs without having to re-login every time.

A refresher about the relevant browser storage mechanisms

  1. localStorage – ~5MB, saved forever or until the user manually deletes it.
  2. sessionStorage – ~5MB, saved for the life of the current tab.
  3. cookie – ~4KB, can be saved indefinitely.
  4. session cookie – ~4KB, deleted when the user closes the browser (though not always actually deleted).


Safe session-token caching

When dealing with critical platforms it is expected that the session ends when the user closes the tab.
In order to support that, one should never use cookies to store any sensitive data like authentication tokens. Even session cookies will not suffice, since they continue to live after closing the tab and even after completely closing the browser.
(We should consider not using cookies anyway, since they have other problems that need to be dealt with, e.g. CSRF.)

This leaves us with saving the token in memory or in the sessionStorage. The benefit of the sessionStorage is that it persists across different pages and browser refreshes. Hence the user may navigate to different pages and/or refresh the page and still remain logged in.

Good. We save the token in the sessionStorage and send it as a header with every request to the server in order to authenticate the user. When the user closes the tab – it’s gone.

But what about multiple tabs?

It is pretty common, even in single-page applications, for the user to want multiple tabs. The aforementioned security enhancement of saving the token in the sessionStorage creates some bad UX in the form of asking the user to re-login in every tab he opens. Right – sessionStorage is not shared across tabs.

Share sessionStorage between tabs using localStorage events

The way I solved it is by using localStorage events.
When a user opens a new tab, we first ask any other open tab whether it already has the sessionStorage for us. If another tab is open, it sends us the sessionStorage through a localStorage event and we duplicate it into our sessionStorage.
The sessionStorage data does not stay in the localStorage, not even for a millisecond, as it is deleted in the same call. The data is shared through the event payload, not the localStorage itself.
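The hand-off can be sketched roughly like this. To keep the sketch runnable outside a browser, the `bus` object and `makeStorage` below are my own in-memory stand-ins for localStorage’s “storage” event and the storage objects; in a real page you would use window.sessionStorage and window.addEventListener('storage', …) instead:

```javascript
// In-memory stand-in for a Storage object (sessionStorage-like).
function makeStorage() {
  const data = {};
  return {
    setItem(k, v) { data[k] = String(v); },
    getItem(k) { return k in data ? data[k] : null; },
    removeItem(k) { delete data[k]; },
    snapshot() { return { ...data }; },
  };
}

// Stand-in for the cross-tab localStorage "storage" event.
const bus = {
  tabs: [],
  emit(key, value) { for (const t of this.tabs) t.onStorage(key, value); },
};

function makeTab(bus) {
  const session = makeStorage();
  const tab = {
    session,
    // A new tab broadcasts a request for the session data.
    requestSession() { bus.emit('getSessionStorage', String(Date.now())); },
    onStorage(key, value) {
      if (key === 'getSessionStorage' && session.getItem('token')) {
        // An existing tab answers, then immediately clears the key, so the
        // data travels only inside the event payload and never rests anywhere.
        bus.emit('sessionStorage', JSON.stringify(session.snapshot()));
        bus.emit('sessionStorage', null); // deleted in the same call
      } else if (key === 'sessionStorage' && value) {
        // The new tab duplicates the payload into its own sessionStorage.
        const payload = JSON.parse(value);
        for (const k of Object.keys(payload)) session.setItem(k, payload[k]);
      }
    },
  };
  bus.tabs.push(tab);
  return tab;
}
```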

Demo is here

Click “Set the sessionStorage”, then open multiple tabs to see that the sessionStorage is shared.

Almost perfect

We now have what is probably the most secure way to cache session tokens in the browser, without compromising the multi-tab user experience. This way, when the user closes the tab he knows for sure that the session is gone. Or is it?!

Both Chrome and Firefox will revive the sessionStorage when the user selects “Reopen closed tab” and “Undo close tab” respectively.
Damn it!

Safari does it right and doesn’t restore the sessionStorage (tested only with these 3 browsers).

For the user, the only way to be completely sure that the sessionStorage is really gone is to reopen the same website directly, without the “Reopen closed tab” feature.
That is, until Chrome and Firefox resolve this bug. (My hunch tells me they will call it a “feature”.)

Even with this bug, using the sessionStorage is still safer than a session cookie or any other alternative. If we want to make it perfect, we’ll need to implement the same mechanism using memory instead of the sessionStorage. (onbeforeunload and the like can work too, but won’t be as reliable and will also clear on refresh. is almost good, but it’s too old and has no cross-domain protection.)

Sharing memoryStorage between tabs for secure multi-tab authentication

So… this is the only really safe way to keep an authentication token in a browser session while still allowing the user to open multiple tabs without having to re-login.

Close the tab and the session is gone – for real this time.

The downside is that with only one tab open, a browser refresh will force the user to re-login. Security comes with a price; obviously this is not recommended for every type of system.

Demo is here

Set the memoryStorage and open multiple tabs to see it shared between them. Close all related tabs and the token is gone forever (memoryStorage is just a JavaScript object).
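For reference, a minimal sketch of what such a memoryStorage can look like – the exact shape below (method names mirroring the Storage API) is my own framing; the cross-tab sharing works exactly like the localStorage-event trick described earlier:

```javascript
// memoryStorage really is just a plain JavaScript object held in memory.
// Nothing is ever written to disk, so once every related tab is closed
// (or the page fully reloads), the data is gone for real.
const memoryStorage = {
  data: {},
  setItem(k, v) { this.data[k] = String(v); },
  getItem(k) { return k in this.data ? this.data[k] : null; },
  removeItem(k) { delete this.data[k]; },
  clear() { this.data = {}; },
};
```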

Needless to say, session management and expiration should be handled on the server side as well.

Leeching an FTP with Python


This script will leech all the files from a folder on an FTP server. It’s especially appropriate for dealing with an enormous amount of files – hundreds of thousands or even millions.

My FTP issues

I set up an IP security camera to save an image to my shared-hosting FTP whenever it detected any movement. The camera was cheap, old and not that accurate, so it saved lots of photos – more than half a million.

I needed to delete most of these photos, but not all of them. Some of the photos captured interesting moments and I wanted to keep those. I couldn’t just delete the folder; I had to download and check every photo. Checking the photos wasn’t as difficult as it might sound – most were either almost identical or completely empty (zero bytes). Downloading them was the problem.

The 10,000 limit
If you’re on shared hosting, it’s likely that your FTP server is limited. Most shared FTP servers are probably running PureFTP and have a default limit of 10,000 files. This means you can only list 10,000 files, and you won’t even know how many files are in there.
You can probably ask your hosting provider to increase the “ftp recursion limit”, but I’m not sure they’ll be willing to raise it high enough.
Anyway, it’s cumbersome to deal with so many files, and most FTP clients will freeze even trying to list fewer than 10k. I tried a few on different OSs; eventually FileZilla for Windows seemed to be the best. But still, dealing with so many files was an extremely tedious process. And it’s even worse since I couldn’t even tell how many files were in that folder, and how many times I would have to repeat this tedious process:

  1. Connect to the server and list the folder – 10 minutes to list 10,000 files
  2. Even FileZilla failed to list the files from time to time – try again
  3. Move all the listed files to my local folder – ~1–2 hours for very small files
  4. If it fails for some reason – repeat all steps
  5. If it succeeds – repeat all steps

Overall, if I wanted to do it manually I would have had to repeat that annoying process hundreds of times (including failures).

Python to the rescue

Obviously, Python is an amazing scripting language (and beyond). A Python script to leech an FTP folder is very easy and straightforward to create. I was able to PoC it with just a few lines of code after looking at the ftplib docs.

Eventually I added a few extra features like logging and retries. But not too many – it was supposed to be quick and save me time, and it definitely did 🙂

To make it even more robust, one could add things like full tree traversal (leech the whole FTP server, not just one folder), multithreading to download over multiple connections simultaneously, and more. All were beyond the scope of this script and would be relatively easy to add in Python.
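The core loop can be sketched in a few lines with just ftplib from the standard library. This is a minimal sketch assuming a flat folder of files; the function name, retry count and back-off are my own choices for illustration, not the actual script:

```python
import ftplib
import os
import time


def leech_folder(ftp, remote_dir, local_dir, retries=3):
    """Download every file listed in remote_dir into local_dir.

    `ftp` is any object exposing the ftplib.FTP interface (cwd, nlst,
    retrbinary). Note that a shared host may cap the listing at ~10,000
    entries, so the whole run may need to be repeated until the folder
    is empty.
    """
    os.makedirs(local_dir, exist_ok=True)
    ftp.cwd(remote_dir)
    for name in ftp.nlst():
        target = os.path.join(local_dir, name)
        for attempt in range(retries):
            try:
                with open(target, 'wb') as out:
                    ftp.retrbinary('RETR ' + name, out.write)
                break  # downloaded, move on to the next file
            except ftplib.all_errors:
                time.sleep(2 ** attempt)  # back off, then retry
```

Against a real server you would first do something like `ftp = ftplib.FTP('host'); ftp.login('user', 'password')`, and probably delete each remote file after a verified download.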

The Supervisor

While the script is supposed to run continuously until it leeches the whole folder, it might still break in some cases. And on the Mac, the machine going to sleep stops the script.

That’s why I added a supervisor script that runs the leecher again if it breaks, and also prevents the system from sleeping on the Mac.

How to

Download both scripts from here. Edit the params at the bottom with your FTP info.

Run it with Python and watch it leech. Check leacher.log for more verbose info.


Overall the script processed about half a million files and saved me a lot of time and annoyance.


Python is awesome!

How to know when Chrome console is open


Although it’s not supposed to be possible – you can tell whether the Chrome console is open or not.
Check it out.

Reddit discussion.

Ever wondered if it’s possible to tell whether the browser’s Developer Tools are open or not? Supposedly it’s not possible in Chrome. And that’s a good thing – it’s no website’s business to know when I inspect its code.

The thing is… it is possible to tell.

Console commands run slower when the console is open. That’s it. You simply run console.log and console.clear a few times, and if it’s slower – then the console is open.

With this hack you can find out pretty reliably when the console is open and when it is closed.
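The heuristic can be sketched like this. The injectable `cons` parameter, iteration count and the 1.6 ratio below are my own framing for illustration (the 1.6 value is borrowed from the demo described later); a real page would simply use the global console:

```javascript
// Time how long `fn` takes to run `iterations` times.
function measure(fn, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return Date.now() - start + 1; // +1 avoids dividing by zero below
}

// Compare a console-heavy loop against a console-free baseline.
// When DevTools is open, the logging loop is noticeably slower.
function consoleLikelyOpen(cons = console, iterations = 200, ratio = 1.6) {
  const baseline = measure(() => Math.sqrt(Math.random()), iterations);
  const probe = measure(() => { cons.log(''); cons.clear(); }, iterations);
  return probe / baseline > ratio;
}
```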

Frankly, I don’t see how this can be mitigated without harming the performance of the browser. When the console is open it has to be slower. Hence this hack will most likely continue to work, and I assume it will work similarly in other browsers and with other consoles.

Although others will claim otherwise, the only way I see this hack being used legitimately is by geeks to impress their peers.
Other than that the only valid reasons that I can think of are malicious, but I’m not gonna tell you how 😉

Making it perfectly accurate

In the demo I used a benchmark ratio of 1.6 to determine whether the console is open. In a perfect implementation the ratio would be dynamic and would adapt between environments.

Currently, if a user opens the page with the console already open, it will be detected only after the user closes it at least once.

It should be possible to create another, independent benchmark to tell whether the console was open when the page first loaded.

Demo is here


This post was made public on 27.8.2015. It was previously disclosed privately to Google.

More info: Webcam spying with Chrome


Clickjacking and Spoofing the Google Chrome Permission Bar is Too Damn Easy

The permission bar in Chrome suffers from a list of bugs that can be combined in multiple ways to trick the user into granting access to privacy-related features without being aware of it.

In some situations the user might realize that something bad just happened, but by the time he takes action to reverse it, his privacy will already be compromised – e.g. his camera will have photographed him, his location will have been revealed, etc.

The Demos:

The demos target full access to the user’s camera and microphone, which are probably the most privacy-sensitive features.

  1. Wearing With Bars.
    This demo abuses the fact that it’s possible to create a completely identical fake permission bar, and that the real bar can be altered to show merely “http://” instead of the full text.
    The user is worn down with multiple fake bars until the real bar appears.
  2. Clickjacking using popups.
    Using 2 popups, and the ways popups can be controlled, to clickjack the user into clicking “Allow” for camera access. Even if the user notices something phishy happened and takes immediate action, it’s likely that his privacy was already compromised.
    Clickjacking using popups (PopJacking) is extremely overlooked by all browsers.
  3. Ubuntu.
    Chrome renders the permission bars differently on every OS. This is an example of how to mess with it on Ubuntu. This demo is not fully functional; it just shows an example of what can be done.
    This flaw was mitigated in Chrome 35 thanks to the move to the Aura UI. But still, the permission bars render and behave slightly differently on Ubuntu.
  4. None Closable Popup.
    While this is more of a popup bug, it shows how it can be leveraged to trick the user into allowing access to his camera.
    The permission bar should not be functional under extreme conditions. In this case the popup is almost frozen, but the “Allow” button is still functional.
  5. Over And Switch.
    The most basic clickjacking attack on the permission bar.
    The time it takes for the real bar to appear (~130 milliseconds) is probably the best window for clickjacking.

These are just a few examples; the flaws can be combined in multiple other ways.

The List of Flaws:

The list is not the most organised; I basically threw in most of the stuff I stumbled upon. The video is more explanatory.
Some of the items in the list are objectively bugs, and some may be considered “by design”. The by-design behaviours should mostly be mitigated too, though some are not harmful on their own.

  1. It’s easy to create a fake, identical permission bar.
  2. Permission bars act and render differently between OSs, each with its own flaws.
  3. Different types of permission bars act and render differently.
  4. It’s possible to tell when the permission bar is opened and when it’s closed – using the content height (a resize event is fired).
  5. Permission bars are not accessible: when enlarging fonts/zooming, permission bars stay the same.
  6. Window focus and blur events can be used to tell when the window is clickable.
  7. The permission bar doesn’t have “the arrow” when opened in a popup (Mac only).
  8. The permission bar can be made to reopen constantly by requesting the same permission multiple times; the bar will then reappear every time the user clicks “x” (infinitely).
  9. It’s easy to control the existence of the permission bar: showing it is obvious; hiding it is done using a refresh and/or subdomains.
  10. When using a very long subdomain, the message turns into merely “http://” (Mac only). Other OSs truncate the bar differently.
  11. Staggered permissions (multiple tries) – ask for the camera; if declined, ask for the microphone; if declined, ask for speech; etc.
  12. Infinite tries using infinite subdomains (if the user clicks “Deny”, simply redirect to another subdomain and try again).
  13. Resize the popup to show only the “Allow” button (Windows only).
  14. Resize the popup smaller and then bigger – the permission bar UI will not revert itself. It’s possible to leave only the camera icons and the “Allow” button (Ubuntu).
  15. Inconsistency between OSs (Mac, Windows 7, Ubuntu).
  16. The information popover (shown when clicking the camera icon in the address bar) can be messed up, especially on Windows when the popup is small. On the Mac it can be slightly messed up using a long subdomain/domain.
  17. The “Allow” button can be clicked in a window that is out of focus. Optimally, the buttons in the permission bar would become functional only a second or so after the window receives focus; that would mitigate clickjacking using popups.
  18. After the user approves, the camera icon in the address bar is not clear enough and will be missed by most users. It should state clearly: “Website is accessing your webcam and microphone!”
  19. On the Mac, a refresh removes the bar; on Windows 7 the bar stays after a refresh. It can be removed using a redirect to a different subdomain.
  20. The red circle in the tab that indicates the camera is being accessed doesn’t exist in popups, and is hidden in a normal window when multiple tabs are open.
  21. On HTTPS, the permission is sticky even if the SSL certificate is not valid.
  22. Popup flaws →
  23. These are flaws on their own, but they are very helpful for jacking the permission bar.
  24. The mouse location can also be tracked from the parent window.
  25. The size and position of the popup can be enforced.
  26. Popups can be made un-closable using a while loop that resizes or moves them (works only after the Dev Tools were opened at least once; might be possible without opening the Dev Tools; it’s easy to trick the user into opening the Dev Tools by asking him to type the shortcut).
  27. Clicking multiple times will open multiple popups – there is absolutely no delay. For example, tricking the user into clicking 10 times in a second during a game is pretty easy – it will result in 10 popups opening together (Windows).
  28. Popup opening can be delayed by a second, so it may look like the popup was not generated by the click. This goes well with the previous flaw – the user will click multiple times before noticing anything wrong, and multiple popups will be scattered all over the screen (Windows).
  29. Using a double-click, it’s possible to calculate the time between the 2 clicks and make the 2 popups open together (Windows).
  30. You can still do pop-unders in Chrome; jquery-popunder is a good example (the user will be able to notice it opening, though).
  31. You can make a popup open on another screen by messing with moveTo. A small popup located at the far bottom of the other screen can go unnoticed.
  32. A popup can be made to jump between screens while stuck open (it can’t be closed). This is very annoying and confusing to the user. The popup becomes semi-transparent and very difficult to interact with.
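Flaw #4 in the list – inferring the permission bar’s state from the content height – can be sketched roughly like this. The 30px threshold and callback names below are assumptions for illustration; in a real page the returned function would be fed window.innerHeight from a resize listener:

```javascript
// When the permission bar opens it pushes the page content down, shrinking
// the viewport height; when it closes, the height grows back. Watching for
// jumps in either direction reveals the bar's state.
function makeBarDetector(onOpened, onClosed, threshold = 30) {
  let lastHeight = null;
  return function onResize(innerHeight) {
    if (lastHeight !== null) {
      if (lastHeight - innerHeight > threshold) onOpened();
      else if (innerHeight - lastHeight > threshold) onClosed();
    }
    lastHeight = innerHeight;
  };
}

// In a real page (assumption, browser-only):
// const detect = makeBarDetector(onOpened, onClosed);
// window.addEventListener('resize', () => detect(window.innerHeight));
```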


The fastest mitigation might be to simply add 1 second in which the buttons are disabled. It should happen every time the permission bar is opened or the window gains focus. That would mitigate all the clickjacking attacks, but it won’t help against other kinds of attacks (tricking the user into clicking “Allow” using fake bars, for example).

The current permission bar is subtle and elegant; it looks great. But it doesn’t serve its purpose. The permission bar should be bigger, clearer and completely separated from the content area.

An extreme mitigation would also eliminate popups completely. Popups are bad for the user experience even when used legitimately.
Obviously, popups can be abused for clickjacking of any web content, not only the permission bars – this is extremely overlooked by all browsers.

The best solution to this problem is probably OS-level permissions that stay the same across browsers and OSs.

To Listen Without Consent – Abusing the HTML5 Speech

I found a bug in Google Chrome that allows an attacker to listen in on the user’s speech without any consent and without any indication. Even blocking all access to the microphone under chrome://settings/content will not remedy this flaw.

Try the live demo… (Designed for Mac  though it will work similarly on any other OS)

Watch the video…

The Sisyphus of computer science

Speech recognition is like the Sisyphus of computer science. We’ve come a long way but still haven’t reached the top of that hill. With all that crunching power and sophisticated algorithms, computers still can’t recognise some basic words and sentences, the kind the average human digests without breaking a sweat. This is still one of the areas where humans easily beat computers. Savor these wins, as they will not last much longer ;)

One must appreciate Google for pushing this area forward and introducing speech recognition into the Chrome browser. The current level of speech support in Chrome allows us to create applications and websites that are completely controlled by speech. It opens vast possibilities – from generally improved accessibility to email dictation and even games.

The current speech API is pretty decent. It works by sending the audio to Google’s servers and returning the recognised text. The fact that it sends the audio to Google has some benefits, but from an application point of view it will always suffer from latency and will not work offline. I believe the current speech support was introduced in Chrome 25. From Chrome 33 one can also use the Speech Synthesis API – amazing!

Before the fine API we currently have, Google experimented with an earlier version. It works much the same; the main difference is that the older API doesn’t work continuously and needs to be restarted after every sentence. Still, it’s powerful enough, and it has some flaws that enable it to be abused. I believe this API was introduced in Chrome 11, and I have good reason to believe it has been flawed since then.

More Technical Details

Basically, this attack abuses Chrome’s old speech API, the x-webkit-speech feature.
What enables this attack are these 3 issues:

  1. The speech element can be altered to any size and opacity and still stay fully functional.
  2. The speech element can be made to take over all clicks on the page while staying completely invisible (no need to mess with z-indexes).
  3. The indication box (which shows you that you’re being recorded) can be obfuscated and/or rendered outside the screen.

The POC is designed to work on Chrome for Mac, but the same attack can be made to work in Chrome on any OS.

This POC uses full-screen mode to make it easier to hide the “indication box” outside of the screen.
It is not mandatory to use the HTML5 full-screen; it’s just easier for this demo.

As you can see in the demo and video, there is absolutely no indication that anything is going on. There are no other windows or tabs, and no hidden popup or pop-under of any kind.
The user will never know this website is eavesdropping.

In Chrome, all one needs in order to access the user’s speech is this line of HTML5 code:
<input x-webkit-speech />

That’s all; there are no fancy confirmation screens. When the user clicks that little grey microphone, he will be recorded. The user will see the “indication box” telling him to “Speak now”, but that can be pushed out of the screen and/or obfuscated.

That is enough to listen to the user’s speech without any consent and without giving him any indication. The other bugs just make it easier; they are not mandatory.

(For the tree in the demo I have used a slightly altered version of the beautiful canvas tree from Kenneth Jorgensen)

— The bug was reported to Google.


Found a CSRF Flaw in a Big E-Commerce Website


I stumbled upon some CSRF flaws in a very popular e-commerce website. CSRF flaws are generally overlooked, and the only way for you, as a user, to minimize the risk is to log out of a website after you finish using it. This limits the window in which you are vulnerable to the time you spend on the website. I have disclosed my findings to the e-commerce website and will post them here after they finish fixing it.

This is how CSRF flaws generally work

When you log in to a website you get back a cookie that identifies you and indicates that you are authenticated.
Now, for a better user experience, and so you won’t need to re-login, most websites tell your browser to keep the cookie for a very, very long time (up to 10 years is considered safe).
The problem is that if a website suffers from any CSRF flaw, and many still do, then from now on, every time you visit any unrelated internet content, it may be attacking you. Think of all the slightly phishy content you’ve stumbled upon over the past years – some of it could have been attacking you.

A famous CSRF attack against a bank used a legitimate ad and abused a flaw in the bank’s website to transfer users’ money. Gmail suffered from a CSRF flaw in its early days, leaking its users’ contacts.

CSRF flaws are used to steal sensitive data from users and to perform actions on the user’s behalf. The flaw I found enables both – an attacker can steal a user’s personal data and also mess with his assets.

How I stumbled upon it

I was surfing on an open public WiFi – generally a bad thing to do, but I needed to. This public WiFi had the phishy name “eyes2”, and there were a few other “eyes” circling around – “eyes1”, “eyes3”, etc. Call me paranoid, but it seems to me these access points were put there in order to eavesdrop. Might be just for fun, might be more. Anyhow, I generally don’t care as long as I keep all of my traffic in SSL; I don’t mind them getting my metadata. So I went to this huge e-commerce website just to check something and was amazed that it’s not all SSLed. Wow… I wondered… what kind of data had I just leaked to a MITM on the “eyes2” access point?! Apparently, anyone eavesdropping on my connection now knew exactly who I am, and more.

The fact that a website dealing with even slightly sensitive data doesn’t use SSL for all of its traffic is a flaw on its own. But SSL is not related to these specific flaws; in fact, using SSL doesn’t help prevent CSRF. It’s only because I wanted to know exactly what kind of data this website leaked by using plain HTTP instead of HTTPS (SSL) that I found out it’s also vulnerable to CSRF attacks.

Where are all the details?

I reported my findings to the e-commerce website. It took me way longer to find an appropriate way to contact them than to find the flaws and PoC them. I did eventually manage to report it; they were very responsive and seemed to have already started fixing it. I will post all the details after they finish fixing it.

As a website owner, it’s important to implement CSRF prevention from the get-go. Most web frameworks already have their own solutions. It’s very easy to overlook – for example, it’s very easy to use something like JSONP and forget how vulnerable it can be.

How to protect yourself

CSRF is generally based on cookies. What you can do to protect yourself is to log out or delete your cookies after you finish using a website. That won’t be bulletproof, since you’ll still be vulnerable while you’re logged in. The only way to be completely safe is to use only one window with only one tab while you are logged in.

Obviously all of this is a complete hassle, and website owners should be the ones responsible for their CSRF flaws. The user can’t be expected to do it.

If you’re using Firefox, you can use something like NoScript, which also involves some level of annoyance.

Abusing The HTML5 Data-URI

[Update: Some of these examples were mitigated in Chrome 38 and 39]

Having seen in the previous post how Data-URIs can be used as a mechanism to easily carry malicious code, I’ll elaborate on the issues they present.

Some of these issues stem merely from the way Data-URIs are designed and implemented, and some might be considered security bugs in the browsers.

Using Data-URI to manipulate the address bar

The simplest thing an attacker can do is add spaces after the “data:” in the URI, thereby making it look like some kind of Chrome internal page.

It also changes the link status at the bottom of the browser. The link will show “data:”, hiding all the base64 code that follows. It’s a way to manipulate the status bar without the need for JavaScript, so the link will be manipulated even in an environment that doesn’t allow JavaScript.

Combine that attack with the previous example of the phishing SVG and you can get catastrophic results: lots of users might believe this is an internal Chrome page.

Live example (Open in Chrome)


In the above example the user will see only “data:..” in the address bar, but a user who feels uncomfortable can still examine the address bar and might find the hidden base64 code.

The next example shows how to prevent the user from examining the address bar at all; it will always show just “data:”. Even the three dots indicating there might be more to it will be gone.

By making the original content larger than ~28KB, the address bar will always show “data:” and nothing but “data:”.
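A sketch of the padding trick, assuming the ~28KB threshold described above (the payload is again a harmless placeholder):

```python
import base64

svg = '<svg xmlns="http://www.w3.org/2000/svg"><text y="20">hi</text></svg>'
# Pad the document with an XML comment so it stays valid while pushing
# the base64 payload well past the ~28KB threshold.
svg += "<!--" + "A" * 30000 + "-->"
uri = "data:image/svg+xml;base64," + base64.b64encode(svg.encode()).decode()

print(len(uri) > 28 * 1024)  # True
```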


Live example of only data in the address-bar (Open in Chrome)


While Chrome is the most vulnerable to the space attack, other browsers fall for it too.

Firefox trims the spaces when you click on the link, but an attacker can still manipulate the status bar (at the bottom of the browser), as we’ll see in the next vector.

Safari doesn’t trim the spaces but instead converts them to %20 or some other Unicode-escaped representation of a space. It works with any combination of Unicode spaces.


Mobile Chrome for iOS: it was a bit surprising to see that the Chrome version for iOS also falls for the “over 28KB base64” attack.


It’s interesting to see that even Chrome for iOS, which is a very different environment, shares the shell code with all of the other Chrome browsers.

Tricking the user to download malicious content using the DATA URI

We’ve already seen how one can abuse the browser’s address bar and status bar simply by adding spaces after the “data:” in the link.

But what if we add some other character, one that isn’t a space? In that case the link turns into a file download (and by default the user won’t even be prompted; the file will just download).

How convenient for an evil-doer. Combining that with the previous tricks…

The user will see “data:” in the status bar, and clicking the link will automatically download the attacker’s malicious SVG file hiding in the base64 code inside the link.
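A sketch of how such a download link might be assembled, assuming the non-space-character behavior described above (the payload is a harmless placeholder for “doodle.svg”):

```python
import base64

# Harmless stand-in for the "doodle.svg" payload.
svg = '<svg xmlns="http://www.w3.org/2000/svg"><text y="20">doodle</text></svg>'
b64 = base64.b64encode(svg.encode()).decode()

# A non-space character after "data:" makes Chrome download the payload
# instead of rendering it; trailing spaces would still hide the blob in
# the status bar.
download_uri = "data:" + "." + "image/svg+xml;base64," + b64
```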


Click to download doodle.svg (not really a doodle)

As noted before, Firefox trims the spaces after the “data:”, but one can use %20 instead. Also, Firefox does a good job and puts an ellipsis in the middle of the link in the status bar, so the user will see the suspicious base64 gibberish at the end. But that is easily overcome by simply adding spaces at the end of the link as well.


Click to download doodle.svg (FireFox version)

The interesting thing is that this kind of attack is not limited to SVG: any kind of file can be downloaded this way, including any binary file.

How about an EXE? (That EXE does nothing but echo some text to the terminal.)
(Will add it later)

Remember COM executables? Only a few bytes, and 32-bit Windows will still run them; many of these are still out there.

COM executable

ZIP is a great bad vector, IMHO.

ZIP with baddies inside (not real baddies, just text files)


On Windows an attacker will also need to trick the user into changing the file’s extension or into using “Open With…” to open the file with a certain app. On the Mac these extensions don’t matter much.

The file will be opened or executed according to its real type.

The new Mac OSes have this great feature called Gatekeeper that makes running applications way more secure in general.

The default setting is “App Store and identified developers”. How difficult is it for a motivated attacker to become an “identified developer”?

If the user has disabled Gatekeeper, the app will just run when clicked.

It will probably work on old Macs, but in any case I think the most dangerous attack would use just a zip file with all kinds of baddies inside.

The user flow might be:

  1. Click on a link -> the file is immediately downloaded (there is no wait, as the file is already embedded in the page)
  2. Click on the downloaded file -> the file is automatically extracted (if the zip is small enough the user won’t notice much going on, as the extraction will be fast)
  3. Malicious apps will be spread all over the user’s Downloads folder.
  4. “Hopefully”, one day the user will notice these apps with their inviting names and run one of them, thinking he downloaded it himself.
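The zip link from step 1 can be assembled entirely in memory; a sketch with harmless text files standing in for the baddies (the file names are made up):

```python
import base64
import io
import zipfile

# Build a small archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("FreeMovie.txt", "not actually a movie")
    z.writestr("Installer.txt", "just text")

# Any binary format can be carried this way, not only SVG.
zip_uri = "data:application/zip;base64," + base64.b64encode(buf.getvalue()).decode()
```

Keeping the archive small is part of the trick: the download and extraction are over before the user notices anything.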

These are most of the browsers I’ve tested on; other browsers may behave differently. It generally works the same on Ubuntu as well.

IE (Internet Explorer) does not suffer from most Data-URI flaws, but only by not supporting most of their features. I don’t think that getting away with it by not supporting the feature is generally a good thing.

Final notes about these vectors: (repeating some from the previous post)

  1. Remember that no JavaScript is needed for any of this, all that is needed is a link. It’ll work just the same with JavaScript disabled.
  2. No server is involved either.
  3. All an attacker needs is a bunch of strings; the user’s browser does all the rest.
  4. AV won’t scan these links.
  5. Not easily blockable – no domain to block.
  6. More easily shared and distributed.
  7. The attack is also cached in the browser history and doesn’t need Internet connection to be present at the moment of the attack.
  8. Will propagate across devices. For example, if you’re signed in to Chrome the attack will propagate to all of your devices. (You’ll still have to run it on each device though; just type data in the address bar.)
  9. Can easily be embedded in an innocent-looking *.URL file. Who doesn’t click on these? They always felt safe.


— Reported the bug to Google.

SVG For Fun and Phishing

What an awesome format SVG is: so powerful and so well supported by browsers. And yet it is barely used; it’s not getting the love it deserves. Well, browsers love SVG, perhaps too much…

SVG files are like little bundles of joy, encapsulating graphics, animations and logic. One can write a full app or game all encapsulated in one SVG file. That SVG file can be compressed into a binary file with the extension .svgz, and browsers will still accept it.
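An .svgz file is nothing more exotic than the gzip-compressed SVG bytes; a quick sketch:

```python
import gzip

svg = b'<svg xmlns="http://www.w3.org/2000/svg"><circle r="10"/></svg>'
# The contents of a .svgz file: plain gzip over the SVG document.
# Browsers that accept .svgz decompress it transparently.
svgz = gzip.compress(svg)

# Round-trip check: the compressed file carries the exact same document,
# scripts included.
print(gzip.decompress(svgz) == svg)  # True
```

The compression changes nothing about the document’s behavior; any embedded script survives intact, it just travels in fewer bytes.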

It’s way less powerful than Flash, but the concept is similar: vector graphics and logic in one file. And like Flash SWF files, these files tend to go viral and get redistributed. By that I mean that once you release your attack into the wild, it can end up hosted on many other servers. A good example would be tiger.svg.

Remember the lovely Flash dog?  It can work just the same and even worse.

SVG also runs from local files. By default on Windows, SVG files are opened in IE, which will run their script with local privileges when the file is double-clicked.

Anyhow, SVGs have some flaws built into them; many are known, some are new. I will argue that even without the new flaws, an SVG file is somewhat dangerous by design. I wanted to see how easy it would be to abuse SVG for phishing. I picked an easy target: Chrome’s “Sign In” page. It was pretty easy to create an almost fully functional version of it.

Check it out only 5kb of SVG “Image”

Compressed as an SVGZ, only 2kb (Chrome will run it just fine) 

Note: Google has already changed the appearance of this page, but it is almost identical to the previous version.

This page is generally here  (google already changed the way it looks)

Letting SVG files execute JavaScript is actually the root of the problem. I’m not sure it serves any real purpose in today’s web.

A simple attack might go like this:

  • The attacker sends the victim an email with a malicious SVG file: “Check out this cool image / animation”.
  • The victim downloads the SVG and clicks / double-clicks on it.
  • The SVG is opened in the browser.
  • The attack takes place.

The JavaScript that runs will have local privileges and can easily attack the user (limited to the browser sandbox, of course). It can, for example, execute multiple cross-domain CSRF attacks (cookies are sent normally with every request) and/or load multiple other attack vectors. It can be abused for spam, and that isn’t even illegal as long as the attacker doesn’t do anything too malicious.
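A minimal illustration of the root issue: a perfectly valid SVG image that also carries a script (a benign alert here, standing in for the CSRF requests described above):

```python
# A hypothetical proof-of-concept payload. Opened directly in a browser,
# the <script> element runs just like it would in an HTML page.
poc_svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <text x="10" y="55">just an image?</text>
  <script>alert("script ran from: " + location.href);</script>
</svg>"""

print("<script>" in poc_svg)  # True
```

To an image viewer or a casual user this is “just an image”; to the browser it is a document with executable content.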

You may be thinking “so what?!”, you can script the user from an HTML page just as well.

There are a few differences, as most users will look at an SVG file as just another image.

  1. When you double-click on an image to view it, it can’t execute anything; an SVG can.
  2. SVG files get redistributed; there are numerous clip-art sites that will host the evil SVG for the attacker.
  3. The malicious code embedded in SVG files survives editing the file in graphic editors like Adobe Illustrator and Inkscape.

I would say that the main problem here is not what SVG files are capable of doing; it’s more about the way they can slip malicious code through the user’s normal defenses.

Wait there is more…

Even more fun with SVG and HTML5 Data-URI

Another great feature of HTML5 is the Data-URI. SVG works great with Data-URIs; malicious SVG works amazingly great with them.

An SVG encoded as Base64 will run directly from a link.
Some might call this a feature, but it can be exploited for phishing attacks.

POC of an SVG phishing attack embedded in an HTML5 Data-URI

Some of the attacker’s benefits are:

  1. This is just a link, no need to host anything.
  2. AV won’t scan these links.
  3. Not easily blockable – no domain to block.
  4. More easily shared and distributed.
  5. The attack is also cached in the browser history and doesn’t need Internet connection to be present at the moment of the attack.
  6. Will propagate across devices. For example, if you’re signed in to Chrome the attack will propagate to all of your devices. (You’ll still have to run it on each device though; just type data in the address bar.)
  7. Can easily be embedded in an innocent-looking *.URL file. Who doesn’t click on these? They always felt safe.

Everything said here is valid for all formats Data-URIs support; text/html is also notable:
Here is an example

Actually, Data-URIs have their own set of problems, not necessarily related to SVG. They work perfectly with SVG, but the issues are more general. I’ll elaborate in another post.

More about the demo (Chrome Sign In)

I don’t want to give too many ideas to the bad guys, but the possibilities here are endless; I can already think of far nastier vectors than this demo.

This specific demo includes an image of the Google logo. I managed to create the Google logo as an SVG in about 7kb, but the logo in the demo is small and not too noticeable anyway; spending the KBs on it felt like a waste.

I found that the font used by Google for their logo is Catull, an old-style serif typeface similar to… you guessed it… Georgia. That was good enough; Georgia is preinstalled on all OSes.

I know that might look horrible to font and aesthetics lovers, but the average victim will easily fall for it.

One of the most important pieces of this attack is the SVG text-input; the user needs to enter his credentials somewhere.

Text-inputs are not natively available in SVG, though there have been some attempts to create them.

I didn’t create fully functional text-inputs; I didn’t think it was appropriate for me to do so at this point, for various reasons. For one, I didn’t want to make it too easy to replicate this attack in the wild. I’m sure that nearly perfect SVG text-inputs can be created relatively easily; one just needs enough motivation.

What about mobile?

Smartphones are just fine with SVG, more on that later as well.

Some tips to keep you safe

  1. Be alert when clicking on links that direct to SVG files or Data-URIs.
  2. Don’t double-click on SVG files to preview them in your browser.
  3. Don’t preview unknown or unchecked SVG files in your browser.
  4. Don’t export and share SVG from Adobe Illustrator or Inkscape without knowing where the original file came from and making sure it carries no malicious script.