Friday, June 25, 2010

When Will We Wake Up?

As always, these thoughts and opinions are mine alone and not official pronouncements, policies, or statements from Intel. Note that the examples used in this posting are not unique and not the most extreme cases. They are simply ones that have become lodged in my mind.
This is the other half of the issue I just wrote about in this post, where I addressed the need for people to be conscious of how choosing convenience might be lowering their security and privacy.

Here I'd like to ask the question from the implementer's point of view. In particular, we have long known that some systems are easy to crack. I am going to list some easy flaws of convenience and ask why we haven't learned to avoid them.
  1. Obvious default passwords and insecure default settings: In high school my friends and I were taught on a large computer and given the instruction manuals for the operating system, compilers, and so forth. In those books were the instructions on how to run the system that assigned accounts and passwords, and the examples used names like "password" for the system accounts. Gleefully, we tried those passwords, and no one had ever changed them. They were the same as in the book. Since no one had ever heard of cracking accounts back then, those administrators could be forgiven.

    However, in the 2000's when I bought a router, leaving the name as "linksys" and the password as "administrator" would have been tragically foolish. Still, the recommended installation procedure did not change those names and in fact connected one to the internet as a required early part of the process. I changed mine, of course, as soon as I had the router to the point where I could do so. However, I'm sure there are many extremely insecure wireless routers out there. Everywhere I go, I find linksys routers my laptop wants to connect to. If routers become a major pool of malware infections, it will not surprise me.

    Much more security aware is the way that the F-Secure SSH client automatically generates random key material when you install and first use it. The security is turned on right from the beginning, there is no worry that someone will choose an insecure password, and there is nothing for the person to remember. (A minimal sketch of this generate-a-secret-at-first-use idea appears just after this list.)

  2. Back doors and escapes with unlimited power: Many people have spent a lot of time figuring out how to prevent the browser from downloading .exe files and running them. However, this whole time, one could download a .pdf containing commands that would download the very files we were trying to block. There are some security provisions built in, but they can be circumvented by social engineering. Sadly, this is not a flaw in some .pdf implementation, but a designed part of the spec.

    Building in an escape hatch or back door is an easy way to circumvent the limitations of a product. However, when that escape allows arbitrary code execution, you have abdicated control to those who would abuse your application.

  3. Installations that require too much privilege: Although this is slowly getting better, far too many applications still get installed with too much access to the system. This is definitely a convenience issue. It is time-consuming to work out the minimum access an application really needs, especially if you don't know whether someone else sharing the computer might need another feature and more privileges. Users will almost always opt for installing all the features in the most unrestricted fashion when given the choice. That is much more "convenient" than picking a narrow set of features, restricting them, and then finding out later that one needs more, especially in those cases where expanding the privileges might require stopping the application mid-task (or worse, rebooting the entire system). The user will always opt for the convenient choice.

  4. Systems that require restarting to reset: Even worse than restarting the application to expand its privileges are those applications that have to be restarted on a regular basis. It makes sense that a system holding onto some personal information (e.g. the browser session visiting your bank or the system that allows you to send emails) wants to time out so that one doesn't accidentally walk away leaving that information unprotected. However, other applications fail after running for a while for no obvious reason. My assumption is that this is due to careless resource management: some resource is eventually exhausted and the application falls over or simply hangs. Whatever the cause, this practice has tended to train users to expect to log back in to various applications on a regular basis. Thus users are much more cavalier about entering their security information than they should be.

  5. Loading obscure software to build unimportant eye candy: A pretty user interface is appealing, but many applications put too much emphasis on sizzle rather than functionality. A common symptom of this issue was the web sites that seemed to require a new browser extension for each one. Again, this has improved somewhat, but in the process many users were "trained" to download all sorts of software to make their web applications work, and the malware writers took full advantage of this, first delivering malware via such links and more recently pushing fake malware scanners that were themselves malware.

    Similar to this problem was the password manager I wanted to download that required loading a completely new-to-me language (Groovy) into my browser to run it. Here was a system I was using to attempt to increase my security, but which required me to perform a potentially unsafe action in order to do so. While password security isn't exactly eye candy, it isn't core functionality either. It certainly isn't obvious why one would need to download a new language onto one's computer to get the browser to export passwords.
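
Returning to the point in item 1 about secure defaults: here is a minimal Python sketch of the generate-a-secret-at-first-use approach. The file path, function name, and password length are mine, purely for illustration; the point is simply that each installation gets its own random secret before it is ever reachable, instead of a shared, guessable default.

```python
import secrets
import string
from pathlib import Path

# Hypothetical location where a device would persist its admin credential.
CREDENTIAL_FILE = Path("/var/lib/exampledevice/admin_password")

def ensure_admin_password():
    """Return the admin password, generating a random one on first use.

    The point: the device never ships with a shared, guessable default;
    each unit gets its own secret before it is ever reachable.
    """
    if CREDENTIAL_FILE.exists():
        return CREDENTIAL_FILE.read_text().strip()

    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(16))

    CREDENTIAL_FILE.parent.mkdir(parents=True, exist_ok=True)
    CREDENTIAL_FILE.write_text(password + "\n")
    CREDENTIAL_FILE.chmod(0o600)  # readable only by the owning account
    return password

if __name__ == "__main__":
    # On a real device this would run during first-boot provisioning and
    # the generated value would be shown to the owner exactly once.
    print("Generated admin password:", ensure_admin_password())
```
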
These are just some examples of lessons we as developers should have learned, places where we have traded user security for user convenience. Admittedly, convenience is a nice thing. However, we have to be more protective of those who are depending upon us. We made the mess that allows malware to flourish. We could do our part to clean it up.

Convenience Versus Security

As always, these thoughts and opinions are mine alone and not official pronouncements, policies, or statements from Intel.
For a long time, we geeks who built the internet (and I can't take any significant credit for that) have lived in a fairy tale sandcastle in the sky. We believed in the essential goodness of people and thereby developed our hardware and software with our main focus on what was convenient and not what was secure. We also made that worse by concentrating on features rather than stability and lack of bugs.

In the security field, the bugs have gotten a fair amount of attention. People are very aware of the buffer overruns and other ways of breaking software like browsers to introduce malware into your computer or your network.

However, the convenience factor needs equal attention. Some of those lessons have been learned. When I administered my own linux server back in 1995, I learned the hard way (i.e. by being cracked and having a rootkit installed) about the importance of closing up and securing ports. Having an open telnet port was convenient for logging into my server not only for me, but also for all the miscreants who thought accessing and using my computer might be fun or profitable.
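
In the spirit of that lesson, here is a small Python sketch of the kind of self-audit I mean: probe your own machine (or router) for a few remote-access ports before someone else does. The port list and target host are just examples, not a complete audit.

```python
import socket

# Ports for the services mentioned above; telnet and ftp in particular
# should almost never be exposed to the open internet.
PORTS_TO_CHECK = {23: "telnet", 21: "ftp", 22: "ssh"}

def check_open_ports(host="127.0.0.1"):
    """Report which of the listed ports accept connections on `host`."""
    for port, name in PORTS_TO_CHECK.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            status = "OPEN" if sock.connect_ex((host, port)) == 0 else "closed"
            print(f"{name:>6} (port {port}): {status}")

if __name__ == "__main__":
    # Run this against your own machine or router before exposing it.
    check_open_ports()
```
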

Still, this lesson needs to be repeated over and over again. The sites that are open to the attacks in this video have not properly secured their assets. If you leave your property open and unlocked, someone will eventually "borrow" it, play a prank on you through it, or do something else you don't want and hadn't intended, especially if the info on how to do so is on popular sites like bitrebels.

So, when you buy that new webcam or baby-monitor, think before you expose it to the internet. The out-of-the-box configuration was probably designed by geeks who wanted to make it convenient for you to use, not to keep your private information private. That doesn't mean you can't make the device secure, just that you will need to do extra work to do so, work that might not be detailed in the instruction book that comes with the device.

We geeks who design and build such devices emphasize convenience and features because that's what we've trained ourselves to do and what the market has traditionally rewarded. However, if consumers want safer, more secure devices, we will make them. Companies are already realizing the need for that. The culture is ripe to grow and spread. Consumers just have to make informed choices that demonstrate that preference.

If you are an implementer and want to ponder some of the ways we have helped users trade security for convenience, try reading this.

Tuesday, June 15, 2010

Viruses on Linux

As always, I want to reinforce that these are my personal opinions and not the stated policies, recommendations, or positions of Intel.
It has been discovered that an Open Source application that runs on Linux has had some of its repositories cracked, and some of them were serving a malware-infected version, as reported here and here. Now, while some have reacted as if this reporting were an attempt at spreading FUD (fear, uncertainty, and doubt) among potential Linux users, it is simply one more incident showing that there is no security silver bullet.

Simply choosing a more secure OS is not sufficient to protect against all forms of attacks. Complacency will always leave one vulnerable. Reading your email on a Linux box will not prevent spam or phishing emails from entering your mailbox. If you click on an infected .pdf file, you probably won't get infected because the malware was probably customized for Windows. However, that doesn't mean someone couldn't infect a .pdf file with a Linux virus. Someday, someone will. Moreover, if the attack wasn't attempting to infect your system, but simply to get you to install a tracking cookie in your browser, Linux is no protection at all. Running Linux doesn't magically make one immune to social engineering.

This isn't a criticism of Linux. Linux generally comes configured out-of-the-box to be more secure than typical Windows desktop systems are. A good example is that on Linux systems root (superuser) access is done via a separate account rather than one's normal account. Many other features of Linux are specifically designed to improve security as well.

However, Linux systems also often have more to configure and more to exploit. A Linux system will often run ssh and ftp servers and not just clients. Running nfs or samba servers on Linux is also very common. You might even run http or sql servers. Server systems require more complex and careful administration, because servers were designed to share their resources. Sharing requires more attention. Sharing opens avenues for attack.

If you button your Linux system up, it can be secure. However, if you run it with the telnet, ftp, ssh, and nfs ports all open to the world and without any security on them, you will eventually find more viruses and rootkits on your system than you can imagine. Believe me. I've been there. In fact, to my knowledge, the only system I've ever run that has been cracked was a Linux box. It was in part due to configuring the system to be more convenient rather than more secure.
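
As a rough illustration of what "buttoning up" means in practice, here is a minimal, Linux-only Python sketch that lists the TCP ports a system is currently listening on by reading /proc/net/tcp (the same information `netstat -tln` or `ss -tln` would show). Anything in that list you did not intend to share is a candidate for closing or firewalling.

```python
def listening_ports(proc_file="/proc/net/tcp"):
    """Return the IPv4 TCP ports this Linux system is listening on.

    IPv6 listeners live in /proc/net/tcp6 and would need the same
    treatment; this sketch keeps things minimal.
    """
    ports = set()
    with open(proc_file) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_address, state = fields[1], fields[3]
            if state == "0A":  # 0A is the kernel's code for LISTEN
                ports.add(int(local_address.split(":")[1], 16))
    return sorted(ports)

if __name__ == "__main__":
    print("TCP ports this system is listening on:", listening_ports())
```
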

I think it is appropriately instructive that the word rootkit derives from the name of the administrative account on Unix-derived systems. The first worm was also designed to attack Unix (not Windows) systems. Likewise, Ken Thompson's Turing Award lecture described how to embed a Trojan Horse in the C compiler, which shows that simply compiling from source is not a panacea either.

So, enjoy the security Linux is able to give you. Open Source is a good thing. There is ample reason why many cryptographers prefer trusting an open source algorithm. However, don't assume running Linux without appropriately configuring it makes you magically immune to attack. Life isn't quite that simple. Security still requires work. Always will.

Friday, June 11, 2010

Fooling Turing Tests for Chats with Bots

As always, I want to be upfront that the opinions in this posting are only mine and not official statements made by Intel.

Way back in college, I came across the program called Eliza. If you haven't ever encountered it, you simply type messages to it and it types messages back, just like a person on a chat-site. The program is realistic enough that people have been known to treat it as a real person. Therein lies an interesting question: how do you tell whether the "person" you are talking to is a real person and not a computer? That question is so important that the test for answering it is called the "Turing Test".

The Turing Test basically says a judge is allowed to talk (as one does on a chat-site, by typing messages back and forth) to two contestants, one of which is human and the other a computer. The computer loses if it is properly identified as a computer, but if it is misidentified as a human it wins.

Well, in our world, there are lots of variations on chat sites, where we type messages to people rather than talk to them. Some of them are social like Facebook and Twitter. Have you met people on one of those sites that you haven't yet met in real life? Are you sure they are for real? They aren't always real. There are "bots" on these sites whose sole job is to impersonate a person and in doing so get unsuspecting users to click on malware links.

We see the results of these from time to time, when there are outbreaks of tainted links circulating. When that happens, people post warnings not to click on links attached to messages like "Is this really you in this picture?" or "ha, ha, this is a funny one".

Fortunately, most of these attacks are simple. The bots are not very sophisticated impersonators. Many of us have learned not to click on links from people we don't already trust, and even from people we do trust, to click only links that are in line with what we already know of them. We apply our personal versions of the Turing Test relatively efficiently. This is partially because we are expecting these bots.

However, let's imagine someone who wants to cheat and win a Turing Test. Suppose someone wanted to insert a "computer" into the contest, but have it be real enough to fool people. One simple way of cheating is to have the "computer" be a real person. There was a famous chess-playing "computer" built just that way, called "the Turk". Inside this machine there was actually a small chess-playing person moving the levers.

As discussed here in Dark Reading or here in their PDF paper, recently some researchers figured out a way to do a variation on this cheat in a chat situation. Instead of hiding a human in the computer, they made the computer tie two humans together. That way both humans were talking to other humans, but both thought they were talking to the person the computer was pretending to be. On both sides of the chat, a human was moving the levers. However, on neither side was the person talking to whom they thought they were. Both chatters thought they were talking to the fake ID created for the bot, rather than the real person to whom the bot forwarded their conversation. The bot is executing a classic man-in-the-middle attack.
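
To make those mechanics concrete, here is a deliberately toy Python sketch of the relay pattern. The class and names are mine, it talks to no real chat service, and the "users" are just in-memory queues; it simply shows why each victim sees perfectly fluent, human responses — they are human.

```python
from queue import Queue

class RelayBot:
    """Toy illustration of the relay described above: two humans each
    believe they are chatting with the bot's fake identity, while the
    bot simply forwards each message to the other human."""

    def __init__(self, alice_inbox, bob_inbox):
        self.inboxes = {"alice": alice_inbox, "bob": bob_inbox}

    def forward(self, sender, text):
        # Deliver the message to the *other* party, relabeled as coming
        # from the bot's fake identity.  This is the whole trick: each
        # side gets fluent, human replies, because they come from a human.
        recipient = "bob" if sender == "alice" else "alice"
        self.inboxes[recipient].put(("friendly_stranger", text))
        # In the research project, the bot occasionally substituted its
        # own (benign) link or question here instead of relaying.

# Tiny usage example:
alice_inbox, bob_inbox = Queue(), Queue()
bot = RelayBot(alice_inbox, bob_inbox)
bot.forward("alice", "Hi, do I know you?")
print(bob_inbox.get())  # ('friendly_stranger', 'Hi, do I know you?')
```
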

However, even though the bot was primarily forwarding the conversations between two humans, it was still a bot, and it was able to deliver malicious payloads: either sending a link that could have pointed to malware (but didn't, since this was a research project) or asking a phishing question (which was also a benign surrogate question for the research purposes). The bot was able to get high response rates to both forms of attack, because the attack was in the context of an otherwise human-to-human conversation and thus was camouflaged. The exact details of the attacks, how they were inserted, and how success was measured are in the PDF paper or in this summary.

The effectiveness of these attacks, while worrisome, is dwarfed by a potential highlighted but not explored in the paper. A similar man-in-the-middle attack could be executed on online banking help chat sessions. If a bot is inserted into a banking help conversation, it could be similarly effective at phishing details from the users. The users would be expecting to be asked questions to validate them to the system, so extra questions about personal details would not be surprising. Similarly, the bot could insert questions to the help side that might help the attacker move money. Again, the help agent would not be surprised at questions on how to do various actions, as the user was calling with troubles and the helper is trained to ask "is there anything else I can assist you with?"

These results should be particularly scary for people worried about phishing attacks. The technology involved is not sophisticated. The idea, while creative, was not far-fetched and had been predicted.

I am a prophet!!!! I eluded to this at #phneutral http://bit.ly/9BgN6L via @intel_chris and @darkreading

That means there are probably malware writers out there who are already trying to figure out how to incorporate this attack into their repertoire. The key thing about this paper is that this kind of attack is no longer just an idea. There is a real proof of concept (PoC) implementation. It will not be hard for others to replicate this work.