Friday, June 11, 2010

Fooling Turing Tests for Chats with Bots

As always, I want to be upfront that the opinions in this posting are only mine and not official statements made by Intel.

Way back in college, I came across a program called Eliza. If you haven't ever encountered it, you simply type messages to it and it types messages back, just like a person on a chat site. The program is realistic enough that people have been known to treat it as a real person. Therein lies an interesting question: how do you tell whether the "person" you are talking to is a real person and not a computer? That question is so important that it has a name, the "Turing Test".
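Just for flavor, here is a toy sketch in Python of the pattern-matching trick Eliza relies on: it picks a fragment out of what you typed and hands it back as an open-ended question. The handful of rules below are my own invention, not Weizenbaum's actual script, which was far richer.

# A toy illustration of the pattern-matching style Eliza made famous.
# These few rules are my own; the original program had many more and
# also reflected pronouns (my -> your, I -> you, and so on).
import re

RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(user_text):
    """Echo a fragment of the user's own words back as a question."""
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."

print(respond("I am worried about my exams"))
# -> "Why do you say you are worried about my exams?"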

The Turing Test basically says a judge is allowed to talk (as one does on a chat site, by typing messages back and forth) to two contestants, one of which is human and the other a computer. The computer loses if it is properly identified as a computer, but if it is misidentified as a human, it wins.

Well, in our world, there are lots of variations on chat sites, where we type messages to people rather than talk to them. Some of them are social, like Facebook and Twitter. Have you connected with people on one of those sites whom you haven't yet met in real life? Are you sure they are for real? They aren't always real. There are "bots" on these sites whose sole job is to impersonate a person and, in doing so, get unsuspecting users to click on malware links.

We see the results of these attacks from time to time, when outbreaks of tainted links circulate. When that happens, people post warnings not to click on links attached to messages like "Is this really you in this picture?" or "ha, ha, this is a funny one".

Fortunately, most of these attacks are simple. The bots are not very sophisticated impersonators. Many of us have learned not to click on links from people we don't already trust, and even from people we do trust, to click only on links that fit what we already know about them. We apply our personal versions of the Turing Test relatively efficiently, partly because we are expecting these bots.

However, let's imagine someone who wants to cheat and win a Turing Test. Suppose someone wanted to insert a "computer" into the contest, but have it be real enough to fool people. One simple way of cheating is to have the "computer" be a real person. There was a famous chess-playing "computer" built just that way, called "the Turk". Inside the machine there was actually a small chess-playing person moving the levers.

As discussed in Dark Reading and in the researchers' PDF paper, some researchers recently figured out a way to do a variation on this cheat in a chat situation. Instead of hiding a human in the computer, they made the computer tie two humans together. That way both humans were talking to other humans, but both thought they were talking to the person the computer was pretending to be. On both sides of the chat, a human was moving the levers, yet on neither side was the person talking to whom they thought they were. Both chatters thought they were talking to the fake ID created for the bot, rather than to the real person to whom the bot forwarded their conversation. The bot is executing a classic man-in-the-middle attack.
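To make the forwarding idea concrete, here is a minimal sketch of the relay logic. Everything in it, the Channel class, the sample messages, the function names, is my own illustration under simplifying assumptions, not the researchers' code; their bot worked against real chat networks.

# Hypothetical sketch of the man-in-the-middle relay described above.
# Each Channel stands for one human's chat session with the bot's
# single fake identity.
from queue import Queue, Empty

class Channel:
    def __init__(self, human_name):
        self.human_name = human_name
        self.from_human = Queue()   # what this human types to the "friend"
        self.to_human = Queue()     # what the "friend" appears to say back

    def human_says(self, text):
        self.from_human.put(text)

    def human_reads(self):
        try:
            return self.to_human.get_nowait()
        except Empty:
            return None

def mitm_relay_step(alice, bob):
    """Forward any pending messages between the two humans.

    Each human thinks they are talking to the bot's fake profile; in
    fact every line they read was written by the other human.
    """
    for src, dst in ((alice, bob), (bob, alice)):
        while not src.from_human.empty():
            dst.to_human.put(src.from_human.get())

if __name__ == "__main__":
    alice, bob = Channel("Alice"), Channel("Bob")
    alice.human_says("Hey, long time no see!")
    bob.human_says("Hi! How have you been?")
    mitm_relay_step(alice, bob)
    print("Bob reads:", bob.human_reads())      # Alice's words, attributed to the fake ID
    print("Alice reads:", alice.human_reads())  # Bob's words, attributed to the fake ID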

However, even though the bot was primarily forwarding the conversations between two humans, it was still a bot, and it was able to deliver malicious payloads: it could either send a link, which could have pointed to malware (but didn't, since this was a research project), or ask a phishing question (which was also a benign surrogate question for research purposes). The bot got high response rates to both forms of attack because each attack appeared in the context of an otherwise human-to-human conversation and was thus camouflaged. The exact details of the attacks, how they were inserted, and how success was measured are in the PDF paper and its summary.
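The injection step can be sketched the same way. Again, the trigger probability and the payload text below are invented for illustration; the point is only that a line of the bot's own choosing occasionally rides along with the genuine human-to-human traffic, which is what camouflages it.

# Hypothetical continuation of the relay sketch: before forwarding a
# message, the bot occasionally adds its own payload to the stream.
import random

PAYLOADS = [
    "ha ha, is this really you in this picture? http://example.invalid/pic",   # link lure
    "by the way, what city were you born in? I'm doing one of those quizzes",  # phishing question
]

def maybe_inject(forwarded_text, inject_probability=0.05):
    """Return the messages the victim will actually see.

    Most of the time the human-to-human text passes through untouched;
    only rarely does a malicious line get appended.
    """
    messages = [forwarded_text]
    if random.random() < inject_probability:
        messages.append(random.choice(PAYLOADS))
    return messages

print(maybe_inject("see you at lunch?", inject_probability=1.0))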

The effectiveness of these attacks, while worrisome, is dwarfed by a potential attack highlighted but not explored in the paper. A similar man-in-the-middle attack could be executed on online banking help chat sessions. If a bot were inserted into a banking help conversation, it could be similarly effective at phishing details from users. The users would expect to be asked questions to validate themselves to the system, so extra questions about personal details would not be surprising. Similarly, the bot could insert questions to the help side that might help the attacker move money. Again, the help agent would not be surprised by questions about how to do various actions, since the user is calling with troubles and the agent is trained to ask "is there anything else I can assist you with?"
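Sketched in the same hypothetical style, the only new wrinkle in the banking scenario is that the bot would tailor its inserted questions to whichever side of the chat it is addressing; both the payload text and the direction labels below are made up for illustration.

# Hypothetical extension of the injection idea to a support-chat setting.
USER_SIDE_PAYLOADS = [   # asked of the customer, posing as the agent
    "For verification, could you also confirm your mother's maiden name?",
]
AGENT_SIDE_PAYLOADS = [  # asked of the agent, posing as the customer
    "While we're at it, how would I add a new payee for transfers?",
]

def payloads_for(direction):
    """Pick the lure that fits the side of the chat being addressed."""
    return USER_SIDE_PAYLOADS if direction == "to_user" else AGENT_SIDE_PAYLOADS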

These results should be particularly scary for people worried about phishing attacks. The technology involved is not sophisticated. The idea, while creative, was not far-fetched and had been predicted.

I am a prophet!!!! I alluded to this at #phneutral http://bit.ly/9BgN6L via @intel_chris and @darkreading

That means there are probably malware writers out there already trying to figure out how to incorporate this attack into their repertoire. The key thing about this paper is that this kind of attack is no longer just an idea. There is a real proof-of-concept (PoC) implementation, and it will not be hard for others to replicate the work.
