Marking the passing of the Turing Test
According to various news outlets over the last day or so (see, for example, The Register), a team of programmers have crafted a chatbot that is claimed to have passed the famous Turing Test. I probably wouldn’t give it a true “pass” mark myself, though it is an important development.
In my understanding of the test, your computer has to fool 30% of a random sample of judges into believing they’re talking to a human. I think it’s implicit in this definition that the human in question is your average Joe. It seems on the verge of cheating to explain away your chat program’s ignorance of the world and slightly broken English by saying he’s a 13-year-old Ukrainian boy. Otherwise, we could all claim a 1-year-old baby was at the keyboard and pass the test quite easily.
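For the avoidance of doubt, that 30% criterion is a simple threshold, which can be sketched as follows (a hypothetical illustration: the function name and the judge counts are mine, not taken from the actual event):

```python
def passes_turing_threshold(judges_fooled: int, total_judges: int,
                            threshold: float = 0.30) -> bool:
    """Return True if the fraction of judges fooled meets the threshold."""
    return judges_fooled / total_judges >= threshold

# Hypothetical panels: 33 of 100 judges fooled passes; 29 of 100 does not.
print(passes_turing_threshold(33, 100))  # True
print(passes_turing_threshold(29, 100))  # False
```

Of course, the arithmetic is the easy part; the argument here is about who counts as a fair benchmark human in the first place.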
It looks as though expert systems that can fool humans into accepting them as fellow minds are definitely in the near future, and there have been worries about them tricking us into giving away our security details. Presumably this is mainly a problem for anyone who would already give their details away to a human who talked them into it? We can rest easy (or at least a bit easier), though, as Eugene shows that they’re not here yet.
Unless you’re in the habit of handing your credit card details to 13-year-old Ukrainian boys…
Update, 11th June 2014
As more information comes out (such as David Allen Green’s piece), it seems that the “pass” is mainly an optimistic press release that the media have jumped on.
So, what do you think?