This past Saturday the first in a series of steps toward our robotic overlords – at least in the way I view them – was accomplished when a program managed to fool a third of the judges in a Turing Test competition in London. The fictitious persona, Eugene Goostman, was created by software engineers Vladimir Veselov (Russian) and Eugene Demchenko (Ukrainian).
Goostman purported to be a 13-year-old Ukrainian boy with a penchant for hamburgers and candy. But despite a third of the judges being convinced that Eugene was a typical teenager, the artificial intelligence had no experiences that would give it a true reference point with which to fool people.
But just imagine when hardware catches up with software advances to the point where machines can sample the world around them in more human ways. Will they be able to describe the bouquet of a wine or the taste of a nice ribeye (sorry for the imagery, vegetarians) well enough not only to fool us biological entities, but to serve as a reference point for our own tastes? Maybe Siri and her ilk will be recommending restaurants based not only on proximity and ratings, but on perceived personal experience as well.
But where could it lead from there? Well, pick your favorite robotic-overlord path; I’m sure they all lead to the end of human dominance. However, I’m hoping to watch the sentient apes battle the robots for the crown, and then we can pick off the beaten-down winner.
The bad news? Ape sentience is not coming along fast enough to be a threat to Skynet. So, if we can do anything to speed up the arrival of the Master Simian race, we should probably be doing that. I’m not sure we have the luxury of waiting for our ever so unlikely allies…
The Turing test is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine’s ability to render words into audio.
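The setup described above can be sketched in a few lines of code. This is only a toy simulation, not the actual contest software: both "participants" are canned stand-ins, and all names, replies, and the single-question format are my own assumptions for illustration. It shows the essential structure, though: the judge sees two anonymous text channels, one backed by a machine and one by a human, and must guess which is which.

```python
import random

# Toy sketch of a text-only Turing test session. Both respondents are
# simulated with canned replies; names and replies are hypothetical.

def machine_reply(question: str) -> str:
    # A Goostman-style deflection: evade the question rather than answer it.
    return "Ha, I'd rather talk about hamburgers. What do you like?"

def human_reply(question: str) -> str:
    return "Good question -- let me think about that for a second."

def run_session(judge_guess, rng) -> bool:
    """One session: the judge sees two anonymous channels, A and B, one
    backed by the machine and one by the human, in random order.
    Returns True if the judge correctly identified the human."""
    pair = [("machine", machine_reply), ("human", human_reply)]
    rng.shuffle(pair)
    transcripts = {
        label: respond("What did you do last weekend?")
        for label, (identity, respond) in zip("AB", pair)
    }
    guess = judge_guess(transcripts)  # judge names the channel they think is human
    actual_human = "A" if pair[0][0] == "human" else "B"
    return guess == actual_human

def fooled_rate(judge_guess, sessions: int = 300, seed: int = 0) -> float:
    """Fraction of sessions in which the machine fooled the judge."""
    rng = random.Random(seed)
    fooled = sum(not run_session(judge_guess, rng) for _ in range(sessions))
    return fooled / sessions

# A judge with no signal to go on is fooled about half the time; the 2014
# event declared a "pass" at fooling more than 30% of the judges.
def coin_flip_judge(transcripts) -> str:
    return random.choice(list(transcripts))
```

A real event replaces `coin_flip_judge` with people who can ask follow-up questions over many exchanges, which is exactly where a persona like a 13-year-old non-native English speaker buys the machine so much slack.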