Back in 1950, Alan Turing, one of the giants of computational theory, proposed replacing the question “can machines think?” (Turing was the first to even raise the idea of machine intelligence, back in 1947) with a test in which a human chats with a machine. If the human couldn’t tell whether they were talking to a human or a machine, he reasoned, then shouldn’t we consider whoever was on the other side intelligent? For instance, if a spaceship appeared out of the sky and landed, and we began chatting with whoever was inside, and they carried on a conversation with us like a sentient being, wouldn’t we consider that an intelligence? But there seems to be a prejudice on our side when it comes to things we create...
ELIZA was the first computer program to fool some humans, back in 1966 (though not most of them), and was the predecessor of today’s chatbots. Each year the Loebner prize competition runs the Turing test...so far there have been no clear winners, although they are getting closer and closer (you can try chatting yourself with the 2011 Loebner prize winner Rosette here). The “More human than a human” article I put up on PhutureNews describes the day when a machine wins the Loebner grand prize. The new chatbots being created this year may give the contest a serious run in 2012, with SuperChatBot being trained on social media comments as a data source. I guessed 2017 as the year machines will finally beat the Turing test, and so far 91% of people agree with this prediction. Already the Cyberlover malware chatbot convinces lonely people across the web that they are chatting with a real human being, emerging as the first “valentine risk”...in the not-too-distant future it will be impossible to tell whether an email, text, or even voice conversation is with a real human being, which opens up some very interesting and potentially dangerous issues.
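For the technically curious: ELIZA needed nothing more than keyword patterns and canned responses that reflect the user’s own words back at them. Here’s a minimal Python sketch of that pattern-and-reflection trick; the particular rules and names are my own illustrative choices, not the original DOCTOR script.

```python
import re
import random

# Pronoun reflections, so "I am sad" can be echoed back as "you are sad".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A few ELIZA-style rules: a keyword pattern plus canned responses that
# reuse the captured fragment. The real DOCTOR script had many more.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?", "What other reasons come to mind?"]),
]

# Fallbacks when no keyword matches -- the classic therapist dodge.
DEFAULTS = ["Please tell me more.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, responses in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    # e.g. "How long have you been worried about machines?"
    print(respond("I am worried about machines"))
```

That a few dozen lines like these fooled anyone at all says as much about how eager we are to see a mind on the other end of the conversation as it does about the program itself.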
The bigger question is this: after IBM’s Deep Blue beat Kasparov at chess back in the ’90s, and another IBM creation, Watson, beat the pants off humans at Jeopardy! last year, when a machine finally beats the Turing test, at what point do we need to begin considering machines as intelligent in their own right? If you prick us, do we not bleed? And when do we need to begin considering the rights of such machine intelligences...?