When we think about artificial intelligence, we think of reaching its holy grail: intelligence that rivals a human being’s. But how can we gauge success, and is that ultimately a worthy goal? We have been fascinated with intelligent machines, nonbiological creations capable of thinking and acting like biological creatures, for centuries. As early as 1637, the French philosopher, mathematician and scientist René Descartes raised the idea in his book Discourse on Method. He wrote:

[H]ow many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

Starting with the Turing Test

Descartes’s thoughts are said to prefigure Alan Turing’s famous Turing test, which assesses a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human being. Turing introduced the test in his 1950 paper “Computing Machinery and Intelligence.” He proposed that if a machine could generate human-like responses and pass for a human in natural-language conversation, it could be said to have acquired “true intelligence.”

And true intelligence is what many researchers are after. Some futurists predict the arrival of a technological singularity, in which machines become capable of designing ever smarter machines. Such an era, they say, would herald exponential growth in artificial intelligence and could arrive in just two or three decades. In this age of superintelligence, machines would, on this view, acquire a mind and become conscious.

The Chinese Room Thought Experiment

However, not everyone agrees with this vision of the future. In 1980, John Searle, author and professor of philosophy at the University of California, Berkeley, proposed the famous Chinese room thought experiment, which purports to refute the idea that a machine could possess “artificial general intelligence,” a position Searle called “strong AI.” The thought experiment, framed as a response to the Turing test, holds that running a program cannot give a computer a mind or consciousness, which are essential to general intelligence, no matter how intelligently the program may make it behave.

Later, Searle challenged the claim, made by Ray Kurzweil in his best-selling book The Singularity Is Near, that a computer running a program could have a “mind” and “consciousness.” Searle also presented a variant of the Chinese room argument to address the claim that a computer could defeat the world chess champion through genuine machine intelligence. IBM’s Deep Blue did in fact defeat Garry Kasparov in 1997, but Searle argued that the machine was never really playing chess; it was merely manipulating “a bunch of meaningless symbols.”
