VOX - The Student Journal of Politics, Economics and Philosophy
Could a machine think?
By Moses Lemuel
This question is certainly not new. In 1950, Alan Turing devised a test to answer it. A machine need only exhibit the ability to communicate like a human being to pass what is now known as the Turing Test (TT). However, does passing it really prove that a machine thinks? In this short essay, I will examine the possibility of thinking machines with reference to the TT,
since it has served as a useful point of reference in discussions on artificial intelligence (AI). I will find that, as with many controversial questions, there is no definite answer. Yet there is some reason to believe that a thinking machine is plausible, with the caveat that mere imitation of human behaviour is probably not enough to prove it.

The TT is a modified form of a party imitation game that involves a judge, a man and a woman. A machine takes the place of one of the participants other than the judge. Although the rules of the versions Turing proposed are more complex, the standard interpretation of the TT is simply one in which a judge communicates with a machine and a person without being able to see them. The judge directs questions at each of them in turn and attempts, through this questioning, to determine which of them is the machine. The machine tries to convince the judge that it is the person, while the person helps the judge identify the machine. If the machine succeeds in its goal, it passes. The rationale is that if a machine exhibits the ability to communicate like a person, then it would functionally be a thinking thing, just like a person. This assertion is rooted in functionalism, which holds that “what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions” (Levin, 2004).
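Since the standard set-up is essentially a small interaction protocol, a toy sketch may help fix ideas. The Python fragment below is purely illustrative: the class names, the canned questions and replies, and the judge’s guessing heuristic are all invented for this example and do not come from Turing’s paper or from any real competition.

```python
# A purely illustrative sketch of the standard Turing Test set-up.
# All names and behaviours here are invented for this example.
import random


class Machine:
    """Tries to pass as the person by producing human-sounding replies."""
    def reply(self, question: str) -> str:
        return "Oh, that's an interesting question. What do you think?"


class Person:
    """Answers honestly and tries to help the judge spot the machine."""
    def reply(self, question: str) -> str:
        return "Toast and coffee, since you ask. I'm the human here, for what it's worth."


class Judge:
    """Questions both hidden candidates, then guesses which label hides the machine."""
    questions = ["What did you have for breakfast?", "Describe the smell of rain."]

    def interrogate(self, candidates: dict) -> str:
        transcripts = {label: [] for label in candidates}
        for question in self.questions:
            for label, candidate in candidates.items():
                transcripts[label].append(candidate.reply(question))
        # Toy heuristic: accuse whichever candidate sounds most evasive.
        return max(transcripts, key=lambda lbl: sum("question" in a for a in transcripts[lbl]))


def run_test() -> bool:
    machine, person = Machine(), Person()
    # Hide the candidates behind anonymous labels so the judge cannot see them.
    labels = {"A": machine, "B": person} if random.random() < 0.5 else {"A": person, "B": machine}
    guess = Judge().interrogate(labels)
    return labels[guess] is not machine   # True: the judge was fooled and the machine passes


if __name__ == "__main__":
    print("Machine passes:", run_test())
```

Everything that matters philosophically happens, of course, inside the candidates’ replies and the judge’s guess; the sketch only shows how little the test itself constrains what goes on behind the labels.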
There are many who disagree with this rationale. John Searle, with his ‘Chinese Room’ thought experiment, has offered a seminal critique. In the experiment, he imagines being isolated in a room furnished with three sets of Chinese writing and with rules, written in English, that correlate the second set with the first and the third with the other two. The rules also instruct him to write the appropriate Chinese characters in response to the third set. With no knowledge of Chinese, and without knowing that the three sets correspond to a Chinese script, a story and a set of questions respectively, he might unknowingly be able to follow the rules and write, in Chinese script, the correct answers to the questions. The answers might even be “indistinguishable from those of native Chinese speakers” (Searle, 1980). Yet no understanding is involved in this exercise at all. Thus, a machine might be able to “manipulate formal symbols”, as Searle was doing in the Chinese Room, but in doing so it would not possess any intentionality, which arises only from an understanding of the content (Searle, 1980).
Hence, it cannot think in the sense that human beings think. Searle’s conclusion demands that a machine possess “intrinsic intentionality” (Searle, 1980) before it can be considered a thinking machine.

There is also practical criticism of the TT and of its ability to show that a machine thinks. A referee in the first TT-inspired Loebner Prize competition, held in 1991, reported negatively on the contest. He noted that trickery had prevailed: the winning computer program imitated ‘whimsical conversation’, which was unfalsifiable because it was nonsensical (Shieber, 1993). Ned Block, also a referee, took this observation to its conclusion by criticising the TT itself as “a sorely inadequate test of intelligence because it relies solely on the ability to fool people”, adding that such a test is “confoundingly simple to pass” (Shieber, 1993). The sole requirement of human imitation is evidently too limited, prone to mistaking crafty programming for thinking ability. Both this practical criticism and Searle’s criticism indicate that thinking machines are not so easily realised and are perhaps unachievable at the current level of technology.

Nonetheless, AI and behavioural experts have offered some good responses to the sceptics. One is to say that understanding is precisely the manipulation of formal symbols through the application of rules, and that computers do this just as a child does when she learns to add (Abelson, 1980).
Consequently, understanding improves simply when “more and more rules about a given content are incorporated” (Abelson, 1980). This implies that better programming might confer on a computer the ability to understand, and with understanding it would be capable of thinking.
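To make the rule-following picture concrete, the toy sketch below (invented for the purposes of this essay, not drawn from Abelson’s commentary) carries out addition purely by looking up and applying rules over digit symbols, much as a child following the column-addition procedure does.

```python
# Addition by pure symbol manipulation. The lookup table plays the role of
# memorised single-digit rules ("8 plus 4 is 2, carry 1"); it is built with
# ordinary arithmetic only for brevity. The addition procedure itself merely
# shuffles symbols and never represents quantity. All names are invented
# for this illustration.
DIGITS = "0123456789"
SUM_TABLE = {(a, b): (DIGITS[(i + j) % 10], (i + j) >= 10)
             for i, a in enumerate(DIGITS) for j, b in enumerate(DIGITS)}

def add_numerals(x: str, y: str) -> str:
    """Add two decimal numerals written as strings, column by column, right to left."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    result, carry = "", False
    for a, b in zip(reversed(x), reversed(y)):
        digit, next_carry = SUM_TABLE[(a, b)]   # rule: combine the two column symbols
        if carry:                               # rule: a carry means "add one more"
            digit, extra = SUM_TABLE[(digit, "1")]
            next_carry = next_carry or extra
        result = digit + result
        carry = next_carry
    return ("1" + result) if carry else result

print(add_numerals("478", "964"))   # prints 1442, produced without ever computing with numbers
```

Whether following such rules ever amounts to understanding is, of course, precisely what is at issue between Searle and his critics.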
However, human beings possess sensorimotor learning capabilities, which allow us to understand the world in ways that the mere application of rules to symbols in the abstract cannot. As long as machines are unable to experience the world as we do, it remains possible to maintain that they are not capable of understanding nearly as much and therefore cannot be fully capable of thinking. The idea of the ‘super-robot’ has been proposed as a solution to this problem. This hypothesis accepts that understanding entails having “all of the information necessary to construct a representation of events in the outside world”, and that this must be accomplished by mentally manipulating symbols that represent the outside world and checking them against the ‘rules’ established by sensory experience (Bridgeman, 1980).
What is needed, then, to fulfil the vision of a thinking machine with full person-like intentionality is a robot capable of sensorimotor learning (Bridgeman, 1980). And, as a testament to the force of this idea, Searle has expressed agreement that such a robot might indeed be a thinking machine (Searle, 1980).

Thus, we have arrived at a possible answer. Although our treatment of the question is certainly far from comprehensive, we can reasonably infer from it that machines could think. Proving it, however, would require much more than a trial by the standard TT.
Bibliography:

Abelson, P. (1980). ‘Searle’s argument is just a set of Chinese symbols’, commentary on Searle, ‘Minds, brains, and programs’. The Behavioral and Brain Sciences, 3: 424-425.

Bridgeman, B. (1980). ‘Brains + programs = minds’, commentary on Searle, ‘Minds, brains, and programs’. The Behavioral and Brain Sciences, 3: 427-428.

Levin, J. (2004). ‘Functionalism’. The Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/entries/functionalism/ [Accessed 21 July 2009].

Oppy, G. and Dowe, D. (2008). ‘The Turing Test’. The Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/entries/turing-test/ [Accessed 21 July 2009].

Searle, J. (1980). ‘Minds, brains, and programs’. The Behavioral and Brain Sciences. Available at http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html [Accessed 21 July 2009].

Searle, J. (1980). ‘Intrinsic intentionality’, response to commentaries on ‘Minds, brains, and programs’. The Behavioral and Brain Sciences, 3: 450-456.

Shieber, S. (1993). ‘Lessons from a Restricted Turing Test’. Available at http://www.eecs.harvard.edu/shieber/Biblio/Papers/loebner-rev-html/loebner-rev-html.html [Accessed 21 July 2009].
_____________________________
Moses Lemuel is a third-year undergraduate reading PPE at the University of York.