The statement ‘the Turing Test is a good test of intelligence’ is ambiguous. On the one hand, it can be taken to mean that a given entity’s acting like an intelligent being is evidence of its having intelligence, meaning that such behavior is good but not definitive reason to ascribe intelligence to the being in question. On the other hand, it can be taken to mean that a given entity’s acting like an intelligent creature is constitutive of its having intelligence, meaning that there is no difference between its being intelligent and its behaving (or being able to behave) like an intelligent creature. The first disambiguation, so I will argue, is correct only if subject to heavy qualifications, and the second disambiguation, so I will argue, is unqualifiedly incorrect.
Turing introduced the concept of the Turing Test through the following hypothetical. A written but rapid exchange between two parties is occurring. One of those parties is a machine, and the other is a person. A second person is monitoring the exchange. This person knows that one of the parties involved is a person and that the other is a machine, and he also knows what each party writes and when it is written, but he has no additional information as to either one’s identity. According to Turing, if this person is consistently able to figure out which of the two parties is the machine and which one is the person, then the machine does not have intelligence. On the other hand, if that person cannot consistently determine which of the parties is the machine, then that machine has intelligence.
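Turing’s criterion can be sketched, very loosely, in a few lines of Python. This is a hedged illustration, not Turing’s own formulation: the function names, the number of rounds, and the 55% threshold for ‘consistently able to figure out’ are all assumptions introduced here for concreteness.

```python
import random

def passes_test(identify_machine, n_rounds: int = 10_000) -> bool:
    """Sketch of the pass criterion: identify_machine() returns the
    interrogator's guess (True = 'party A is the machine'). The machine
    passes if identifications are no better than chance over many rounds."""
    correct = 0
    for _ in range(n_rounds):
        machine_is_a = random.random() < 0.5   # hidden random assignment
        if identify_machine() == machine_is_a:
            correct += 1
    # 'Consistently able to figure out' is read here (an assumption) as
    # doing clearly better than chance; 55% or below counts as failure
    # to distinguish, i.e. the machine passes.
    return correct / n_rounds <= 0.55

# An interrogator reduced to coin-flipping cannot unmask the machine:
random.seed(0)
print(passes_test(lambda: random.random() < 0.5))  # → True
```

The point of the sketch is only that the criterion is purely statistical: nothing in it inspects why either party writes what it writes, which is precisely the feature the argument below exploits.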
We can use a variant of Turing’s own scenario to demonstrate the spuriousness of Turing’s reasoning. Let X be the person involved in the exchange just described and let Y be the machine; and suppose that the exchange they are having concerns arithmetic. X is making arithmetical statements (e.g. “1+1=2”, “1+2=3”, “1+2≠3”), and Y is evaluating those statements. If X says “2³=8”, Y responds by saying “that is correct”; and if X says “2³=6”, Y responds by saying “that is incorrect.” As it happens, let us suppose, X, though an extremely gifted playwright and poet, has a blind spot for mathematics. He cannot for the life of him understand why “2³=6” is false or why “2³=8” is true. Consequently, when ‘conversing’ with Y, the statements that he makes are often false. But he never generates those statements mechanically. In some cases, he draws on poetic inspiration; in other cases, he considers the aesthetic properties of the inscriptions involved; sometimes he grasps the mathematical principles but misapplies them; and occasionally he grasps those principles and correctly applies them. In each case, a great deal of intelligence is involved, but it is usually either the wrong kind of intelligence or the right kind of intelligence but applied in the wrong way.
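Y’s side of this exchange can be sketched in a few lines of Python (the function name is illustrative, not from the original). The point of the sketch is that nothing in the code understands exponentiation; it merely recomputes one side of the claim and compares it with the other.

```python
# A minimal sketch of Y: a purely reflexive evaluator of claims such as
# "2**3=8". It grasps no mathematics; it only recomputes the left-hand
# side and compares the result with the right-hand side.

def evaluate(statement: str) -> str:
    """Return Y's verdict on an arithmetical claim of the form 'expr=n'."""
    left, right = statement.split("=")
    # eval() stands in for the fixed reflexes built in by Y's creator;
    # whatever 'intelligence' is in play lives in the design, not in Y.
    if eval(left) == int(right):
        return "that is correct"
    return "that is incorrect"

print(evaluate("2**3=8"))  # → that is correct
print(evaluate("2**3=6"))  # → that is incorrect
print(evaluate("1+1=2"))   # → that is correct
```

Such a program answers every query in the exchange flawlessly, yet its flawlessness is exactly what the next paragraph diagnoses: the correctness is the creator’s, wired in as reflex, not the evaluator’s.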
Turing would have us believe that, in this exchange, Y is the intelligent one and X is the unintelligent one. But that is the exact opposite of the truth. The reason X is so often wrong is that he is thinking too much, albeit in the wrong ways; and the reason Y is always right is that its behavior is completely reflexive, with the qualification that, thanks to the intelligence of Y’s creator, the reflexes in question track the relevant body of truths. What makes a creature intelligent is not what it does, contrary to what Turing assumes, but why it does it. In some contexts, intelligence leads to the wrong behaviors and its absence leads to the right ones. To be sure, intelligence is more likely than its absence to generate the right behaviors (including the right statements). But given a sufficiently constricted context, an unintelligent or minimally intelligent being can outperform one that is hyper-intelligent: someone with little mathematical intelligence can be trained to produce sums and products faster than Gauss could and will therefore do a better job of passing the corresponding Turing Tests. In certain contexts, certain behaviors are relatively likely to be engaged in only by intelligent creatures, and certain behaviors may therefore be relatively good evidence of intelligence; but there is no single form of conduct engaged in by intelligent beings that cannot be mimicked by unintelligent ones, and for this reason intelligence is never identical with a tendency to engage in a certain kind of overt behavior in a certain kind of context. So yes: Turing Tests are sometimes good tests of intelligence, but only in the sense that they may provide reasonably strong evidence of intelligence, not in the sense that they provide definitive proof of it.
Like us, John Searle holds that overt behavior is nothing more than evidence of intelligence, and he therefore rejects Turing’s contention to the contrary. Unfortunately, Searle’s defense of his own position is a failure and actually gives undeserved credence to Turing’s position. Searle asks us to suppose that, inside of a giant robot (or robot-like carapace), there lives a man who speaks no Chinese but who has access to Chinese dictionaries, pronunciation manuals, and the like. Despite not speaking a word of Chinese himself, this man has become extremely adept at using these resources to decrypt Chinese statements and respond to them with content-appropriate Chinese utterances of his own, which are electronically broadcast in a metallic voice to the external world, making it seem to those in the external world that there exists a Chinese-speaking robot. According to Searle, a consequence of Turing’s position is that there really does exist a Chinese-speaking robot; and Searle takes it for granted that those who hold this are simply wrong.
The problem is that the behavior of this ‘robot’ (i.e. the metallic husk with the man inside it) is intelligent. After all, the sounds that it produces are guided by an understanding of what linguistic symbols mean and of what those meanings make it appropriate to say, and those who believe the ‘robot’ to be intelligent are not entirely off the mark. True—those people are very wrong as to the mechanisms involved. They believe electrical circuitry, as opposed to biological wetware, to be doing all the work. But they are right about intelligence being responsible for the robot’s behavior. What about their belief that the robot speaks Chinese? Searle takes it for granted that this belief is false. But is it? There is somebody inside the metallic husk who is intelligently mapping English statements onto Chinese statements and vice versa, and this person’s behavior is responsible for the noises leaving the robot-husk. The essence of a given person’s supposedly mistaken belief that ‘the robot’ speaks Chinese is the correct belief that the sounds that come out of the robot are intelligent, as opposed to mechanical, responses to the sounds that are directed at it.
In conclusion, although a tendency to engage in certain forms of behavior in certain contexts can obviously be evidence of intelligence, such a disposition cannot itself be identical with intelligence, the reason being that any behavior that an intelligent creature is likely to engage in can be replicated by an unintelligent creature. Consequently, if Turing’s position is taken to be that overt behavior can be evidence of intelligence, then it is correct but trivial; and if it is taken to be that overt behavior can itself be intelligence, then it is simply false. Though motivated by an awareness of these truths, John Searle’s miscarried attempt to refute Turing ends up giving the latter’s position an undeserved sheen of legitimacy.