
What does the Turing Test prove?


To say that a given entity ‘passes the Turing Test’ is to say that, in some context or other, its behavior is indistinguishable from that of a being whose conduct is guided by intelligence. Turing believed, for reasons to be identified forthwith, that there is no difference between acting like an intelligent being and being one. Turing’s answer to the question ‘is the Turing Test a good test of intelligence?’ is therefore ‘yes.’ In this essay, Turing’s position will be examined and refuted. I will start by examining the hypothetical scenario through which Turing himself attempts to motivate his answer to this question, and I will use a variant of that same scenario to show that Turing’s reasoning is spurious and, moreover, that his position is false. Then I will examine John Searle’s attempt to refute Turing’s position. It will be shown that, although Searle is right to regard Turing’s position as false, Searle’s attempted refutation has little force.

Turing introduced the concept of the Turing Test through the following hypothetical. A written but rapid exchange between two parties is occurring. One of those parties is a machine, and the other is a person. A second person is monitoring the exchange. This person knows that one of the parties involved is a person and that the other is a machine, and he also knows what each party writes and when it writes it, but he has no additional information as to either one’s identity. According to Turing, if this person is consistently able to figure out which of the two parties is the machine and which one is the person, then the machine does not have intelligence. On the other hand, if that person cannot consistently determine which of the parties is the machine, then that machine has intelligence.

We can use a variant of Turing’s own scenario to demonstrate the spuriousness of his reasoning. Let X be the person involved in the exchange just described and let Y be the machine; and suppose that the exchange they are having concerns arithmetic. X is making arithmetical statements (e.g. “1+1=2”, “1+2=3”, “1+2≠3”), and Y is evaluating those statements. If X says “2³=8”, Y responds by saying “that is correct”; and if X says “2³=6”, Y responds by saying “that is incorrect.” As it happens, let us suppose, X, though an extremely gifted playwright and poet, tends to overthink trivial matters and therefore has trouble producing correct statements of arithmetic. Consequently, when ‘conversing’ with Y, the statements that he makes are often false. But he never generates those statements mechanically.
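It is worth pausing on just how little Y’s role requires. A few lines of entirely reflexive code suffice to play it (a hypothetical sketch; the function name and the statement format are my own assumptions, not anything Turing or the scenario specifies):

```python
import re

def evaluate(statement: str) -> str:
    """Y's reflex: mechanically check a claim like '2^3=8' or '1+2≠3'.

    Nothing here understands arithmetic -- it is parsing and comparison,
    standing in for the truth-tracking reflexes built in by Y's creator.
    """
    negated = "≠" in statement
    lhs, rhs = re.split(r"[≠=]", statement, maxsplit=1)
    # eval() on plain arithmetic is a convenient stand-in for whatever
    # fixed rules the machine applies; '^' is written '**' in Python.
    holds = eval(lhs.replace("^", "**")) == eval(rhs.replace("^", "**"))
    return "that is correct" if holds != negated else "that is incorrect"
```

Fed X’s utterances, this fragment answers flawlessly (`evaluate("2^3=8")` yields “that is correct”, `evaluate("2^3=6")` “that is incorrect”), which is precisely the point: flawless performance in this constricted context demands no intelligence at all.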

Turing would have us believe that, in this exchange, Y is the intelligent one and X is the unintelligent one. But that is the exact opposite of the truth. The reason X is so often wrong is (so we may suppose) that he is thinking too much, albeit in the wrong ways; and the reason Y is always right is that its behavior is completely reflexive, with the qualification that, thanks to the intelligence of Y’s creator, the reflexes in question track the relevant body of truths. What makes a creature intelligent, contrary to what Turing assumes, is not what it does but why it does it. In some contexts, intelligence leads to the wrong behaviors and its absence leads to the right ones. Given a sufficiently constricted context, an unintelligent or minimally intelligent being can outperform a hyper-intelligent one: someone with little mathematical intelligence can be trained to produce sums and products faster than Gauss could, and will therefore do a better job of passing the corresponding Turing Tests. There is no single form of conduct engaged in by intelligent beings that cannot be mimicked by unintelligent ones, and for this reason intelligence is never identical with a tendency to engage in a certain kind of overt behavior in a certain kind of context. A given Turing Test therefore provides, at most, highly defeasible, context-internal evidence of intelligence; a creature’s passing such a test can never be regarded as definitive proof of intelligence, and the Turing Test is consequently not, in any significant sense, a good test of intelligence.

Like us, John Searle holds that overt behavior is nothing more than evidence of intelligence, and he therefore rejects Turing’s contention to the contrary. Unfortunately, Searle’s defense of his own position is a failure and actually lends undeserved credence to Turing’s position. Searle asks us to suppose that, inside a giant robot (or robot-like carapace), there lives a man who speaks no Chinese but who has access to Chinese dictionaries, pronunciation manuals, and the like. Despite not speaking a word of Chinese himself, this man has become extremely adept at using these resources to decrypt Chinese statements and respond to them with content-appropriate Chinese utterances of his own, which are electronically broadcast in a metallic voice to the external world, making it seem to those outside that there exists a Chinese-speaking robot. According to Searle, a consequence of Turing’s position is that there really does exist a Chinese-speaking robot; and Searle takes it for granted that those who hold this are simply wrong.

The problem is that the behavior of this ‘robot’ (i.e. the metallic husk with the man inside it) is intelligent. After all, the sounds that it produces are guided by an understanding of what linguistic symbols mean and of what those meanings make it appropriate to say, and those who believe the ‘robot’ to be intelligent are not entirely off the mark. Although these people are wrong about the specific mechanisms involved, they are right that intelligence is responsible for the robot’s behavior. What about their belief that the robot speaks Chinese? Searle takes it for granted that this belief is false. But is it? There is somebody inside the metallic husk who is intelligently mapping English statements onto Chinese statements and vice versa, and this person’s behavior is responsible for the noises leaving the robot-husk. The essence of a given person’s supposedly mistaken belief that ‘the robot’ speaks Chinese is the correct belief that the sounds coming out of the robot are intelligent, as opposed to mechanical, responses to the sounds directed at it.

In conclusion, although a tendency to engage in certain forms of behavior in certain contexts can obviously be evidence of intelligence, such a disposition cannot itself be identical with intelligence, the reason being that any behavior that an intelligent creature is likely to engage in can be replicated by an unintelligent creature. Consequently, if Turing’s position is taken to be that overt behavior can be evidence of intelligence, then it is correct but trivial; and if it is taken to be that overt behavior can itself be intelligence, then it is simply false. Though motivated by an awareness of these truths, John Searle’s miscarried attempt to refute Turing ends up giving the latter’s position an undeserved sheen of legitimacy.


© 2020 Philosophypedia | All Rights Reserved | Designed With ❤ Wibitech