Abstract
Finds fault with Turing's answer to the question, ‘Can a computer think?’ Turing held that if the answers given by a computer and by a person leave an interpreter unable to discriminate between them, then the computer must be said to be able to think. The author objects that for a computer to think, it must mean something by the answers it gives. Consequently, without evidence that a computer not merely possesses the syntax of the language it responds in but also has a semantics, the findings of Turing's Test cannot serve as evidence for the claim that computers can, even in principle, think. Understanding the semantics of an object or creature requires that the interpreter be able to observe what in the world, shared by interpreter and interpreted, causes the latter's responses; having a semantics requires a history of engagement with others and with objects in the world. Turing's test for thought is inadequate, according to the author, not because it restricts the evidence to what can be observed about the computer from the outside, but because it does not allow enough of what is outside to be observed.