Abstract
John Haugeland, in Artificial Intelligence: The Very Idea, predicts that it will not be possible to create systems which understand discourse about people unless those systems share certain characteristics of people, specifically what he calls “ego involvement”. I argue that he has failed to establish this. In fact, I claim that his argument fails at two points. First, he has not established that it is impossible to understand ego involvement without simulating the processes which underlie it. Second, even if the first point is granted, the conclusion does not follow, for it is possible to simulate ego involvement without having it.