Abstract
In their enthusiasm for programming, computational linguists have tended to lose sight of what humans do. They have conceived of conversations as independent of sound and the bodies that produce it. Thus, implicit in their simulations is the assumption that text is the essence of talk. In fact, unlike electronic mail, conversations are acoustic events. During everyday talk, human understanding depends both on the words spoken and on fine interpersonal vocal coordination. When utterances are analysed into sequences of word-based forms, however, these prosodic aspects of language disappear. Therefore, to investigate the possibility that machines might talk, we propose a communion game that includes this interpersonal patterning. Humans and machines would talk together and, based on recordings of them, a panel would appraise the relative merit of each machine's simulation by how true to life it sounded. Unlike Turing's imitation game, the communion game overtly focuses attention not on intelligence but on language. It is designed to facilitate the development of social groups of adaptive robots that exploit complex acoustic signals in real time. We consider how the development of such machines might be approached.