Abstract
Emotive artificial intelligences are physically or virtually embodied entities whose behavior is driven by artificial intelligence, and which use expressions usually associated with emotion to enhance communication. These entities are sometimes thought to be deceptive, insofar as their emotive expressions are not connected to genuine underlying emotions. In this paper, I argue that such entities are indeed deceptive, at least given a sufficiently broad construal of deception. However, while philosophers and other commentators have drawn attention to the deceptive threat posed by emotive artificial intelligences, I argue that such entities also pose an overlooked skeptical threat. In short, the widespread existence of emotive signals disconnected from underlying emotions threatens to encourage skepticism of such signals more generally, including the emotive signals used by human persons. Thus, while designing artificially intelligent entities to use emotive signals is thought to facilitate human-AI interaction, this practice risks compromising human-human interaction.