Abstract
The functionalist conception of artificial moral agency holds that certain real-world AI systems should be considered moral agents because doing so benefits the recipients of AI actions. According to this view, the human agents who are causally accountable for an AI’s morally significant actions are deemed blameworthy or praiseworthy and may face sanctions or rewards, regardless of whether they intended those actions to occur. By meta-analyzing psychological experiments, this paper shows that the functionalist conception aligns closely with the folk understanding of artificial moral agency. People treat certain real-world AI systems as moral agents even when they do not attribute consciousness or free will to them. Moreover, when ordinary people attribute moral responsibility to these systems, they also distribute that responsibility among the systems’ users, programmers, and manufacturers. This distribution holds even when people do not regard the human agents’ causal contributions to the systems’ actions as wrongful.