Abstract
This chapter explores the propriety of incorporating crowdsourced public input when programming morally contentious decisions to be made by automated vehicles (AVs). The chapter argues that moral values are necessarily pluralistic and require diverse action plans grounded in mutual interpretability among moral agents within a particular (relative) perspective. Thus, for a machine to be understood as acting morally, it must be interpretable as acting morally from within that perspective. This calls for programming context-dependent AV behaviors rather than a uniform, ‘global’ standard. Using crowdsourced responses to the MIT Media Lab’s Moral Machine Experiment (MME) as a test case, the chapter identifies some potentially diverse and morally contentious AV behaviors, evaluates the relevance and feasibility of such crowdsourced responses, and explores ways to incorporate these responses into public deliberations on moral AV behavior, including Value Sensitive Design and Participatory Technology Assessment. The diverse results of the MME, including identifiable pluralism among countries and regions, can serve as useful preliminary input into these more comprehensive methods for understanding the relevant moral contexts of AV behaviors.