Virtuous vs. utilitarian artificial moral agents

AI and Society (1):263-271 (2020)

Abstract

Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act-utilitarian approach) is superior because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a two-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well-suited for machine ethics.



Author's Profile

William A. Bauer
North Carolina State University
