Moral disagreement and artificial intelligence

AI and Society 39 (5):2425-2438 (2024)

Abstract

Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. _Moral solutions_ apply a moral theory or related principles and largely ignore the details of the disagreement. _Compromise solutions_ apply a method of finding a compromise that takes information about the disagreement as input. _Epistemic solutions_ apply an evidential rule that treats the details of the disagreement as evidence of moral truth. Proposals for all three kinds of solutions can be found in the AI ethics and value alignment literature, but little has been said to justify choosing one over the others. I argue that the choice is best framed in terms of _moral risk_.
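
A minimal sketch of the taxonomy described above, purely for illustration: the paper is conceptual and contains no code, and the option names, parties, reliability weights, and aggregation rules below are hypothetical. The point is only to show how the three kinds of solutions differ in what they take as input: a moral solution ignores the disagreement, a compromise solution aggregates it, and an epistemic solution weighs it as evidence.

```python
from collections import Counter

# A toy "moral disagreement": several parties endorse different options.
judgments = {
    "party_A": "option_1",
    "party_B": "option_2",
    "party_C": "option_2",
}

def moral_solution(options, theory_ranking):
    """Apply a fixed moral theory; the disagreement itself plays no role."""
    # theory_ranking: options ordered best-to-worst by the chosen theory.
    return next(opt for opt in theory_ranking if opt in options)

def compromise_solution(options, judgments):
    """Take the disagreement as input and aggregate it (here: plurality vote)."""
    votes = Counter(j for j in judgments.values() if j in options)
    return votes.most_common(1)[0][0]

def epistemic_solution(options, judgments, reliability):
    """Treat each party's verdict as evidence of the moral truth,
    weighted by an (assumed) estimate of that party's reliability."""
    scores = Counter()
    for party, verdict in judgments.items():
        if verdict in options:
            scores[verdict] += reliability[party]
    return max(scores, key=scores.get)

options = {"option_1", "option_2"}
print(moral_solution(options, ["option_1", "option_2"]))   # option_1
print(compromise_solution(options, judgments))              # option_2
print(epistemic_solution(options, judgments,
      {"party_A": 0.9, "party_B": 0.4, "party_C": 0.4}))    # option_1
```

Note how the same disagreement can yield different verdicts depending on which kind of solution is applied; on the paper's framing, choosing among them is itself a decision made under moral risk.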

Other Versions

No versions found

Links

PhilArchive

Analytics

Added to PP
2023-06-05

Downloads
125 (#171,492)

Downloads (past 6 months)
28 (#118,600)


Author's Profile

Pamela Robinson
University of British Columbia, Okanagan

Citations of this work

No citations found.
