Aligning artificial intelligence with moral intuitions: an intuitionist approach to the alignment problem

AI and Ethics 1–11 (2024)

Abstract

As artificial intelligence (AI) continues to advance, a key challenge is ensuring that AI aligns with certain values. However, in diverse and democratic societies, reaching a normative consensus is difficult. This paper addresses the methodological question of how AI ethicists can effectively determine which values AI should uphold. After reviewing the most influential methodologies, we detail an intuitionist research agenda that offers guidelines for aligning AI applications with a limited set of reliable moral intuitions, each underlying a refined cooperative view of AI. We discuss appropriate epistemic tools for collecting, filtering, and justifying moral intuitions with the aim of reducing cognitive and social biases. The proposed methodology facilitates broad collective participation in AI alignment while ensuring the reliability of the moral judgments considered.


Links

PhilArchive





Author Profiles

Dario Cecchini
North Carolina State University
Veljko Dubljević
North Carolina State University

