Democratizing value alignment: from authoritarian to democratic AI ethics

AI and Ethics (2024)

Abstract

Value alignment is essential for ensuring that AI systems act in ways that are consistent with human values. However, existing approaches, such as reinforcement learning from human feedback and constitutional AI, exhibit power asymmetries and lack transparency. These “authoritarian” approaches fail to adequately accommodate a broad array of human opinions, raising concerns about whose values are being prioritized. In response, we introduce the Dynamic Value Alignment approach, theoretically grounded in the principles of parallel constraint satisfaction, which models moral reasoning as a dynamic process that balances multiple value principles. Our approach also enhances users’ moral and epistemic agency by granting them greater control over the values that influence AI behavior. As a more user-centric, transparent, and participatory framework for AI ethics, our approach not only addresses the democratic deficits inherent in current practices but also ensures that AI systems are flexibly aligned with a diverse array of human values.




Author's Profile

Linus Huang
Hong Kong University of Science and Technology
