Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback

Proceedings of the Forty-First International Conference on Machine Learning (forthcoming)

Abstract

Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.
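To make the aggregation question concrete, here is a minimal sketch (not from the paper) of one classical social-choice rule, the Borda count, applied to annotators' rankings of candidate model outputs. The function name, input format, and example data are illustrative assumptions, not anything specified by the authors.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate annotators' rankings of model outputs via Borda count.

    Each ranking is a list of output IDs ordered from most to least
    preferred; an output in position i among m ranked outputs scores
    m - 1 - i points. Returns outputs sorted from highest to lowest
    total score (ties broken alphabetically).
    """
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, output in enumerate(ranking):
            scores[output] += m - 1 - position
    return sorted(scores, key=lambda o: (-scores[o], o))

# Three annotators rank the same three candidate outputs.
rankings = [
    ["A", "B", "C"],  # annotator 1
    ["A", "C", "B"],  # annotator 2
    ["B", "A", "C"],  # annotator 3
]
print(borda_aggregate(rankings))  # ['A', 'B', 'C']
```

Borda is only one of many possible rules, and classical results such as Arrow's impossibility theorem show that no aggregation rule satisfies every natural desideratum at once, which is part of why the paper argues that social choice theory is the right lens for these design decisions.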



Author Profiles

Eric Pacuit (University of Maryland, College Park)
Rachel Freedman (Oxford University)
Wesley H. Holliday (University of California, Berkeley)
and 4 more
