Abstract
Value alignment is essential for ensuring that AI systems act in ways consistent with human values. However, existing approaches, such as reinforcement learning from human feedback (RLHF) and constitutional AI, exhibit power asymmetries and lack transparency. These “authoritarian” approaches fail to adequately accommodate a broad range of human opinions, raising concerns about whose values are being prioritized. In response, we introduce the Dynamic Value Alignment approach, theoretically grounded in the principles of parallel constraint satisfaction, which models moral reasoning as a dynamic process that balances multiple value principles. Our approach also enhances users’ moral and epistemic agency by granting them greater control over the values that influence AI behavior. As a more user-centric, transparent, and participatory framework for AI ethics, our approach not only addresses the democratic deficits inherent in current practices but also allows AI systems to be flexibly aligned with a diverse array of human values.
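
To make the core mechanism concrete, the following is a minimal sketch of a parallel constraint satisfaction network over value principles: activations settle in parallel under mutually supporting and conflicting constraints, modulated by user-supplied emphasis. The value names, constraint weights, and update rule are illustrative assumptions for exposition, not the model specified in this paper.

```python
import numpy as np

# Illustrative value principles (hypothetical, not the paper's taxonomy).
values = ["honesty", "harm_avoidance", "autonomy", "fairness"]

# Symmetric constraint matrix: positive entries link mutually supporting
# principles, negative entries link conflicting ones (assumed numbers).
W = np.array([
    [ 0.0,  0.3, -0.2,  0.4],
    [ 0.3,  0.0,  0.5, -0.1],
    [-0.2,  0.5,  0.0,  0.2],
    [ 0.4, -0.1,  0.2,  0.0],
])

# External input, e.g. how strongly a user emphasizes each principle.
user_emphasis = np.array([0.6, 0.8, 0.3, 0.5])

def settle(W, external, steps=200, rate=0.1):
    """Update all activations in parallel until the network settles,
    balancing the constraints jointly rather than sequentially."""
    a = np.zeros(len(external))
    for _ in range(steps):
        net = W @ a + external          # constraint pressure + user input
        a = np.clip(a + rate * (net - a), 0.0, 1.0)  # bounded relaxation
    return a

# The settled activations can be read as the relative weight each value
# principle receives in guiding the system's behavior.
for name, weight in zip(values, settle(W, user_emphasis)):
    print(f"{name}: {weight:.2f}")
```

Because the user-emphasis vector is an explicit input, changing it re-settles the network and shifts the balance of principles, which is one way the greater user control described above could be operationalized.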