Abstract
This paper aims to provide a roadmap for governing AI. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology: advancing human flourishing. Advancing human flourishing in turn requires democratic and political stability as well as economic empowerment. To that end, we build on a new normative framework that gives humanity its best chance to reap the full benefits of AI while avoiding its dangers. This framework, “Power-Sharing Liberalism,” is a philosophy that restores protections of positive liberties to liberalism. As we deploy it here, it supports a more comprehensive (and, we contend, more accurate) understanding of both the risks and the opportunities that AI introduces. To show how Power-Sharing Liberalism can be applied to AI governance, we take four steps. First, we define central concepts in the field of AI governance, distinguishing among forms of technological harm and risk. Second, we review current normative frameworks from around the globe and argue that Power-Sharing Liberalism is a better fit for governing AI. Third, we walk through six governance tasks that any governance framework should accomplish and analyze each through the lens of Power-Sharing Liberalism. Based on that analysis, we make 17 recommendations for the governance of AI, including transformative investments in public goods, personnel, and the sustainability of democracy itself. Finally, we discuss concrete proposals for implementing those recommendations.