Abstract
Artificial intelligence (“AI”) holds immense promise to revolutionize and enhance various facets of human society. This transformative potential, however, is juxtaposed against profound ethical and societal challenges. This paper investigates a range of inherent risks posed by AI and the attendant harms to individuals and society. Through an analysis of AI regulatory frameworks in the United States and the European Union, the paper proposes a joined-up approach to AI governance that places bioethical principles at the heart of AI regulation. This is, in part, because the prescriptive, process-based rules typical of current regulatory approaches cannot meaningfully protect and promote human dignity, autonomy, integrity, justice, and well-being. Bioethical principles, which embody value determinations that most modern societies embrace as indispensable to realizing their conceptions of individual and “common good,” can adequately frame the objectives of AI regulations and set the normative goals for their subsequent interpretation, implementation, and enforcement, ensuring that AI technologies advance the societal good. This joined-up approach further facilitates a context-sensitive implementation of AI regulations, adeptly addressing unforeseen challenges and ethically complex scenarios that legal norms and regulations alone cannot anticipate.