Abstract
Dehumanization by algorithms raises important issues for business and society. Yet these issues remain poorly understood because the evolving dehumanization literature is fragmented across disciplines, spanning studies of colonialism, industrialization, post-colonialism, contemporary ethics, and technology. This article systematically reviews the literature on algorithms and dehumanization (n = 180 articles) and maps existing knowledge across several clusters that reveal the field's underlying characteristics. Based on the review, we find that algorithmic dehumanization is particularly problematic for human resource management and the future of work, managerial decision-making, consumer ethics, hard- and soft-law regulation, and basic values, including privacy and consumer rights. From this synthesis, we also derive the following definition of algorithmic dehumanization: the act of using algorithms and data in a way that results in the intentional or unintentional treatment of individuals and/or groups as less than fully human, thereby violating human rights, including liberty, equality, and dignity. Finally, we present a dehumanization avoidance model that serves as both a structured research agenda and a practical guide for addressing the challenges raised by algorithmic dehumanization. The model thus indicates promising pathways for future research at the intersection of AI and society and facilitates reflection on organizational processes that strengthen corporate capabilities and managerial decision-making so as to avoid algorithmic dehumanization.