Abstract
Algorithms and so-called artificial intelligence are embedded within society and human lives, and the direction these fields take holds major implications for both social and technological systems. I use multiple case studies to highlight how "AI" as it currently exists fails to account for the needs, experiences, and material conditions of many modes of human life. Then, drawing from the perspectives and research of women, disabled people, trans communities, Black people, Indigenous people, queer folx, and other marginalized identities, I describe an interdisciplinary program that better foregrounds the lived experiential knowledge of marginalized people. Finally, I argue that in order to redress the very real material harms of "AI" systems, we must place the perspectives of marginalized communities front and center in conversations about them, to help us radically rethink founding assumptions about what said systems are for.