Dirty data labeled dirt cheap: epistemic injustice in machine learning systems

Ethics and Information Technology 25 (3):1-14 (2023)

Abstract

Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also frequently seem to present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their target variables reflect inaccurate and unjust proxies for what they claim to measure; (2) classification systems, such as facial recognition, commit hermeneutic injustice because their classification taxonomies, almost no matter how they are derived, reflect and perpetuate racial and other stereotypes; and (3) epistemic injustice better explains what is going wrong in these types of situations than does the more common focus on procedural (un)fairness.


Links

PhilArchive



Similar books and articles

Socially disruptive technologies and epistemic injustice. J. K. G. Hopster - 2024 - Ethics and Information Technology 26 (1):1-8.
Governing (ir)responsibilities for future military AI systems. Liselotte Polderman - 2023 - Ethics and Information Technology 25 (1):1-4.
The Ethics of AI in Human Resources. Evgeni Aizenberg & Matthew J. Dennis - 2022 - Ethics and Information Technology 24 (3):1-3.
Correction to: The Ethics of AI in Human Resources. Evgeni Aizenberg & Matthew J. Dennis - 2023 - Ethics and Information Technology 25 (1):1-1.
Testimonial injustice in medical machine learning. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):536-540.

Analytics

Added to PP
2023-07-08


Author's Profile

Gordon Hull
University of North Carolina, Charlotte
