What's Wrong with Machine Bias

Ergo: An Open Access Journal of Philosophy 6 (2019)

Abstract

Data-driven, decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies’ judgments—which reproduce patterns of wrongful discrimination embedded in the historical datasets that they are trained on—are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt an answer.

Similar books and articles

Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
On Algorithmic Fairness in Medical Practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
What Makes Discrimination Wrong? Paul de Font-Reaulx - 2017 - Journal of Practical Ethics 5 (2):105-113.

Analytics

Added to PP
2019-09-27
Author's Profile

Clinton Castro
University of Wisconsin, Madison
